Thursday, October 31, 2019

Literary theory (Althusser) Answering discussion questions Assignment

Literary theory (Althusser) Answering discussion questions - Assignment Example Indeed, in Reading Capital, he points out that the retrospection of the past is not ideology, but serves, in his words, “the legitimate epistemological primacy of the present over the past.” This served as one of the sources of his criticism of the French Communist Party, as a criticism of ideology on epistemological grounds will always sit uncomfortably with those emotionally invested in a given ideology. His criticism in the same work of “subjective and arbitrary ideologies” on Marxist grounds suggests that discourse within ideology was not his aim. Similarly, there is the question of “complete” ideology. In Althusserian criticism, no ideology is ever complete, and any attempt by an ideology to fully control or dominate a text will only end up exposing the limitations of that ideology. One classic example is Frank Capra’s “message” film It’s A Wonderful Life, in which George Bailey triumphs over the machinations of the evil Mr. Potter, except insofar as he doesn’t. The film ends with Potter still fully in control of Bedford Falls, having suffered not so much as a moment’s inconvenience while George was wrestling with suicide.

Tuesday, October 29, 2019

The Development and Use of the Six Markets Model Essay Example for Free

The Development and Use of the Six Markets Model Essay Introduction The idea that business organisations have a range of stakeholders other than shareholders is obvious. Yet stakeholder theory has not guided mainstream marketing practice to any great extent (Polonsky, 1995). To use the theory/practice distinction provided by Argyris and Schon (1978), it is a theory espoused rather more than a theory practiced in action. Research by Freeman and Reed (1983) traced the origins of the stakeholder concept to the Stanford Research Institute. They suggest an SRI internal document of 1963 is the earliest example of the term’s usage. This document included customers, shareowners, employees, suppliers, lenders and society in its list of stakeholders. The stakeholder concept has attracted considerable interest in the strategic management literature, especially since the publication of an influential text (Freeman, 1984) that contained a deceptively simple but broad definition of stakeholders (p. 46), namely: “. . . all of those groups and individuals that can affect, or are affected by, the accomplishment of organizational purpose”. An important dialogue on stakeholder theory has emerged over the past decade, especially in articles and contributions to the Academy of Management Review, starting with a critique from Donaldson and Preston (1995) that argued that three associated strands of theory might converge within a justifiable stakeholder theory, namely descriptive accuracy, instrumental power and normative validity. Stakeholder theory is clearly an important issue in strategy (e.g. Carroll, 1989; Donaldson and Preston, 1995; Harrison and St John, 1996; Useem, 1996; Campbell, 1997; Harrison and Freeman, 1999). However, within the strategy field there is not a great deal of agreement on the scope of stakeholder theory (Harrison and Freeman, 1999). In particular, there is still a debate regarding which constituent groups an organisation should consider as stakeholders.
For example, Argenti (1997) suggested an infinite number of potential groups, while Freeman (1984) argued that there is excessive breadth in the identification of stakeholders. Recently Polonsky et al. (2003) concluded that there are “no universally accepted definitions of stakeholder theory or even what constitutes a stakeholder” (p. 351). However, they see two rival perspectives: one where stakeholder intent means “improving corporate performance”, and another where it means “maximising social welfare and minimising the level of harm produced within the exchange process” (p. 351). While these aims may never be entirely reconciled in practice (Gioia, 1999), the dominant assumption that the pursuit of “profit” is for the shareholders effectively denies legitimacy to other claims to the meaning of profit as a “shared benefit”, or as a “shared good” (Smithee and Lee, 2004). Relationship-based approaches to marketing offer a reformist stakeholder agenda with an emphasis on stakeholder collaboration beyond the immediacy of market transactions. According to different authors, this involves creating exchanges of mutually beneficial value (Christopher et al., 2002), interactions within networks of relationships (Gummesson, 1999), or mutual commitment and trust that may or may not be achievable (Morgan and Hunt, 1994). Relating is connecting, and at its simplest level, a relationship is a state of being connected. A critical question arises: “With whom are you connected, and why?” Answering it requires judgments about particular relationships – and strategic value choices. This article explores the development, extension and use of the “six markets” stakeholder model (Christopher et al., 1991) and proposes a framework for analyzing stakeholder relationships and planning stakeholder strategy. The article is structured as follows. First, we review the role of stakeholders in relationship marketing.
Second, we discuss the development and refinement of the six markets model, and describe how the model has been operationalised and refined as a result of testing and experience in use with managers. Next, we discuss the development of a stakeholder relationship planning model that enables strategies to be developed for each stakeholder group. Finally, we discuss the managerial and research issues associated with stakeholder theory in marketing and review some future research opportunities. Our objective is to explain how a conceptual stakeholder model has practical application in marketing management and in this way make a contribution towards eliminating the current gap between stakeholder theories and marketing practice. Relationship marketing and the role of stakeholders Marketing interest in relationship-based strategic approaches has increased strongly over the last decade in line with expanding global markets, the ongoing deregulation of many industries and the application of new information and communication technologies. Nevertheless, practitioners and academics alike can overlook the fact that business and industrial relationships are of many kinds (Wilkinson and Young, 1994), and that an understanding of the value-generating processes is required (Anderson and Narus, 1999; Donaldson and O’Toole, 2002; Grönroos, 1997; Payne and Holt, 1999; Ravald and Grönroos, 1996; Tzokas and Saren, 1999; Wilson and Jantrania, 1994). Understanding the role of long-term relationships with both customer and other stakeholder groups has been largely neglected in the mainstream marketing literature but is acknowledged in the relationship marketing literature (e.g. Grönroos, 1994; Gummesson, 1995; Hennig-Thurau and Hansen, 2000; Håkansson, 1982; Möller, 1992, 1994; Parvatiyar and Sheth, 1997; Sheth and Parvatiyar, 1995).
Kotler (1992) has on occasion called for a broadening of marketing interests to take into account the relationships between an organisation and its publics. However, it is the relationship marketing literature in particular that has stressed the importance of stakeholder relationships (e.g. Christopher et al., 1991; Morgan and Hunt, 1994; Doyle, 1995; Gummesson, 1995; Buttle, 1999). Gummesson (2002b) has provided a comparison of four of the better known approaches to classifying multiple stakeholders, including Christopher et al. (1991), Kotler (1992), Morgan and Hunt (1994), and also Gummesson (1994). While the first three of these models are concerned with the relationships that an organisation has with its more traditional stakeholders, the approach of Gummesson (1994) goes beyond the focus of this article in that it includes criminal network relationships, para-social relationships and supranational mega-alliances. The Christopher et al. (1991) framework has six stakeholder market domains, each of which comprises a number of “sub-markets”, while that of Kotler (1992) identifies ten specific constituents. Morgan and Hunt (1994) suggest ten relationship exchanges with four partnership groups. Other models include the SCOPE model (Buttle, 1999) and a framework by Doyle (1995). The conceptual model and the related planning framework described in this article are the result of recursive research and development over a number of years. Our initial conceptual work on the model was later supplemented with learning from field-based interactions with marketing managers and other executives in order to further refine it and to develop the conceptual planning framework reported here. This follows what Gummesson (2002a) terms “interactive research”.
This research approach emphasizes that interaction and communication play a crucial part in research and that testing concepts, ideas and results through interaction with different target groups is an integral part of theory development and indeed the whole research process (Gummesson, 2002a, pp. 344-6). Managers’ observations and suggestions were found to be invaluable in developing and refining the model, supporting Gioia and Pitre’s (1990) proposals that multiple perspectives yield a more comprehensive view of organizational phenomena and that assumptions about the processes under enquiry can be modified by further consultation with informants. Research objectives and approach The objective of the research was to develop and refine the six markets model through testing its applicability in a wide range of organisational contexts. More specifically, we wished to develop a categorization scheme that enabled key constituent stakeholder groups within each market domain to be identified and classified, and to develop a stakeholder planning framework. This was motivated, in part, by managers in these companies who expressed the need for both a classification scheme and a planning framework. We have utilized a range of approaches over a number of years in our research to test and refine the six markets model and the planning framework and to gain field-based insights, including: 1) Piloting and testing the six markets model with an initial group of 15 UK organisations. The organisations in this sample were drawn from a range of sectors including manufacturing (two), financial services including banking and insurance (six), other services including retailing (six), professional services (two) and one not-for-profit industry association (the Royal Aeronautical Society). All were very large firms within their sector with the exception of the two professional services firms and the not-for-profit organization.
2) Using the model in substantive case studies on UK organisations in the following sectors: retailing (two), manufacturing (two), a global airline and a major conservation charity. 3) Using the framework as a planning tool in two major international banks (one a large British commercial and retail bank, the other a large French investment bank), chosen because they had challenging and complex stakeholder issues across many countries. A total of eight workshops were used to analyse stakeholder markets in four countries for the first bank, and six workshops in three countries for the second bank. 4) Working on projects with over 80 further organisations to evolve and test the planning framework. This involved working with groups of mid-career managers in the UK and Australia. Given the predominantly service-based economies of the developed countries in which this research was undertaken, the organisations selected included a high proportion from the services sector. While the earlier research primarily included large organizations in their sectors, this work also included a selection of medium-sized and smaller organisations. Overall, 65 per cent of the organisations were from the services sector, 20 per cent from manufacturing and 15 per cent from the not-for-profit sector. A wide diversity of organisations was used, including financial services companies, retailing and other services, manufacturing companies, a mobile telephony company, a major hotel chain, an insurance broker, a consulting firm, an airport authority, a university, a conference centre, a holiday company, a foreign languages teaching institute and a hospice. Our shared learning approach also draws on action research concepts suggested by Rapoport (1970), which aim at contributing to the practical concerns of people in a challenging situation – such as stakeholder management – and to the goals of research by collaboration within a mutually acceptable framework.
The revised six markets model (Christopher et al., 2002) is shown in Figure 1. The intent behind the model is to emphasise relationships between the organisation and all its stakeholder constituents in each of six “markets”. The key assumption is that organisations can only optimise relationships with customers if they understand and manage relationships with other relevant stakeholders. This model addresses the concern raised by Dill (1975) that some groups or parties may be involved in multiple role relationships. Any one constituent group, firm or individual may be classified within one or more of these market domains. For example, customers may play a role within the customer market (where the interaction is between a firm and its customers) and in the referral market (where the interaction is between an existing customer and a prospective customer). The six markets model provides a structure for managers in organisations to undertake a diagnostic review of the key market domains and stakeholders that may be important to them. As a result of this diagnosis, they will be able to identify a number of key constituents within the market domains that are strategically critical, or where unexpected opportunities emerge. Using and testing the model These six key market domains represent groups that can have a significant impact on an organisation’s marketplace effectiveness. Each “market” is made up of a number of key groups, segments, or participants. To test the applicability of the model we followed four steps: (1) identify key participants, or segments, within each of the market domains; (2) review expectations and needs of key participants; (3) review current and proposed level of emphasis in each market; and (4) formulate an appropriate relationship strategy. In this section we consider the first two steps. We worked with groups of managers to address these steps.
Typically, the group comprised three to six mid-career managers from a range of functional backgrounds. The process started with the examination and analysis of each market domain to identify the key groups of participants or market segments within each of them. We explored the expectations and needs of each of the identified stakeholder groups through a combination of approaches, including interviews and questionnaires and a review of key issues with senior management. In applying the revised six markets model above, we found that all the stakeholders we identified could be conveniently categorised into one of the six market domains. Initially the identification of the constituent groups within each market domain, for a given organization, was approached on a case-by-case basis. However, as our experience in using the model grew, the need for a more specific categorisation became apparent. This was prompted, in part, by research such as Lovelock’s (1995) work on classifying supplementary services. Developing and refining categorisation schemes for stakeholders was important because, as Emshoff and Freeman (1979) have noted, functionally based organizations typically place too much resource emphasis on highly visible stakeholders such as their customers, and too little emphasis on other special interest groups whose management falls outside specific functional boundaries. Identification of all relevant stakeholder groups should enhance their visibility and lead to their greater prominence within the organization – thus the company is more likely to address them as part of an integrated stakeholder strategy. Through the work in the companies referred to above, a categorisation scheme was developed and refined over time that assisted the identification of typical groups within each market domain. In summary, this categorisation of market domains identified the following constituents: 1) Customer markets are made up of buyers (e.g.
a wholesaler), intermediaries and final consumers. Each intermediary or member of the supply chain can then be further sub-divided according to the most relevant segmentation approach. 2) Referral markets comprise two main categories – customer and non-customer referral sources. The customer category includes advocacy referrals (or advocate-initiated customer referrals) and customer-base development (or company-initiated customer referrals). The wide range of non-customer referrals is divided into general referrals, reciprocal referrals, incentive-based referrals and staff referrals. 3) Supplier and alliance markets – suppliers provide physical resources to the business and can be classified into strategic suppliers, key suppliers, approved suppliers and nominated suppliers. Alliance partners supply competencies and capabilities that are typically knowledge-based rather than product-based, and Sheth’s (1994) classification of alliance, partnering transaction and co-operative relationships is especially useful here. 4) Influence markets have the most diverse range of constituent groups, including financial and investor groups, unions, industry bodies, regulatory bodies, business press and media, user and evaluator groups, environmental groups, political and government agencies, and competitors. 5) Recruitment markets comprise all potential employees together with the third parties that serve as access channels. They can be segmented by function, job role, geography and level of seniority. Channels include executive search companies, employment agencies, job centres, off-line and on-line advertising, and using an organisation’s own staff to suggest potential applicants. 6) Internal markets follow the segmentation used for potential employees in the recruitment market, i.e. by function, job role, geography and level of seniority. Special emphasis needs to be placed on behavioural characteristics for customer-facing employees.
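The categorisation scheme summarised above lends itself to a simple lookup structure. The sketch below is our own condensed encoding (the dictionary keys and group labels are shortened from the text, not an official taxonomy); it also shows how one constituent can fall into more than one domain, the multiple-role point raised earlier:

```python
# Condensed encoding of the six markets categorisation scheme.
# Group labels are abbreviated from the constituents listed above.
SIX_MARKETS = {
    "customer": ["buyers", "intermediaries", "final consumers"],
    "referral": ["advocacy referrals", "customer-base development",
                 "general referrals", "reciprocal referrals",
                 "incentive-based referrals", "staff referrals"],
    "supplier and alliance": ["strategic suppliers", "key suppliers",
                              "approved suppliers", "nominated suppliers",
                              "alliance partners"],
    "influence": ["financial and investor groups", "unions",
                  "industry bodies", "regulatory bodies",
                  "business press and media", "user and evaluator groups",
                  "environmental groups", "government agencies",
                  "competitors"],
    "recruitment": ["potential employees", "access channels"],
    "internal": ["employees"],
}

def classify(stakeholder, mapping):
    """Return every market domain that lists the stakeholder; a
    constituent may legitimately appear in more than one domain."""
    return [domain for domain, groups in mapping.items()
            if stakeholder in groups]
```

For instance, `classify("unions", SIX_MARKETS)` places unions in the influence market only, while a customer group added to both the customer and referral lists would come back with two domains.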
From this testing of the six market categories, we concluded that they provide a workable frame of reference for considering a broader range of constituent stakeholders, whether individuals, groups, or others whose interests have relevance to the enterprise. Further development of the model Having identified relevant stakeholders, the third step outlined above involved a review of the current and proposed level of emphasis on each market domain. Not all stakeholder markets require the same degree of attention and emphasis, and Gummesson (1994) has argued that managers need to prioritise and establish the appropriate mix of relationships needed for the company’s success. To identify the present level of emphasis and the future desired emphasis on each of the market domains and their constituent parts, we developed a stakeholder network map (Payne, 1995). This was used to identify an organisation’s present emphasis on each market, the desired emphasis at a future point in time, and the gap between these two positions. The network map configures each of the major market domains, including customer markets (which are sub-divided into existing and new customers), on a series of axes. It enables a group of managers within a firm to assess the current and desired levels of emphasis on each market domain by means of a jury of executive opinion, usually developed from inputs from one or more groups of senior managers within the organisation being examined. Although this work produced some initial variation of views amongst managers regarding present and desired emphasis, more detailed discussion generally resulted in a strong degree of consensus. The stakeholder network map has seven axes – two for customers (existing and new) and one for each of the other five relationship markets discussed earlier. The scale of 1 (low) to 10 (high) reflects the degree of emphasis (costs and effects) placed on each relationship market.
The division of customers into “new” and “existing” reflects the two critical tasks within the customer domain: customer attraction and customer retention. Figure 2 shows a network map for the Royal Society for the Protection of Birds (RSPB), a major British conservation charity. It shows the current emphasis (at the time of analysis) and the proposed new emphasis. At this point in time the RSPB might have considered a number of issues, such as: 1) placing greater attention on retaining existing members; 2) a reinforcement of customer care and service quality issues with internal staff; and 3) a stronger focus on influence markets (Payne, 2000). The analysis shown in Figure 2 represents the first level of diagnostic review of the overall emphasis at the market domain level, in order to make an initial judgement as to the existing and desired relative emphasis. A second level of analysis explores each market domain in much greater detail and enables analysis at the sub-segment or group level within the domains. For example, in the analysis of the referral market for a major international accounting firm we identified present and future desired emphasis on a number of groups within the referral market domain, including their clients, banks, joint venture candidates, their international practice and their audit practice. We have used the stakeholder network mapping technique in our research with many organisations. Although simple in concept, it has proved a robust means of considering the network of stakeholder relationships that organisations need to address. The diagrammatic representation has been especially useful in helping executives visualise the importance of various stakeholders. Further, the time dimension for the proposed relationship strategy, usually within a two- to three-year planning horizon, has been useful in determining the changes required in stakeholder emphasis.
This addresses the concern of Dill (1975) regarding the need to take the time dimension into account.
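The gap analysis behind the stakeholder network map can be sketched in a few lines. The seven axes and the 1 (low) to 10 (high) scale come from the text; the scores below are hypothetical illustrations, loosely echoing the RSPB issues mentioned above, not data from the study:

```python
def emphasis_gaps(current, desired):
    """Return each axis paired with its gap (desired minus current
    emphasis), sorted so the largest shifts come first."""
    gaps = {axis: desired[axis] - current[axis] for axis in current}
    return sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Seven axes: two for customers, one for each other market domain.
# Scores are illustrative, on the 1-10 emphasis scale.
current = {"existing customers": 5, "new customers": 8, "referral": 4,
           "supplier/alliance": 5, "influence": 3, "recruitment": 4,
           "internal": 4}
desired = {"existing customers": 8, "new customers": 8, "referral": 5,
           "supplier/alliance": 5, "influence": 7, "recruitment": 4,
           "internal": 7}

gaps = emphasis_gaps(current, desired)
```

With these hypothetical scores, the largest gap falls on the influence market, followed by existing customers and internal markets, which is the kind of prioritisation a jury of executive opinion would then debate.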

Sunday, October 27, 2019

Decision support systems

Decision support systems Abstract Nowadays, decision support systems play a significant role in almost all areas of life. These systems go further and use new technologies like data mining and knowledge and data discovery (KDD) to improve and facilitate human decision making. First, we provide some definitions concerning decision making, models and processes. Afterwards, we discuss knowledge and data discovery as well as intelligent decision support systems. Finally, as an empirical survey, we compare two different cultures in using decision making support systems. One uses a decision support system in a clinical environment to improve decision making and reduce crucial errors significantly, while the other uses a traditional system and relies on human memory and experience rather than on decision support systems. Keywords: Decision Making, Decision Support, Knowledge and Data Discovery (KDD), Intelligent Decision Support Systems Introduction Information systems play a significant role in supporting decision making, and in some environments, such as business, health and education, they become indispensable. Moreover, such systems go further and use data mining and knowledge and data discovery (KDD) techniques to improve their abilities in supporting decision making. One environment that needs information systems support for making crucial decisions, and that has a direct effect on human life, is the clinical and health environment. We are going to look at the effect of decision support systems in it. Decision Making Decisions and models Decision making is undeniably an essential and vital part of human life. A decision problem may consist of numerous smaller decisions inter-related together, and the results of multiple decisions can be consolidated together; or one decision can influence another subsequent one.
This influence can be fed as the input to a subsequent decision, or as a decisional choice for the users in determining which decision to make subsequently. This bigger decision, and the smaller decisions embedded within it, must be represented in a simple manner for decision makers to read, understand, and communicate with. Each decision can be represented in the form of a model that represents, describes and depicts the decision problem and its interactions under consideration, whether it is simply an abstraction schema, an actual model instance offering insights into the decisions rather than mere numbers, or an executable computer program module. Each decision model can be a permanent modeling scenario which can be retrieved and included as part of a bigger scenario. Alternatively, it can be a temporary modeling scenario that is aggregated or pipelined within a bigger scenario. Such model integration treatments are subject to the discretion of users at the time of making such decisions. Even though each of these decisions may have a direct or indirect bearing on other subsequent decisions and can easily influence the overall decision and conclusion, many decision making processes and systems treat these decisions as independent and unrelated. This obscures the users from seeing and discovering the true effects and influence of the decision problem and its interactions under consideration, whether they are interrelated and/or interdependent. The element of interdependence may not be discovered until the full picture can be seen and assessed. Even though many decisions do occur in a sequential fashion, there are also many decisions that occur in parallel, evolve over time and converge to a concluding decision, or eventually combine or are interwoven into a final decision. Therefore, the decision making process should neither be fixed nor predetermined beforehand, so that the execution order can be created as required.
Hence, modeling is an important process in understanding, capturing, representing, and solving these decision models, especially in terms of their interrelatedness across multiple models and their instances over a period of time. Furthermore, such models should ideally be able to capture functional, behavioral, organizational, and informational perspectives. Decision systems are intended to assist users in making a decision. There are several types of users involved in using decision systems, and these users progress as they develop more confidence: from inexperienced/naïve decision makers, to average decision makers/analysts, to experienced decision makers/modelers. Each type of user has different needs and should not be restricted by the constraints of any decision system that dictates the steps and techniques behind analyzing and solving a decision problem. Some users may need more decisional and/or system usage guidance while others may prefer to have minimal guidance. Some may wish the decision system to take care of the entire decision making process, including prescribing the order in which each set of data is requested as well as the order in which each decision model is executed; while others may wish to intervene to a greater extent in designing the entire decision making process and the execution order to suit, or to a lesser extent in specifying a particular solution method. There are a variety of reasons as to why human intervention is warranted and needed from the perspective of an experienced user. However, it is interesting to note that the type of guidance may have an adverse effect on decision model selection and ultimately the decision outcome. It is unreasonable and impractical to expect decision makers to operate a different decision making system for each decision and to comprehend the full effects of the consolidation and integration of these decisions.
A decision making process is not necessarily about concentrating on the decision itself, but should emphasize the ways in which decisions are made. Therefore, users should be able to choose an optimizing approach and solution as well as a satisfying approach and solution, and not be limited to only one approach and solution that is traditionally incorporated in decision systems. Due to the frequency and complexity of interrelated decisions, some users may recall an existing scenario as input to another scenario, or recall several existing scenarios for comparative purposes. Decision systems need to be built in a flexible way so that decision models and components can be easily assembled and/or integrated together to create new scenarios, and specific scenarios can be built and tailored to meet the needs of particular user groups. With all these issues in mind, the framework and architecture of an ideal decision system should have independent components that can be easily assembled and integrated together to form a decision scenario. They should be flexible enough to serve various types of users and accommodate various types of decision making processes. They should also be sufficiently versatile to handle decision problems regardless of the paradigms and/or domains under consideration. Good decision making frameworks must therefore be in place for the system framework and architecture to exhibit modeling flexibility, component independence, and versatility in domain and/or paradigm. To overcome the issues and fulfill the requirements discussed above, we first propose a converging decision analysis process, an optimizing-satisfying decision model, and a cyclical modeling lifecycle. Normative decision making processes Decisions can evolve and converge into a concluding decision over time. This can occur within the re-evaluation of a decision problem, or in evaluating across multiple decision problems that are similar.
This iterative decision making process is known as the convergence process. As decisions evolve and are refined over time, decision makers are able to concentrate on essential factors and eliminate nonessential ones in order to narrow down the scope of the decision problem. Such an attention-focused method provides a cut-down version of the problem. A decision is subsequently made from the remaining factors of the reduced problem. Such a decision-focused method provides an actionable result from the given problem. Since there can be many decisions within a decision problem, several iterations of attention-focused and decision-focused methods are applied while intermediate decisions within the decision problem are made and converged. Such revision and refinement occur irrespective of paradigms and domains. This notion of applying the attention-focused and decision-focused methods within a convergent decision making process is depicted in Figure 1. Figure 1. Converging decision analysis, as in a 1D-CSP scenario The One-Dimensional Cutting Stock Problem (1D-CSP) was used for illustrative purposes in order to design and implement the proposed framework and architecture. 1D-CSP is about cutting strips of raw material into desired sizes according to customer order widths. We often do not have unlimited supplies of raw materials and would therefore need to formulate and decide on which cutting patterns are used. 1D-CSP is a resource management problem with a traditional goal of minimizing wastage. Besides wastage, there may be other objectives that must be considered: for example, minimizing machine setups through the changing of cutting knives, minimizing machine setups through reducing the number of cutting patterns used, or minimizing the number of disruptions in the sequence of cutting patterns used.
Even though 1D-CSP is considered a simple problem in pure mathematical terms, it becomes a reasonably complex decision problem once one considers all the real-world constraints and objectives, and the interrelated decisions involved in its decision making process. The 1D-CSP can be used to illustrate the converging decision analysis process, as depicted in Figure 1. The first decision is a pattern generation heuristic that generates combinations of cutting patterns; this decision concentrates only on generating those cutting patterns relevant to the decision problem under consideration (an attention-focused method). The second decision determines which of the generated cutting patterns should be retained or discarded (a decision-focused method). This can be based on specific rules, such as an allowable number of cutting knives per pattern, or on the decision maker's personal experience of whether certain cutting patterns should be discarded. The third decision is the creation of linear programming constraints that identify the feasible area of the problem under consideration (an attention-focused method), while the fourth decision is finding an optimal point within that feasible area (a decision-focused method). Neither type of focused method necessarily has to produce an optimizing or a satisfying solution; it is entirely up to the decision maker to decide what sort of solution is desired at the time. Each decision and solution can be encompassed within a decision model that consists of both an optimizing model and a satisfying model, as depicted in Figure 2. In a decision problem consisting of multiple interrelated decisions, the result from one model may be fed into another model continuously until an ultimate result is reached, and the result from a model can take on a different solution option.
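The idea of feeding the result of one decision model into the next, with the option of returning to an earlier model for rework, can be sketched as a simple chain runner. The stage interface below (each stage returns a result and an ok flag) is a hypothetical simplification, not the paper's actual design.

```python
def run_decision_chain(stages, initial, max_reworks=5):
    """Feed the result of one decision model into the next.
    Each stage is a callable returning (result, ok); ok=False sends
    control back to the previous stage for additional processing."""
    result, i, reworks = initial, 0, 0
    while i < len(stages) and reworks <= max_reworks:
        result, ok = stages[i](result)
        if ok:
            i += 1             # feed forward to the next model
        else:
            i = max(i - 1, 0)  # return to the previous model
            reworks += 1
    return result

# Toy stages: a satisfying step that trims candidates, then an
# optimizing step that picks the best remaining one.
satisfy = lambda xs: ([x for x in xs if x % 2 == 0], True)
optimize = lambda xs: (max(xs), True)
best = run_decision_chain([satisfy, optimize], [3, 8, 5, 12, 7])
print(best)
```

The satisfying stage narrows attention to acceptable candidates; the optimizing stage then selects among them, mirroring the alternation of focused methods described above.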
Each decision model may return to itself for refinement, return to the previous model for additional processing, or feed into the next model for further processing. Such a return may be due to an infeasible solution, or to a better understanding of the model that eventually leads to a change in its parameters. The 1D-CSP can be used to illustrate the optimizing-satisfying decision model, as depicted in Figure 2. The first decision model, the pattern generation heuristic, is a satisfying model that produces only those cutting patterns that are relevant and desirable for the decision problem under consideration. The second decision model is also a satisfying model, selecting or deselecting among the cutting patterns already produced. The third and fourth decision models are optimizing models that optimize using linear programming's simplex method.

Figure 2. Optimizing-satisfying decision model

Decision modeling lifecycle

Simon's approach to the decision making process in terms of intelligence, design, and choice is very decision-oriented. However, as Glob has suggested, it is also about the way in which we model the decision. Therefore, we propose to integrate Simon's proposal with the modeling proposals of MS/OR, which attempt to support every phase and aspect of the decision and modeling lifecycle. Such a design approach is crucial to supporting the modeling and decision environments and ensuring that non-predetermined decision making processes and interrelated decisions can be modeled. The proposed modeling process is cyclical and iterative, and enables continuous adjustment and refinement, especially in storing and retrieving decision problems as decision scenarios, as summarized in Figure 3. Although the modeling lifecycle progresses step by step in a cycle, it can return to any earlier step, not just the previous one, and can skip steps in later iterations if it has already been through them.
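A minimal sketch of such a cyclical, iterative modeling process might look as follows. The step names (formulate, instantiate, solve, validate) and the feedback convention are illustrative assumptions, not the system's actual API.

```python
def modeling_lifecycle(formulate, instantiate, solve, validate, max_iters=10):
    """Cyclical modeling lifecycle sketch: formulate -> instantiate ->
    solve -> validate, looping back to formulation whenever validation
    rejects the derived solution."""
    model = formulate(None)
    for _ in range(max_iters):
        instance = instantiate(model)   # instantiate the model with data
        solution = solve(instance)      # integrate with a solver and execute
        if validate(solution):          # review and validate the solution
            return solution
        model = formulate(solution)     # reformulate using the feedback
    return None

# Toy run: keep doubling a parameter until the solution passes review.
sol = modeling_lifecycle(
    formulate=lambda feedback: 1 if feedback is None else feedback,
    instantiate=lambda m: m * 2,
    solve=lambda inst: inst,
    validate=lambda s: s >= 16,
)
print(sol)
```

Each pass through the loop corresponds to one cycle of the lifecycle; an unsatisfactory solution feeds back into reformulation rather than terminating the process.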
These possible movements are difficult to represent visually and are therefore not illustrated in Figure 3. The lifecycle is valuable not only for modeling the decision itself but especially for highlighting the role of the system components of the decision, whether data, model, solver, or scenario. Once a problem is understood, it can be represented in the form of a model, which is then instantiated with data and integrated with solvers so that it can be executed. Such a model is especially beneficial if it can be stored and retrieved for later use and comparison. Once a model is represented, a solution can be derived through analyzing, investigating, and comparing various model instances. The derived solution is then reviewed and validated; if it is considered unsatisfactory, that information is used to modify and reformulate the decision model.

Figure 3. Cyclical modeling lifecycle

Even though the decision system will progress through the entire modeling lifecycle in producing the end result, it is important to note that not all users will execute all the steps. Depending on their competencies and permissions, decision makers may interact with only certain steps in the modeling lifecycle. For example, an inexperienced decision maker may interact with only step 2; an average decision maker may interact with steps 2, 3 and 4; whereas an experienced decision maker may interact with all 6 steps, as shown and contrasted in Figure 4. This decision modeling lifecycle provides a sound basis for the decision support and modeling framework and architecture. Figure 4.
Interaction between 3 types of user groups and the modeling lifecycle

Intelligent Decision Support Systems

While IDSS (Intelligent Decision Support Systems) have been receiving increasing attention from the DSS research community by incorporating knowledge-based techniques to provide intelligent and active behavior, the state-of-the-art IDSS architecture provides little support for incorporating novel technologies that serve useful DSS information, such as results from the knowledge discovery and data mining (KDD) community.

Data Mining and Knowledge Discovery

In recent years, the terms knowledge discovery and data mining (commonly referred to together as KDD) have been used synonymously. Both refer to the area of research that draws upon data mining methods from pattern recognition (Tuzhilin, 1993), machine learning (Han et al., 1992) and database techniques (Agrawal et al., 1993, 1994) in the context of vast organizational databases. Conceptually, KDD refers to a multi-step process that can be highly interactive and iterative (Fayyad & Uthurusamy, 1995): the selection, cleaning, transformation and projection of data; mining the data to extract patterns and appropriate models; evaluating and interpreting the extracted patterns to decide what constitutes "knowledge"; consolidating the knowledge and resolving conflicts with previously extracted knowledge; and making the knowledge available for use by interested elements within the system. A number of KDD systems are similar to IADSS data miner agents in spirit and in technique. Such work in designing and implementing practical KDD systems is crucial to our research, in the sense that its results provide solid, pragmatic KDD technologies ready to be integrated into our IADSS architecture. However, the current state of using KDD techniques for decision support remains in its infancy, with preliminary applications that use KDD techniques exclusively.
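The multi-step KDD process just listed (selection and cleaning, mining, evaluation, consolidation) can be sketched as a small pipeline. The miner and evaluation functions below are toy stand-ins for real data mining methods.

```python
from collections import Counter

def kdd_pipeline(records, mine, is_knowledge, knowledge_base):
    """Minimal KDD process sketch: select/clean the data, mine it for
    patterns, evaluate which patterns count as knowledge, and
    consolidate the survivors into the knowledge base."""
    cleaned = [r for r in records if r is not None]       # selection/cleaning
    patterns = mine(cleaned)                              # data mining step
    knowledge = [p for p in patterns if is_knowledge(p)]  # evaluation
    for k in knowledge:                                   # consolidation
        if k not in knowledge_base:
            knowledge_base.append(k)
    return knowledge_base

# Toy miner: report (value, count) for values that occur at least twice.
kb = kdd_pipeline(
    records=[3, None, 5, 3, 5, 5, None, 8],
    mine=lambda data: list(Counter(data).items()),
    is_knowledge=lambda p: p[1] >= 2,
    knowledge_base=[],
)
print(kb)
```

A real system would replace the cleaning and mining lambdas with proper preprocessing and pattern extraction, but the staged structure and the conflict check against the existing knowledge base carry over.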
It is our view that such isolated applications have limited scope and capabilities, while future KDD techniques will play an integral role in complex business systems that incorporate a wide range of technologies, including intelligent agents, multimedia and hypermedia, distributed systems and computer networks such as the internet, and many others. From a DSS perspective, a simple DSS architecture consisting of a single decision maker with single-information-source knowledge discovery functionality lacks the ability to deal with complex situations in which multiple decision makers or multiple information sources are involved. Most existing DSSs with data mining and knowledge discovery capability fall into this category.

Intelligent Agents

The concept of intelligent agents is rapidly becoming an important area of research (Bhargava & Branley, 1995; Etzioni & Weld, 1994; Khoong, 1995). Informally, intelligent agents can be seen as software agents with intelligent behavior; that is, they are a combination of software agents and intelligent systems. Formally, the term agent denotes a software-based computer system with the following properties (Wooldridge & Jennings, 1995):

Autonomy: agents operate without the direct intervention of humans.
Co-operation: agents co-operate with other agents towards the achievement of certain objectives.
Reactivity: agents perceive their environment and respond in a timely fashion to changes that occur.
Pro-activity: agents do not simply act in response to their environment; they are able to exhibit goal-directed behavior by taking the initiative.
Mobility: agents are able to travel through computer networks. An agent on one computer may create another agent on another computer for execution. Agents may also move from computer to computer during execution, carrying accumulated knowledge and data with them.
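A minimal class sketch can make these properties concrete. The names and methods below are hypothetical illustrations, and mobility is deliberately left out since it requires network infrastructure.

```python
class Agent:
    """Hypothetical skeleton of an intelligent agent, illustrating
    reactivity (perceive), co-operation (send) and autonomy plus
    pro-activity (goal-directed step); mobility is omitted."""

    def __init__(self, name, goal):
        self.name, self.goal, self.inbox = name, goal, []

    def perceive(self, event):
        # reactivity: sense a change in the environment
        self.inbox.append(event)

    def send(self, peer, message):
        # co-operation: work with other agents towards shared objectives
        peer.perceive((self.name, message))

    def step(self):
        # autonomy/pro-activity: act without human intervention, falling
        # back to goal-directed behavior when no events are pending
        return self.inbox.pop(0) if self.inbox else self.goal()

miner = Agent("miner", lambda: "mine-data")
assistant = Agent("assistant", lambda: "await-user")
miner.send(assistant, "pattern-found")
first, second = assistant.step(), assistant.step()
print(first, second)
```

The first step reacts to the received event; the second, with an empty inbox, falls back to the agent's own goal, which is the reactive/pro-active distinction in miniature.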
Furthermore, there has been rapid growth in attention paid to developing and deploying intelligent agent-based systems to tackle real-world problems by taking advantage of the intelligent, autonomous and active nature of this technology (Wang & Wang, 1996).

Intelligent Decision Support Systems

Intelligent decision support systems (Chi & Turban, 1995; Holtzman, 1989), incorporating knowledge-based methodology, are designed to aid the decision-making process through a set of recommendations reflecting domain expertise. Clearly, the knowledge-based methodology provides useful features for applying domain knowledge in decision making. However, the knowledge stored in the knowledge bases is highly domain-oriented, and relatively small changes in the problem domain require extensive intervention by the expert. Powerful information communication channels, such as the internet (information superhighway), are continuously changing the decision making process. When decision makers make decisions, they rely not only on brittle domain knowledge but also on other relevant information from all over the world. As a result, the challenge of discovering new knowledge and incorporating it with existing knowledge requires us to introduce new techniques (such as intelligent agents and knowledge discovery) into DSSs. Research into IDSS includes the work of Rao et al. (1994), who presented an intelligent decision support system architecture, IDSS, that stresses active involvement of computer systems in decision making; the work by Sycara at CMU's LEI (Laboratory for Enterprise Integration) on PERSUADER (Sycara, 1993), which incorporates machine learning for intelligent support of conflict resolution; and the work on NEST by Fox and Shaw (Shaw & Fox, 1993), which incorporates distributed artificial intelligence (DAI) with group decision support systems.
The proposed IDSS architecture is similar in substance to our proposed IADSS, which incorporates distributed artificial intelligence and the principles of co-operative distributed problem solving in the decision-making process. However, as we have pointed out above, it is necessary to incorporate data mining technology, which extracts important information from vast amounts of organizational data sources in order to provide additional information that may be crucial for the decision-making process.

IADSS architectural configuration

As pointed out in the introduction, numerous obstacles remain to be overcome in today's DSSs to fully achieve the vision of IADSS. The integration of intelligent agents with DSSs can address most, if not all, of the articulated issues. However, even within an intelligent agent-based architecture, there are two different forms (or configurations) of the decision-making process that the architecture can support: single decision maker with multiple miners, and multiple decision makers with multiple miners.

Single Decision Maker-Multiple Miner DSS Processes

We argued in the previous section that one possible configuration of the IADSS architecture, the single decision maker-single miner form, has severe limitations in extendibility and in the ability to be integrated into an overall organizational decision support framework. In many real-life cases, however, the single decision maker situation is still important. In today's organization, there may exist a myriad of organizational information sources in which useful data relationships and patterns may be discovered to support the singular decision maker's decision process. As a result, the IADSS configuration of a single decision maker with multiple data miners warrants attention and analysis.
Under IADSS, the architecture of such a single decision maker, multiple knowledge miner-assisted DSS is shown in Figure 5.

Figure 5. Multi-agent-based DSS

Figure 6. A multi-agent-based GDSS

Three classes of intelligent agents (which we call decision support agents, or DS agents) are contained within this architecture: knowledge miners, which discover hidden data relations in information sources; user assistants, which act as the intelligent interface agents between the decision maker and the IADSS; and a knowledge manager with repository support, which provides system co-ordination and facilitates knowledge communication. Further details about the functionality and internal structure of each type of agent are elaborated in the next section.

Multiple Decision Maker-Multiple Miner-Assisted GDSS Process

The single decision maker configuration discussed above can easily be extended into a group decision support system (GDSS) architecture (as seen in Figure 6) by the introduction of an additional user assistant for each additional decision maker. Compared to the single decision maker configuration in Figure 5, each user assistant agent is further augmented to support group-based communication between different decision makers. It is important to observe that with the introduction of each additional DS agent, only one extra knowledge communication channel, between the new DS agent and the knowledge manager, is needed. This yields a manageable linear increase in the number of knowledge communication links as the number of agents in the IADSS system grows, rather than the quadratic increase in the number of direct links in a direct agent-to-agent configuration. Furthermore, our proposed IADSS is an open architecture with the potential to integrate future technologies by incorporating additional classes of intelligent agents.
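The claimed linear versus quadratic growth in communication links is easy to check with a quick calculation: a hub topology with a knowledge manager needs one channel per agent, whereas direct agent-to-agent communication needs one channel per pair.

```python
def hub_links(n_agents):
    """Knowledge-manager (hub) topology: one channel per agent."""
    return n_agents

def direct_links(n_agents):
    """Direct agent-to-agent topology: one channel per pair,
    i.e. n * (n - 1) / 2."""
    return n_agents * (n_agents - 1) // 2

for n in (3, 5, 10, 50):
    print(n, hub_links(n), direct_links(n))
```

At 50 agents the hub design needs 50 channels against 1,225 for full pairwise communication, which is the scalability argument behind routing everything through the knowledge manager.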
IADSS architecture at a glance

Intelligent Decision Support Agents

As described above, there are three types of intelligent agents in an IADSS system: knowledge miners, user assistants and knowledge managers. This section provides a more detailed description of these agents and their internal architectures.

Knowledge Miners. The role of knowledge miners in IADSS is to actively discover patterns or models about a particular topic that provide support in the decision-making process. There are four components in a knowledge miner. The IADSS interface component manages the communication between the miner and the knowledge manager. When a knowledge miner receives messages represented in the common format, the IADSS interface translates them into the local format based on the common vocabulary. Conversely, when the knowledge miner sends messages out, the IADSS interface first translates them into the common format, then sends them to the knowledge manager. To carry out the mining task, the necessary control knowledge as well as domain knowledge is kept in the knowledge base component, while the data interface component serves as a gateway to external information sources. Knowledge discovery is usually done by discovering special patterns in the data, i.e. by clustering together data that share certain common properties. For instance, a knowledge miner may find that, within this week, a number of stocks are going up. There are two different types of knowledge mining agents: event-driven knowledge miners and task-driven knowledge miners. Event-driven knowledge miners are agents that are invisible to the decision makers, and their results may contribute towards the decision-making process. Based on the specification of the IADSS, such event-driven knowledge miners start when the IADSS starts up. When a particular event occurs, an agent starts its knowledge mining. Events may be temporal events, e.g.
every day at 1 a.m., every hour, etc. Or, events may be constraint-triggered events, e.g. every 10,000 customers, or when a certain type of customer reaches 10%, etc. Usually, such event-driven knowledge miners work periodically: they follow a sleep-work-sleep-work cycle and are destroyed when the entire IADSS system terminates. Task-driven knowledge miners, on the other hand, are created for particular data mining tasks based on requests originated by the decision makers. After a knowledge miner completes its task, it sends the mining results to the knowledge manager and is then terminated automatically. From the viewpoint of decision support, knowledge miners play the role of information extractors that discover hidden relationships, dependencies and patterns within the database; whether the information is discovered by an event-driven or a task-driven knowledge miner, it may be utilized as evidence by decision makers within the GDM process.

User Assistants. Interaction between a particular decision maker and the IADSS is accomplished through a user assistant agent. The architecture of a user assistant contains four components. The multimedia user interface component manages interactions with the decision maker, such as accepting requests for a task-driven knowledge miner, while the IADSS interface manages knowledge communication with the knowledge manager. The necessary knowledge, such as the common vocabulary and decision history, is kept in a local knowledge base component. All three components are controlled by an operational component that provides facilities for differencing, multimedia presentation and collaboration. With regard to its role in the decision process, the user assistant enables the decision maker to view the current state of the decision process and to convey his or her own opinions and arguments to the rest of the decision making group.
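A deterministic simulation of an event-driven miner's sleep-work cycle might look as follows. The constraint-triggered event here (one mining pass per fixed number of arriving records) and the toy mining function are illustrative assumptions, not the system's actual trigger mechanism.

```python
def event_driven_miner(stream, trigger_every, mine):
    """Simulated event-driven knowledge miner: it 'sleeps' while
    records accumulate and 'works' (runs one mining pass over the
    data seen so far) each time a constraint-triggered event fires."""
    results, buffer = [], []
    for record in stream:
        buffer.append(record)                  # sleep phase: just collect
        if len(buffer) % trigger_every == 0:   # constraint event fires
            results.append(mine(list(buffer))) # work phase: one mining pass
    return results

runs = event_driven_miner(
    stream=range(1, 7),
    trigger_every=3,
    mine=lambda data: sum(data),  # toy pattern: a running total
)
print(runs)
```

A task-driven miner would instead be constructed on demand, run its single mining pass, report to the knowledge manager, and terminate.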
It also enables the decision maker to issue requests for task-driven knowledge miners to attempt to discover particular types of organizational knowledge from business data. The user assistant then relays the request to the knowledge manager and interprets the mining result if it is deemed appropriate.

Knowledge Manager. The knowledge manager provides management and co-ordination control functions over all the agents in the IADSS architecture. The internal component-wide architecture of the knowledge manager contains four components: the decision maker interface, the operational facilities, the miner interface, and the agent knowledge base that provides support for localized reasoning. From a functional standpoint, the knowledge manager provides the following functionality in the IADSS architecture: it makes decisions concerning the creation and termination of knowledge miners, as provided by the miner interface component; it mediates requests from user assistants through the decision maker interface, analyzes these requests using the localized knowledge and inference engine, and then initiates an appropriate group of task-driven knowledge miners to collaboratively perform the requested task through the miner interface; it mediates the discovered knowledge from knowledge miners (whether event-driven or task-driven), stores the knowledge in the repository for possible future use, and forwards relevant knowledge to interested decision makers through the decision maker interface; and it manages and co-ordinates the knowledge transactions with each individual decision support agent, such as common vocabulary, available decision topics, existing mining results and strategic knowledge, as provided by the operational facilities component.
It also manages the synchronization between the collection of decision support agents, such as the progress of task-driven knowledge miners and the notification of decision makers when crucial knowledge is discovered, and it mediates all other types of communication among decision support agents, including communication among user assistants, and supports the retrieval of appropriate evidence from the repository by user assistants. In terms of the decision support process, the knowledge manager plays the role of manager and mediator: between decision makers, between a decision maker and the corresponding task-driven miners, and between all decision support agents and the repository, to address the issue of knowledge sharing.

Current prescription process at the hospital

The prescription process is shown in Figure 7. This description is based on interviews (questions 1-3 in the interview guideline, Appendix A) and observations by the first author.

Figure 7. Current prescription process in the Ekbatan and Boras Hospital (UML activity diagram)

The process starts as the physician in charge takes the patient's history, performs physical examinations, and reviews available medical documents, including progress notes, laboratory findings, and imaging. These data sources guide the physician(s) to a set of differential diagnoses or a definitive diagnosis, which helps the prescriber(s) select appropriate treatment for the patient. The prescriber will then register medical records

Friday, October 25, 2019

Macbeth And Lady Macbeth Switch Roles

Throughout the play "Macbeth", two of the main characters, Macbeth and Lady Macbeth, gradually exchange roles. Macbeth is the kind, caring one of the two in the beginning, but completely changes as the play goes on, as does Lady Macbeth. She starts out as an evil, vicious beast, a woman bound and determined to kill Duncan. At the end of the play she feels guilt for what she has done and has taken on the personality that was her husband's in the beginning.

At the beginning of the play Lady Macbeth speaks and shows how cruel and heartless she really is: "And fill me from the crown to the toe top-full of direst cruelty". This shows she has no good in her whatsoever. Macbeth, on the other hand, begins as a good, respectable character. When Lady Macbeth speaks of killing Duncan, he gives many reasons why he could not do so. Some of the reasons he gives in that speech are that Duncan respects him and trusts him. Duncan is also related to him by blood, and if he were to kill him he would never be able to rid himself of the guilt, or wash the blood from his hands.

At the climax of the play Macbeth makes plans to kill Banquo without Lady Macbeth, without anyone. This is a turning point, because up until now Macbeth was a respectable man who didn't feel the need to kill for the crown. But suddenly he decides he is going to go against everything he has believed in until now.

As the play comes to an end, Macbeth has gone mad. He kills Macduff's whole family, all the children, and even the young, innocent babies. He loses control and doesn't care about anyone or anything. He is now pure evil.

Lady Macbeth has now realized her wrongdoings. She realizes how cold and dark she once was. She now wants to carry a candle with her at all times, to have the light with her always. She tries to get the stench of blood off her hands, but is unsuccessful. The guilt of murdering Duncan eats away at her.

Thursday, October 24, 2019

Albert Pujols Bio

Jose Alberto Pujols Alcantara was born on January 16, 1980. He was born and raised in the Dominican Republic, brought up by his grandmother. At a young age he wanted to follow in his father's footsteps and become a great baseball player like him; he had a dream to play in the majors. In 1996 his family immigrated to New York City. Pujols attended Fort Osage High School as a sophomore. In his first year at Fort Osage his batting average was over .500 and he hit 11 home runs, and he received All-State honors. In his junior year of high school, having played only one season of high school baseball, he started to attract the attention of pro scouts. That year other teams avoided pitching to him as much as they could; with 55 walks in 88 at bats, he still hit 8 home runs. The pro scouts advised him to leave high school and find a college that could get him better exposure. Pujols played in the All-Star game for high schoolers, where he drew the attention of Maple Woods Community College coach Marty Kilgore, who recruited the 18-year-old star. His main priority was to increase his stock in the upcoming draft of 2000. In his college debut he did amazing things: he started at shortstop and batted .461. He hit a grand slam off future all-star Mark Buehrle in the regular season. He also turned an unassisted triple play, the rarest play in baseball, in which one player turns a triple play by himself without the help of the other players. For his freshman year of college he hit 22 home runs with 80 RBIs. During the Junior College World Series the scouting report on Albert Pujols said it was better to put him on base than to pitch to him. Even though teams no longer pitched to him, the major league clubs had seen enough. Among the teams watching him was the St. Louis Cardinals, who had been following the hard-hitting infielder the closest of all.
The Cardinals selected Pujols in the 13th round of the draft. They offered him a $10,000 signing bonus, but he turned it down and decided to play in the Jayhawk League, where he joined the Hays Larks. It was four hours away from where he was living, so he moved in with his manager and his wife. In 55 games he topped the Larks in home runs and in batting average. At the end of the summer the Cardinals finally started to appreciate Pujols and offered him $60,000, which he accepted. During the fall ball season he started to learn a new position, third base. In the winter he returned home and married his wife Deidre, who already had a child named Isabella; from that moment they were never separated. In 2000 he was assigned to the Peoria Chiefs, a Class A league team, and his new wife and Isabella followed him. With the Chiefs he played third base, and he was named the circuit's top defensive man at the hot corner, with the best infield arm. During that season seven no-hitters were thrown. Even so, Pujols finished second in the league with a .324 batting average, and added 32 doubles, 17 home runs and 84 RBIs. He struck out only 37 times in just under 400 at bats. The Peoria Chiefs finished under .500, but Pujols was named league MVP. After that he made his way through the Cardinals' farm teams. He earned a promotion to the Potomac Cannons, then an affiliate of the Cardinals in the Carolina League. After a strong month by Pujols at the Double-A level, the St. Louis brass wanted to see him against Triple-A talent. He was promoted again, to the Memphis Redbirds, who were preparing for the Pacific Coast League playoffs. In seven games, Albert hit .367 with two home runs, as Memphis nipped the Albuquerque Dukes to advance to the PCL championship series. The Redbirds faced the Salt Lake Buzz, a Minnesota Twins farm team, and defeated them for the PCL crown. Albert was named the league's postseason MVP.
With injuries on the Cardinals, they were able to keep Pujols. To his surprise he found himself in the lineup against the Colorado Rockies, playing left field. In three at bats he managed to get one hit. The next game they were on the road: the Cards traveled to Arizona, where Pujols destroyed the Diamondbacks with a home run, three doubles and eight RBIs in three games. Included in his offensive barrage was a ringing two-run double off Randy Johnson. In 2003 he injured his elbow, which kept him from making long throws. He ended the season batting .359 with 51 doubles, 43 home runs and 124 RBIs. He struck out just 65 times in close to 700 plate appearances. In 2005 he was put on the disabled list and missed 15 games. He started playing first base in the All-Star game and has been playing first base for the Cards since then. His batting average is .269 this year, and he has hit 7 home runs.

Tuesday, October 22, 2019

Annotated Bibliography

Thomas Aguiar
WRT391
11/18/2012

Al-Fadili, M. Hussain, & Singuh, Madlu. (2010). Unequal Moving to Being Equal: Impact of "No Child Left Behind" in the Mississippi Delta. (91), pp. 18-32.
This article looks at 3 specific elementary schools, tracking their achievement level index in the Mississippi Delta from 2003 to 2007. The authors analyzed the teachers of these schools and looked at what is needed to make the NCLB work. Upon further research, the authors have written a plethora of scholarly articles, many concerning education; furthermore, the data published in this article is very clear and informative. Although this article is based on a very small sample group, it gives a look at the educator's point of view on how to make the NCLB work better. Also, the data was collected very recently. Again, given the very small sample size, I would conclude that it is biased toward these three specific schools' needs, but they do represent a larger population of lower income schools all across America. This will not be a main source for my research, but it will be useful in that the NCLB is criticized for hurting smaller, low income school systems, which is exactly what this article examines.

Dee, Thomas S., & Jacob, Brian A. (2010). The Impact of No Child Left Behind on Students, Teachers, and Schools. Brookings Papers on Economic Activity, (2), pp. 149-207.
This article studies how the NCLB act has changed accountability in our school systems with new testing. Furthermore, their studies indicate that at lower grades we are finding gains, but at higher grades there are little to no gains. Both Dee and Jacob are affiliated with major universities, making this article both scholarly and relevant. With over 5 pages of graphs and other forms of research, this article is broadly based, and the statements made have sufficient research to back them up.
Because of the recent data that this article provides, I will use its studies as a major resource on the NCLB act and testing in general.

Hoikkala, T., Rahkonen, O., Tigerstedt, C., & Tuormaa, J. (1987). Wait a Minute, Mr Postman! Some Critical Remarks on Neil Postman's Childhood Theory. Acta Sociologica, (30), 1, pp. 87-99.
In this critique the authors assess Neil Postman's views and theories on how children learn in a technologically driven society. The authors point out many instances where Postman contradicts himself throughout his works as time and technology change. The leading authors of this scholarly article both hold major positions at the university level, making this critique a worthy article to cite. Written in 1987, this article comes from a time in America when technology was shifting from television to computers, making it an interesting view on how children in America were learning and growing up in a different world from that of the birth of television. While the article feels biased against Postman, it still makes very worthy points on education, testing in America, and how children in our society grow up with new forms of technology. This article, while helping my research on the effects of the NCLB act and testing in general, will not be a primary source, but it will provide me with a view of our society on this subject from right before computers were in every household, and therefore I find it very useful.

Lohmeier, Keri L. (2009). Aligning State Standards and the Expanded Core Curriculum: Balancing the Impact of the No Child Left Behind Act. Journal of Visual Impairment & Blindness, (103), 1, pp. 44-47.
This article addresses the learning process of the vision impaired in relation to the NCLB act and how lawmakers can merge laws on teaching the vision impaired to work better with the NCLB. Keri L. Lohmeier, Ed. D.
, sits as a co-chairman of National Agenda Goal 8 and on the board of directors of the Division on Visual Impairments, making her more than qualified on the subject at hand. The charts and tables she cites are well organized and easy to follow, giving the reader a clear idea of why she wants to change, at the governmental level, the way we teach the visually impaired. Written in 2009, this article is recent and relevant. The subject of visually impaired education shows how major acts such as NCLB have difficulty helping all of our students; although this article will not be a major part of my research on testing, that fact proves how general testing has major problems reaching all students.

Mayers, Camille M. (2006). Public Law 107-110, No Child Left Behind Act of 2001: Support or Threat to Education as a Fundamental Right? Education, 126(3), pp. 449-461.

The article looks at NCLB's goal of giving lower-income students the opportunity of a fair education. Mayers works in Educational Guidance and Counseling at California State University, making this article scholarly and worth including in my research on testing and NCLB. I deem the research trustworthy, as the points made and the statistics backing them up are up to date and relevant. The conclusion, that NCLB does not help lower-income students as intended, is one I share, so I may be biased, but the sources are scholarly and the arguments are not. If this article covered more than just lower-income students I would definitely consider it a main source of research, but unfortunately it does not.

Pederson, Patricia V. (2007). What is Measured is Treasured: The Impact of the No Child Left Behind Act on Nonassessed Subjects. Clearing House, 80(1), pp. 287-291.
In this article the author studies the impact the NCLB act has had on arts and humanities subjects in our school systems. Further research shows Pederson has published many scholarly articles concerning education, making this article worthy of research. The tables and data shown, gathered from 2001 to 2005, are well detailed and comprehensive. As with most of the articles chosen for this research, it was written in our current times, in this case 2007. The article is very clear that it does not delve into the subjects NCLB was intended for but into how the act takes away from other important subjects that lawmakers overlooked. It will not serve as my primary source for evaluating the NCLB act, but it is very important for understanding how the act affects studies in subjects it was not intended for, and why lawmakers feel those subjects are less important.

Postman, N. (1992). Technopoly: The Surrender of Culture to Technology. New York: Knopf.

In this book Neil Postman analyzes technology from a viewpoint not often taken: the negative effects it has on society. From the mid-1960s to the present day, Postman has been writing and teaching his views on technology, making any of his works worthy in this field. Technology changes every day, and since this book was published over twenty years ago one might infer it is out of date; on the contrary, many of his theories on the subject are still being analyzed. The author's thoughts on testing in an educational form make this book very useful for analyzing and critiquing NCLB.

Postman, N., & Weingartner, C. (1969). Teaching as a Subversive Activity. New York: Delta Books.

In this book the authors take a look at the problems as they see them with the education system in America and propose solutions. As I have already stated in this bibliography, Postman is more than a worthy source to analyze concerning education and testing.
This book states theories and opinions that some may agree or disagree with, but in my humble opinion it is the problems it points out that are most concerning, especially considering that it was written in 1969 and we still have many of them. While the authors are very opinionated in their ideas, they promote a new way of thinking about our problems with education in America. Even though this book was written in 1969, I feel its ideas and solutions for education make it worthy of being a main resource.

Powell, Deborah, Higgins, Heidi J., Aram, Roberta, & Freed, Andrea. (2009). Impact of No Child Left Behind Act on Curriculum and Instruction in Rural Schools. Rural Education, 30(1), pp. 19-28.

This article examines a number of rural elementary schools, asking how NCLB has affected their curriculum and how it will further shape what is taught in rural schools so that students can pass the tests the act created. While the authors are unknown to me, the journal in which the article is published concentrates on specific government acts concerning education. The data portrayed varies from negative to positive, making the source unbiased, and the 2009 publication date makes the data up to date and useful. While I have not yet decided what role this article will play in my research, it delves directly into a topic that hits home for me as a future educator in a rural school system: what the NCLB act changes in what we teach our youth, and why.

Ross, S. M. (2009). Postman, Media Ecology, and Education: From Teaching as a Subversive Activity through Amusing Ourselves to Death to Technopoly. The Review of Communication, 9(2), pp. 146-156.

The purpose of this review of three of Neil Postman's major works concerning education is to point out Postman's, and his sometime co-author Weingartner's, theories, concerns, and solutions regarding education and teaching.
Susan Ross, an educator herself, writes this review while providing examples of how these books helped shape her career as an educator. Ross is an assistant professor and the Gulf Coast Speaking Center Director in the Speech Communication Department at the University of Southern Mississippi, giving her readers a valuable view of the subject at hand. The article was published in 2009, making it relevant to today's standards. While this will not be my primary research on Neil Postman and his impact on education concerning the No Child Left Behind (NCLB) Act and the use of conventional testing, Ross does delve into Postman's ideas and concerns about testing, and the article was written during the era of NCLB.

Tavakolian, Hamid, & Howell, Nancy. (2012). The Impact of the No Child Left Behind Act. Franklin Business and Law Journal, (1), pp. 70-77.

This article is a direct look at NCLB and its impact on graduation rates in the American school system, and how that in turn relates to young adults enrolling in institutions of higher learning. The authors are concerned with NCLB's impact on today's demanding job market and whether or not our education system promotes an environment where children can compete in that market. The leading author is a Professor of Management at California State, Fullerton, making this a scholarly work. I find this article valuable because the overall objective of education should be giving our youth the best possible opportunity to compete in the job market. Published this year, it gives a very fresh look at the NCLB act and its impact on our educational institutions. Because of the article's specific purpose, it will be a major resource in my writing about the NCLB act.