[ { "main_document": "This shows that mean flow speed is directly proportional to friction. Substituting (13) into (11) reveals where In turbulent flow, the products of turbulent fluctuations, These turbulent shear stresses, are called Reynolds stresses. Therefore the total shear stress in a turbulent fluid can be expressed as Most calculations and analysis are done under the assumption that the pipe surface is smooth. In real situations, however, this is not true. For laminar flows, flow is unaffected as the flow merely passes over these protrusions. In turbulent flow, even the smallest roughness can have a major effect. Looking at the turbulent boundary layer, if the protrusions are so small as to lie within the viscous sub-layer, the flow is unaffected, and the surface is said to be hydraulically smooth. However, if the protrusions are large enough then they reach fully into the turbulent core. This causes the viscosity to become irrelevant, and so the Reynolds number plays no part in the flow, which is now determined by the size of the roughness elements. If the protrusions extend only into the buffer layer, the flow is dependent on both roughness size and Reynolds number. Nikuradse (1933) carried out experiments using sand grains of different sizes to determine the effect of surface roughness. Thus the wall friction coefficient He found the following three regimes of roughness, using a non-dimensional value of roughness height, Where At high enough Reynolds numbers, the friction factor of many pipes becomes independent of Under these conditions the Nikuradse equations and graphs cannot be used. An equivalent grain size for large values of Re can be specified for the pipe. These roughness values (in mm) cannot be measured directly; however, using currently observed roughness values, American engineer Lewis F. Moody (1880 - 1953) prepared a diagram which can be used with commercial pipes, based on results obtained by C. F. Colebrook. 
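The Colebrook relation on which the Moody diagram is based can also be solved numerically. A minimal sketch follows; the implicit equation is the standard one, but the Reynolds number and relative roughness used as inputs are purely illustrative, not values from this experiment:

```python
import math

def colebrook_friction(re, rel_rough, iters=50):
    # Darcy friction factor f from the implicit Colebrook equation:
    #   1/sqrt(f) = -2 log10( rel_rough/3.7 + 2.51/(re*sqrt(f)) )
    # solved by fixed-point iteration on x = 1/sqrt(f).
    x = 8.0  # initial guess for 1/sqrt(f)
    for _ in range(iters):
        x = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
    return 1.0 / (x * x)

# Illustrative values only: Re = 1e5, relative roughness e/D = 0.001
f = colebrook_friction(1e5, 0.001)
```

For these illustrative values the iteration converges to f of roughly 0.022, consistent with reading the Moody chart in the transitional roughness regime.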
It allows engineers to find the friction factor of a pipe with a given roughness. Pipe pressure measurement is achieved using a manometer. In its simplest form a manometer is a device consisting of columns of liquid in a U-tube set in a vertical plane, with one end connected to what is being measured, and the other end open to the atmosphere. A denser liquid, which is immiscible with what is being measured, is used in the U-tube. From measurements of the height difference of this liquid between one limb of the tube and the other, a pressure difference between the two points can be deduced. To measure the pressure between two points, the manometer can take the form shown in Figure 5. Because the mercury is incompressible, and points A and B are connected, they have to have the same pressure. Therefore PJT/NGS/MC - February 1998 From Bernoulli's equation where it can be shown that Therefore Likewise Subtracting: (15) therefore Finally, divide through by so where Figure 6 shows the flow diagram of the system. Table 1 shows where pressure is measured relative to the end face of the pipe. Figure 7 is a detailed view of the pipe tappings, intake and outlet. The apparatus consists of a horizontal pipe with tappings along the length", "label": 1 }, { "main_document": "carry it out. Medea is a figure who is truly 'determined to act' (Schlesinger: 1983, 295), which was presumably more often considered a masculine characteristic than a feminine one. In addition, Medea dominates the play completely: she is always at the centre of events; that is, it is almost always she who makes things happen, even before the beginning of the play. Again, in the real world of this age, it can be assumed that it was normally men who were predominant and made important decisions in everyday life. From this point of view, Medea could be seen not just as a typical woman but rather as a human being who, to some extent, is beyond the traditional category of gender. 
There is no longer any way in which we can clarify what exactly Euripides intended to express through illustrating Medea as a masculine female; however, one possibility is that Euripides, either consciously or unconsciously, somehow realised that there were not as many differences between men's abilities and women's as the society suggested, just as some Athenian people understood at the back of their minds that the slaves whom they treated as wild beasts were actually not very different from themselves. The final significant feature of the By the end of the play, Medea has murdered at least six people: she killed Pelias indirectly by tricking his daughters, then slaughtered her own brother in order to delay the pursuit while escaping from Colchis, killed Creon and his daughter with poison, and eventually even murdered her own two sons. In the world of Greek myth, it was a long-established tradition that a person who killed somebody, for whatever reason, must repay with his/her own life. Here, however, not only did Medea manage to flee to Athens without being punished in any way, but at the end of the play she even appears above the stage in a magical chariot, as if she had become a kind of semi-divine figure. The finale of the play is, indeed, the complete triumph of Medea over Jason. Jason's words 'May you be struck down by our children's avenging curse and Justice who punishes murder' probably represented most of the audience's expectation at that time, but this was swiftly and completely rejected by Medea, saying 'What god, what spirits listens to you, the breaker of oaths, the deceiver of hosts?', and indeed nothing happened until Medea disappeared with her dead sons, leaving grief-stricken Jason alone on the stage, which was probably a really shocking and unpredictable ending for many of the Athenian audience. 
Presumably Euripides realised that this finale was likely to be less popular than one in which Medea is actually punished in the end; however, he chose instead to boldly express his scepticism of, and resistance to, the gods dominating the world and man's fate. As we have seen in this essay, Medea is presented not merely to frighten the Athenian man by showing what dreadful consequences await them if they make a wrong choice concerning their marriage, but her characteristics could communicate", "label": 0 }, { "main_document": "atop a plateau and containing the usual arrangement of monuments that you would expect to find in the ideal Roman town. Cosa is a good example of how the Romans built their towns when building them almost from scratch in their land of origin. However in Emporion we see a difference, since Emporion provides us with a prime example of how a Roman town looked when affected by the settlers who were there before. Much of the urbanism and the look of Emporion, although romanised, reflected the Greeks who had a colony at Emporion before the Romans. This comparison gives us as archaeologists an insight into the difference between Roman towns when there is a distinction in the way the two towns had been affected before the structures were built. Cosa and Emporion are also good examples of the difference there can be between the levels of research and study of two sites. On the one hand you can have a site like Cosa, which has multiple resources on its many excavations throughout the past few years. On the other hand you could be looking at a site like the one at Emporion which, although it has been excavated numerous times, has only sparse documentation and sources. 
This shows us how different the level of research and type of study in two separate Roman sites can be, and allows us to further understand the variation between two towns that, although within the same Empire, were, ultimately, a world apart.", "label": 1 }, { "main_document": "and scientific experimentation has great relevance to the current National Curriculum in England. Both the key stage one and key stage two science curricula have a section entitled \"scientific enquiry\" (National Curriculum online). In key stage one, this section includes consideration of what might happen before seeing an experiment performed, and being able to communicate what happens to others. According to Goswami (1998), this should be well within the capabilities of children of this age. Also, in key stage one, the emphasis is on asking questions about situations that are familiar to the child and occur in everyday experience. This fits in with Piaget's idea that a child of this age can only reason about phenomena that are directly perceived. The child is not asked questions about theoretical concepts. In key stage two, the \"scientific enquiry\" section includes the role of manipulating one factor while controlling others in order to measure effect. Kuhn (1989, cited in Goswami, 1998) found that primary school children were not capable of this. Although this idea is too advanced for key stage two pupils to fully understand, that does not mean it is inappropriate to teach at this stage. If older primary school children are given this information at this point then they may be able to understand it and use it once they have moved on to secondary school. As well as focusing on what theories can be taught to children while they are at school, it is also important to identify what theories children have already developed from the home environment and how these can contribute to their learning. Pine, Messer & St. 
John (2001) gave children of different ages a balance task where they had to balance a beam that was weighted at one end. Pre-school children were able to complete this task by trial and error, but six- and seven-year-olds could not. This is because the older children had developed a naïve theory about balance. Children develop many such naïve theories. They suggest that in order for a child to understand a new concept the child's false beliefs need to be brought to their attention. The new concept then appears more meaningful and is easier to accept. Asking children what their current beliefs are is important in teaching and should be highlighted in a curriculum. It is very difficult to decide what concepts should be taught in a curriculum and when the appropriate time is to introduce them. Research into cognitive development can help to a certain extent but should not be strictly followed as there are many exceptions and no firm rules to follow. Children are always able to rote learn information, if they have the memory capacity, even if they cannot fully internalise the information. If teachers identify the extent of an individual child's knowledge and capabilities then they can produce work for the child that matches their capabilities and helps them to extend their knowledge without being out of their range of thinking.", "label": 1 }, { "main_document": "recruitment is not the right solution (Gonzalez & Tacorante, 2004; Lucas, 2004). In hospitality, recruitment & selection methods may depend on business strategy. If the company wants to gain competitive advantage, then increased investment in formal recruitment & selection is needed (Redman & Mathews, 1998). However, if a company's strategy is to maintain market share and steady profits, a new approach towards recruitment & selection may not be justified (Price, 1994). 
Finally, Lockyer & Scholarios (2004) suggest that if \"best practice\" is regarded holistically, establishing a relationship between external factors, strategy and the recruiter, then \"employee fit\" will be more achievable. However, current evidence of HRM practice remains weak and unbalanced.", "label": 0 }, { "main_document": "The main issue of vocabulary development for researchers studying Child Language, now as for many years past, is to try to understand how, and how fast, children acquire words and their meanings in their native language (we will focus on the English language), and also what the limits to that acquisition are (correct sound, approximate concept...). We know that the largest and most important part of it happens during the first years of childhood, that is to say during the preschool years (from the first months of the life of a child until he/she is about three years old), when linguists like Rice notice a rapid vocabulary development, also known as the \"Spurt\" after Barrett, when children learn on average eight to ten words a day. Still, words - or sounds associated with concepts and meanings - require phonological processes by progression and stages to be acquired. First come the phases of comprehension, followed later by production, studied by Benedict in 1979, when the child begins to formulate his thoughts. Then appear the child's first words, or babbling, which we could compare to an imitation of adult language but with no real concept (for instance a child can articulate what seems to be a word, a syllable such as 'gig', but it is not related to anything in particular). Then once the word is known correctly enough, the child has to get its meaning (Clark, 1973). It is during that phase that there is a predominance of specific classes or categories of words in the child's lexicon (nouns of objects ('table') or persons ('mummy') are preferred to concepts of actions). 
The acquisition of language is complex because of several parameters, such as the child's personal abilities to learn language (e.g. SLI or impaired children who have innate difficulties in learning words). It also depends on the context in which the child learns and his or her existing knowledge of the language (grammar, lexicon...), all of this influenced by cultural and social factors that create greater differences between children. But as a whole, vocabulary learning follows a sort of scheme leading the child from babbling to real meaning and from gestures to words. Thus, linguists have noticed considerable development, with robust abilities to map new words. The child first learns them as a procedure to simplify his/her speech and to be understood; for instance, once he/she is exposed to a new word, he/she underextends it: one single word corresponds to one particular example (Reich, 1976). The child later arrives at a prototype word, or central concept of a word with specific attached characteristics, after Rosch's Prototype theory in 1978, and finally, on the opposite side, overextends it (the child names several different things linked by a distant common feature with a single word, e.g. 'bird' for 'penguin' etc., after Bowerman, in 1978). But the question here is, does the child need multiple exposures to start acquiring the meaning of a word? Or, on the contrary, can he/she get it quickly and in a little time as tend to", "label": 0 }, { "main_document": "excerpt from the Oxford Brookes University website on how to open a bank account for international students: I can tell from my own experience that it is even more difficult if you're not a student. Forging or stealing is not a problem, either, since the state-of-the-art protection mechanisms included in modern IDs render this infeasible. 
Although this has been a concern in the past, the possibility of storing biometric information on RFID chips is a great leap compared to the outdated smartcards used in France, for example. If anyone is worried about security, please stop using those credit cards - and paper money as well. On the other hand, there is no conclusive answer to the question whether the billion pound investment is really worth it. In addition to the actual \"hardware\" that has to be shipped, it is a considerable administrative burden to maintain databases. In order to access all the features of the new IDs, the police have to be equipped with suitable devices, so consequential costs are likely to be higher than the initial investment. Fighting illegal immigration would be greatly simplified by IDs. Basically, the police or border guards could turn away everyone who doesn't have a valid ID. So the debate comes down to whether or not we think immigration is a good thing. It is not uncommon that the most affluent countries take the strictest precautions to deter poor asylum seekers from getting a piece of the cake - while Pakistan was ready to accept 2 million Afghan refugees. Either way, being able to demand a form of ID at any time can be used for tighter controls, and if this is what the government wants, it is a way to draw a line against immigration. But the UK should remember that it needs some immigrants and should avoid alienating them as the U.S. did with its visa regulations and border controls. A somewhat more convincing train of thought is the idea that by undermining our European tradition of freedom and democracy through state control we actually \"let the terrorists win\", i.e. if we decide to give up some of our freedom for the sake of temporary security, we will finally lose both. The ideal of the \"freeborn Englishman\" seems at risk. 
Though this certainly applies to things like freedom of speech, it is hard to see how carrying a card that states who you are can constitute such a threat. This would only be the case if government agencies started to collect and centralise more and more of the electronically stored data about individuals. Chris Lawrence-Pietroni ([1]) makes a point in saying that the right to privacy is not recognised in the UK - as opposed to countries like Germany, which practise more responsible handling of personal data. The buzzwords here are the existence of a A survey quoted in The Register a year ago found that only 10% of the population are very confident that the government would be able to hold personal information securely. Perhaps this should be considered as a more rewarding
Although all three terms can be used to describe human presence on the island, each one encompasses different types of human activity and permanence of settlement as reflected in the material remains. Cherry defines occupation to be when \"an island has become for one or more groups the principal provider of its subsistence requirements and the focus of its residential pattern throughout the year\" (1981:48), in comparison to 'utilisation', which is associated with seasonal or short-term visits for the purpose of exploiting the natural resources, and with accidental or unsuccessful colonisation (1990:198). It is proposed here that there are three stages for discussion in the colonisation of the Mediterranean islands: 'visitation'; 'occupation'; and 'colonisation', not all of which will necessarily be represented on every island. Outlined here are the definitions as used in this study. 'Visitation' is based on Cherry's classification of 'utilisation'. The term 'visitation,' with its connotations of brevity and impermanence, is preferred to 'utilisation' since the exploitation of resources may have been equally important during periods of occupation and colonisation. 'Accidental' and 'unsuccessful' colonisation are excluded from this category since a certain amount of inference is required to surmise whether colonisation was intentional or accidental, or whether acquisition of resources was the only objective when visiting an island. It has been cited that the export of domesticates to islands can be taken as evidence of deliberate colonisation (Cherry 1990:198), but this excludes explorations prior to domestication and those groups reliant on a hunter-gatherer subsistence strategy. This theory also precludes alternative methods of the transportation of domesticates such as trade and exchange networks. 
'Occupation' as defined by Cherry can be split into two categories, 'occupation' and 'colonisation', both of which suggest a certain level of permanence of settlement and year-round habitation with the island supplying all subsistence resources. Where they differ is in long-term continuity of settlement. 'Colonisation' is the final stage where humans finally inhabit and populate an island without withdrawal due to insufficient resources or unviable population while 'occupation' is the preceding phase in this process, perhaps lasting only", "label": 1 }, { "main_document": "Toyota is helping suppliers to increase their capacity. For example, the capacity of European plants was limited by the suppliers output and Toyota decided to help them increase their capacity (Automotive News, 00051551, 5/17/2004). For suppliers, Toyota is a role model, example of how to use cost and quality. With those objectives in the TPS process, Toyota concentrates more on customer value whereas competitors will only harass suppliers with price-cutting demands. Where Toyota cut costs of suppliers, competitors cut their prices. A research (journal of Management Development, 02621711, 1996) shows that \"in the case of Toyota, frequent visits by its personnel involve close scrutiny of the production process, management training programmes and in some cases financial assistance. Toyota, for its part, counsels against over-dependence by the supplier, encouraging instead a broad mix of contracts with other purchasers\". Not only Toyota assists its suppliers but Toyota also recommends them not to rely solely on it. This is another example where Toyota culture differs. For more buyer power, competitors would rather fight to get exclusivity from suppliers. One way to cut suppliers costs is to get them close to Toyota's plants. Moreover, Toyota is urging Japanese suppliers operating in the US to use fewer components from Japan imports. Being properly managed, suppliers like to work with Toyota. 
To illustrate that, one quote: \"With its increasing scale and car parts, Toyota is THE carmaker to get married with long term contracts. Obviously, Toyota expects faster responses than some companies are used to and takes a longer time than U.S. automakers to measure and consider a new supplier. \"In return, Toyota offers a long-term, collaborative relationship.\" (Automotive News, 00051551, 2/7/2005). Even when serious problems occur, Toyota does not blame the suppliers entirely. A supplier quality problem is often related to \" In fact, Toyota is very demanding, but the suppliers find that it is worth the hassle given the long-term return on investment. Unsurprisingly, Toyota scores high on nearly every supplier survey. By training suppliers, Toyota not only transfers knowledge to its suppliers but also learns a lot from their processes, products and cost structures. Thus, the supplier's service or product is no longer a black box for Toyota, and the relationship can be a partnership rather than a supplier-buyer one with underlying power on either side. Similarly, the supplier can considerably improve and adapt its operations strategy to close the gap of mismatches in the supply chain and meet Toyota's expectations. For the Texan plant, Toyota nominated half of the suppliers from among its existing partners. By saving on training for existing suppliers, Toyota can concentrate on new suppliers. Those suppliers are trained by teams with people from the purchasing, logistics, quality, and technical support areas (Automotive News Europe, 06/18/2001). Research suggests that: - Among Toyota suppliers, a culture of continuous improvement exists, or a process occurs of continuous reassessment of the company's objectives. - Suppliers share sensitive information with Toyota. 
This suggests a high level of trust between the supplier and Toyota, where firms feel secure enough within their long-term contractual arrangements to openly discuss sensitive", "label": 0 }, { "main_document": "In this experiment the relationship between the Adenosine Triphosphate (ATP) in a sample and the number of colony forming units is going to be examined. It is going to be determined whether there is any correlation between these two. The sensitivity of the ATP method and the resazurin method are also going to be compared. The ATP method is finally going to be used to determine the cleanliness of two surfaces. ATP is found in all living cells but, upon cell death, will quickly disappear. The structure of the ATP molecule is shown below: ATP will react with Luciferin to form a photon (light). A machine may then be used to detect the level of light produced (which is at very low levels) and therefore the amount of ATP present. The following equation shows this: This means that the level of ATP on a food preparation surface can be used to give an indication of the cleanliness of that surface. Because this method detects ATP, it will only detect levels in living cells. The cells that are likely to contain ATP on a food preparation surface are likely to be from one of two sources: either food debris or microbiological cells. This is the major criticism of the ATP bioluminescent testing method; it detects ATP other than that from microbiological sources. Having said this, however, food preparation surfaces should be clean and therefore free from any food debris as well as being free from microbiological cells. The system may therefore be used as a good indicator as to how well a cleaning regime has been completed. The ATP bioluminescent method will give a result as to how clean a food preparation area is within a couple of minutes. This is far more rapid than the traditional methods, which can take between three and five days. 
The other advantage of the bioluminescence method is that very little training is required to obtain results in which confidence may be expressed. The only piece of equipment needed is the machine itself, and no laboratory or competent microbiology professionals are required. Another fairly rapid way to estimate the number of microbiological cells on a food preparation surface is to measure the reduction of certain electron acceptors. This reduction occurs as a result of either respiratory or dehydrogenase activity within the cell. Resazurin is an example of a dye which will change colour when something goes from an oxidised to a reduced state. It may therefore be used to indicate the number of microbes present, the rate of change of colour of the dye also giving an indication. All procedures described below should be carried out in an aseptic manner, wherever possible. For example, the preparation of dilutions described below should be carried out near a Bunsen burner so that the rising column of hot air ensures that it is done in an aseptic environment. The first stage of the experiment was the preparation of a series of dilutions which were to be tested. 1ml of the sample culture ( The 1ml of Having thoroughly mixed
The velocity diagram for the 4 bar linkage (Figure 2.2) was drawn graphically, with the velocity vector of each bar being calculated using Equation 2.1; this vector quantity can then be drawn to scale using the CAD functions in AutoCAD. When vector quantities cannot be calculated due to unknown rotational velocities, Lines of Action (LOA) are used. The vector quantity of the links with unknown rotational velocities can be found where their relevant LOAs intersect on the vector diagram. To determine the vector quantity and direction for Link AO, the following procedure was used: with reference to Equation 2.1, the velocity for link AO can be found, where, The rotation direction is anticlockwise using the right hand co-ordinate system, where a positive rotation goes from +X to +Y. The quantity V can then be found; its direction is at 90° to the link. A complete vector diagram of the linkage can be drawn using these techniques. It is important to note here that this vector diagram, and therefore the velocities, are relevant only at the specified input angle. Changing the angle would necessitate a rework of the vector diagram. Full working of the velocity diagram can be seen in Appendix 1. The acceleration vector diagram is produced in the same manner as the velocity diagram, using CAD to draw a diagram that graphically represents the acceleration vectors of the linkage. To calculate the acceleration vector quantities, Equation 2.2 is used. Each acceleration vector can consist of up to 4 separate components, shown in Equation 3.1. Equation 3.1 - Acceleration Components The full working for the acceleration diagram can be seen in Appendix 2. To analyse the four bar mechanism using the equations derived from the vector loop equations, the spreadsheet package Excel can be used. This package allows complex mathematical functions to be evaluated according to the data input into the equations. 
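The rule underlying the graphical velocity construction above - the tip of a link rotating at angular velocity omega moves at speed omega*r, directed at 90° to the link - can be cross-checked numerically. A minimal sketch; the angular velocity, link length and angle below are illustrative inputs, not the report's actual values:

```python
import math

def link_velocity(omega, r, theta):
    # Velocity of the end of a link of length r rotating at omega (rad/s),
    # currently at angle theta (rad): magnitude omega*r, directed 90 degrees
    # ahead of the link (anticlockwise positive, right-hand coordinate
    # system with positive rotation from +X to +Y, as in the text).
    vx = -omega * r * math.sin(theta)
    vy = omega * r * math.cos(theta)
    return vx, vy

# Illustrative values: omega = 10 rad/s, r = 0.05 m, theta = 30 degrees
vx, vy = link_velocity(10.0, 0.05, math.radians(30))
speed = math.hypot(vx, vy)  # should equal omega*r = 0.5 m/s
```

Such a check reproduces the magnitude and direction that would otherwise be scaled off the CAD-drawn vector diagram at one input angle.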
This allows the mechanism to be evaluated for any crank angle, velocity, acceleration or force by changing the initial input values. The Microsoft Excel software uses cell references rather than constant values; this allows an equation to reference the contents of a cell rather than a constant value, so that it updates each time the data input into a cell is changed. Figure 3.5 shows a typical screen shot from the Acceleration page of the mechanism analysis. Each cell can be assigned a name which can be used in the formulas to reference a particular cell. This makes the", "label": 1 }, { "main_document": "Circa 2000 BC the Babylonians developed a formula for solving the quadratic polynomial equation. Since that time, mathematical history is littered with successful and failed attempts to solve polynomials of higher degrees. In the four thousand years since the Babylonians solved the quadratic, the cubic and quartic were solved towards the end of the Dark Ages and the Renaissance respectively, and there was much research into polynomials. Then Abel and Galois independently proved the unsolvability of the quintic by radicals, early in the nineteenth century. Solving the quadratic equation is something we learn early in secondary school and is instantly recognizable to the mathematician and non-mathematician alike. We learn to solve the equation ax Example 1.1 This may seem inane but once the polynomials grow in degree, the question of the existence of solutions by radicals becomes more interesting and important. One of the main themes of this essay is the solution by radicals of polynomials, leading to: Theorem 6.17 If n (The theorem is quoted from page 147, I. N. Stewart's Galois Theory, Chapman and Hall/CRC (1973).) To prove this we need to become familiar with some Galois theory. We will also investigate a method of solving the quintic equation not involving radicals. 
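The quadratic case mentioned above can be made concrete with a minimal sketch of the standard radical formula; the coefficients chosen in the example are purely illustrative:

```python
import cmath

def solve_quadratic(a, b, c):
    # Roots of a*x**2 + b*x + c = 0 via the radical formula
    #   x = (-b +/- sqrt(b**2 - 4*a*c)) / (2*a);
    # cmath handles a negative discriminant by returning complex roots.
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# Illustrative example: x**2 - 3*x + 2 = (x - 1)(x - 2)
r1, r2 = solve_quadratic(1, -3, 2)
```

By the Abel-Galois result quoted in the text, no analogous closed-form radical expression in the coefficients exists for the general quintic.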
The Eisenstein irreducibility criterion, below, gives a situation where a polynomial f is irreducible, i.e. it is not a product of two polynomials of lesser degree. To get there we need the following: Definition 2.1 For a quotient field F, Theorem 2.2 Let U be a unique factorization domain with quotient field F and let nonzero Then Also, the coefficients of Proof: The coefficients of Then From Algebra II, let b be a highest common factor so that Then For uniqueness we have Then Then U is a unique factorization domain so r, s have no non-trivial common factor. If s is not a unit of U then it has an irreducible factor p. We know p divides the coefficient of This gives rise to a contradiction since r and s are relatively prime and p divides s so it does not divide r. Then p must divide the coefficients of Then The next theorem follows easily. Theorem 2.3 Let U be a unique factorization domain with quotient field F with Suppose Then Proof: We know The product of primitive polynomials is again primitive (Page 396, A First Course In Abstract Algebra, Fraleigh, Addison-Wesley (2003)) and by the uniqueness proved in Theorem 2.2, Take a Then we can take Now to prove Eisenstein's irreducibility criterion. Theorem 2.4 Let Let Suppose there exists an irreducible a) Then Proof: Take We have k + l = n, a0 = b0c Let i be the smallest positive integer such that p divides b Then we see ai = We know p is prime and does not divide bi, c So i = n and since we know n = k + l and i < k we have k = n and l = 0. This means that we can only factorize in the form f(x) = hg(x) for This next basic example", "label": 0 }, { "main_document": "In Oct 2004 Google launched Google Print, currently Google Book Search (GBS), which aims to provide a book-search service that displays excerpts. Google first worked with five major US and UK libraries, e.g. the University of Michigan and Oxford University, to digitize books in their collections and make them accessible via GBS. 
Google plans to digitize and make available through the GBS service approximately 15 million volumes within a decade. Up to now this ambitious plan and its implementation have brought about a six-month-long controversial dispute, and even lawsuits, involving authors, publishers, librarians, internet users and politicians all over the world. Copyright and the reallocation of its related profits, publication security, as well as cultural identity, are the core issues of concern to the various interest groups. GBS is a 10-year plan at its first stage. It is too early and too hard to predict what will happen to it towards the end of its first ten years. But GBS and its related debate are displaying a phenomenon: non-publishing companies are expecting to make a profit through publication-related business. They may be the new entities that will change the business model of publishing. This uncertainty means that different interest groups have different attitudes to it. Moreover, even within the same interest group some may hold contradictory opinions, while others hold an ambiguous attitude. Publishers: Since the lawsuit filed by the Association of American Publishers (AAP) on behalf of five major publisher members, there has been continued accusation that GBS violates copyright. Some publishers, e.g. Bloomsbury, strongly disagree with what GBS has been doing. They are calling on internet users to boycott Google. (The Guardian, Mar 4, 06) However, other publishers, e.g. Blackwell and HarperCollins, have put 5-6,000 titles into GBS (Bookseller, LBF Daily, Mar 7, 06). Authors: The Authors Guild, representing 8,000 writers, at first joined individual authors in a class-action lawsuit alleging that the GBS Library Project violated their copyright. However, some authors, e.g. Cory Doctorow, have expressed their firm support for the project. Government: GBS has hit a particular nerve in France. The French response from leading politicians, technology companies and the book world has much to do with culture. 
They doubt whether GBS, the project of a USA-based company, can properly manage European culture. As a result, a European Digital Library (EDL) initiative was put forward, supported by figures including Mr. Jacques Chirac, the French president, who is already backing another Google rival in the form of Quaero - a multimedia search engine. The EDL plans to digitise 2m books, films and other files and make them accessible through its portal by 2008. \"It is clear that the Frenchman's tract, which paints the digitisation debate in terms of Europe fighting American hegemony\" (Financial Times, Mar 7 2006). GBS is something new for international publishing. This is not because of the digitisation of publications itself (there are similar projects, e.g. the Gutenberg Book Project, the OCLC bibliography service, etc.) but because of the nature of the \"practitioner\": Google is an internet search company whose business is based on earning money through advertisements matched to relevant search results. Its operational idea and", "label": 0 },
 { "main_document": "wisely. Aquaponics goes some way to reducing the environmental impact of food production in the following ways: The above factors also go some way to increasing the profitability of both the aquaculture and horticultural produce. Disadvantages in aquaponic production systems include high start-up costs and reliance upon skilled staff to create and maintain a nutrient-balanced system. Knowledge of both crop production and fish rearing is required, and thus most current systems are personal hobbies, small, family-run businesses, or academic demonstrations of the technique. The time required before the grower sees the benefits of an aquaponic system must also be seen as a drawback. It is currently estimated that growth in aquaponic systems will not reach its full potential until after at least six, and ideally twelve, months of operation (Savidov 2006). 
The intensive nature of large-scale food production in the UK probably limits the potential for diversification into aquaponics to smaller growers. One possibility is for aquaponics to be used to produce organic fish and organic hydroponic fruits/vegetables under glass. However, current rulings, in the UK at least, by regulatory bodies such as The Soil Association stipulate that produce must be grown in soil to be certified organic. The need to work in harmony with soil flora and fauna means that even organically-derived nutrient solutions are not acceptable. It could be argued, however, that soilless growing preserves the soil-borne organisms, as the soil is left completely undisturbed. Furthermore, recent work at the Crops Research Centre in Alberta, Canada suggests that the micro-organisms may be key to the success of an aquaponic system, Savidov (2006) noting that the 200% increase in Genovese basil yield over four years was likely due to bio-stimulants released by populations of benthic organisms. Atkin and Nichols (2004), using manure-derived nutrient solutions in conventional hydroponics, researched the possibility of organic hydroponics in New Zealand, but the results were disappointing. The major wastes from aquaculture and the major fertilisers for crop production are ammonia in its ionic form, ammonium (NH Other chemical by-products of fish production may also serve as plant micronutrients. This has meant a wide range of crops, including cucumbers ( A large proportion of successful research, however, has used lettuce and leafy herbs in the hydroponic component of the system, possibly because of their lower demand for potassium (K) (Resh, 1995), a nutrient not generally found in high concentrations in fish-culture water. Furthermore, with only one growth stage (as a commercial crop) lettuce does not generally require the change in nutrient formula of a fruiting crop such as tomatoes as it begins to produce flowers and fruit (ibid). 
Lettuce is a widely grown crop in the UK and can be produced under glass all year round. It is feasible that growers could consider using an aquaponic system to create a second income stream from fish farming alongside lettuce production, simultaneously reducing their fertiliser budget. It is possible to supplement the aquaculture water with plant nutrients harmless to fish (Rakocy Previous studies in aquaponics have often been carried out as aquaculture-based research projects, the hydroponic crop being seen as a by-product of", "label": 1 },
 { "main_document": "The theory of the firm is based upon the assumption that all firms seek to maximize profits regardless of the market in which they operate. The implicit assumption in the question is that the ultimate long-term goal of the firm is that of survival. This essay supports the case that, to the extent that firms face fierce competition and the threat of hostile takeover, they can be observed to maximize profits for survival. However, where there is separation of ownership and control, firms might decide to pursue other non-profit-maximizing goals, like those of sales and growth maximization, and still be able to survive. A few caveats highlighted in this essay are that in reality it may not always be possible or desirable for firms to maximize profits and that profit-maximizing firms may not necessarily survive. In this case we would take survival to mean staying in business. Firms are defined as complex entities with at least three types of members. (1) Workers who are largely paid fixed wages and are told what to do (2) Managers (Agents Economic profit is the income that is remaining for the owners of the firm after paying for the factors of production that they utilize, and its calculation is shown in the following equation: Managers are primarily interested in the present value of the expected stream of profits. 
(Katz & Rosen, 1998, p. 198) Principal-agent conflicts, which explain firms' alternatives to profit-maximization aims, will be examined in greater detail in a later part of the essay. Total revenue = sum of payments the firm receives from the sales of its output, i.e. TR = Price x Quantity. Total cost = the firm's total expenditure on the inputs used to produce outputs, where expenditures are measured in terms of opportunity cost. (Factor prices, technological possibilities and production characteristics are held constant.) Accounting profit differs from economic profit in that it is an ex post concept based upon past transactions and historical fact. Unlike economists, accountants do not include opportunity cost in their calculation of profits. Profit maximization occurs where Referring to figure 1 below, beyond output level x*, where MC > MR, firms can raise profits by cutting back their production. Conversely, where MR > MC, firms can raise profits by increasing their output. Thus long-run equilibrium is established at the profit-maximizing level of output, x*, where MC = MR. However, the marginal output rule by itself is insufficient to determine a firm's output choice for survival. If for every choice of output level the firm's average revenue (AR Hence, other than producing at the profit-maximizing output where MC = MR, it is only viable for firms to continue their operations if at this output level AR is also greater than AVC (average variable cost). AR = total revenue / total output According to Armen Alchian In general, behavior resembling something other than profit maximization is most likely to surface in firms where profits are expected to be ample enough to please stockholders, thereby opening the door for other considerations to influence managerial decisions. In highly competitive markets where profit margins are thin, security is shaky and ability of", "label": 1 },
 { "main_document": "'we' live in'. 
Heywood 2003, 246 Kinsella 2003, p295 Heywood 2003, p246 Heywood 2003, p247 Moller Okin, Susan, 'Gender, the Public, and the Private', Chapter 5 in Phillips, Anne (ed), Oxford : Oxford University Press (1998), p122 In addition to attacking the social construction of gender, feminism makes the controversial claim that \"the personal is political\" Mainstream ideologies regard politics as activity confined to the 'public' sphere of government and believe that, in the interest of protecting the natural rights and freedoms of individuals, this field should be clearly separated from an apolitical private realm However, the same schools of thought have traditionally ascribed men to the public areas of \"non-domestic, economic, and political life\" This leads feminists to argue that restricting politics to the public sphere is a \"strategy of depoliticisation\": not only does it normalise and legitimate male domination, but it also silences any potential opposition to this status quo by entirely excluding women from political debate Feminists contend instead that politics exists wherever social conflict or \"power-structured relationships\" In the 1970s, for example, Milett's By striving to bring private power relations into traditional political considerations, feminists have considerably destabilised conventional understandings of politics and \"entailed the remarking of its boundaries\" \" 'The Personal is Political': Origins of the Phrase\". WMST-L. Internet. Accessed on 01 / 03 / 2005. Accessed at: Squires, Judith, 'Politics Beyond Boundaries: A Feminist Perspective', in A. Leftwich (ed.), Oxford : Blackwell (2004), p120 Moller Okin 1998, p124 Squires 2004, p122 Millet 1970, in Heywood 2003, p239 Moller Okin 1998, p124 Heywood 2003, p242 Squires 2004, p119 The public-private divide upheld by conventional political theory, feminists maintain, is only one of numerous ideological concepts perpetuating female oppression. 
Indeed, feminism perceives political reality itself as a social construction and claims that most political outcomes can be explained by examining the ideas - and especially the beliefs about sexual difference - underlying them In particular, radical feminists such as Miller have identified 'patriarchy' as the main belief system that pervades political, social, and economic structures in every society to ensure a \"systematic, institutionalised and pervasive process of gender oppression\" In patriarchal systems, all power relationships between men and women replicate the father's initial dominance within the family unit. Patriarchal paradigms have a gender-biased influence on most political arrangements, including on the structure of legal frameworks. The French Code Civil, for instance, Europe's first codification of private law, explicitly describes strength (\"puissance\") as a characteristic of the husband, while identifying dependency (\"incapacit This formally establishes the marital relationship as one of domination For liberal feminists, patriarchy continues to shape socio-political outcomes today, contributing to the under-representation of women in politics, professions, and public life. The concept of patriarchy thus issues a significant challenge to existing political thought: as stated by Carole Pateman, \"almost all political theorists have in fact, explicitly or tacitly, upheld patriarchal right\" Carpenter 2003, p299 Heywood 2003, p243 Vogel, Ursula, Chapter 3 in Randall, Vicky, and Waylen, Georgina, (eds), London : Routledge, (1998), p34 Pateman, Carole, Cambridge : Polity (1988), p19 Indeed, the feminist critique of", "label": 0 }, { "main_document": "attachments to caregivers has a direct impact on that individual's ability to form relationships throughout their life. This can affect such factors as the achievement of developmental milestones, formation of peer relationships and, in adulthood, their own parenting ability. 
The theory could be criticised for its lack of consideration of environmental factors, which are also needed in order for a child to thrive, but it is generally accepted as an excellent framework to motivate parents and other caregivers to provide sensitive and responsive care to children (Pendry 1998). This child was removed from the care of his natural mother before he was three months old, and Bowlby's theory would support the idea that the child should have been capable of forming a successful attachment relationship with his main caregiver(s) provided that the care delivered was consistent and sensitive to the child's needs. Whilst talking about the difficulty in the parenting relationship, it was proposed that the lack of initial attachment could be a reason; the client's mum responded defensively to this, claiming that she did form an early attachment relationship with him and that this was not a problem. I would suggest that my own observations alongside the empirical evidence support her opinion. Both of the observed interactions appeared to indicate a secure attachment (Ainsworth et al 1978), with the child initially wary of the strangers in the room but comforted by the presence of the parents and becoming more confident throughout the progression of the meetings, using his parents as a base to explore the room and the other adults; finally feeling able to initiate leaving the room, confident that his parents would still be there when he returned. From a systems theory perspective, we would be looking for patterns of behaviour or events and considering the impact that these could have on the client's current and future mental health. As we can see from the genogram provided for this client in the appendices, the family system is quite complex; however, it was evident from the interactions with the primary caregivers that they are trying to create a stable nuclear family within what could be considered a chaotic wider family system. 
This was illustrated by the fact that even though the boy's sister was not present at either of the meetings observed, she was included whenever the family situation was discussed, and it seemed clear that the family did not regard the client as the problem but rather the illness and how it affected their family as a whole. Systems theory is concerned with looking at the influences of behaviour on individuals and the influence of their reactions on a situation. There are processes within the system that both prevent and promote change, known as homeostasis, which may be maintained by one family member exhibiting problematic behaviour when the family lacks resources to adapt to change. It would be interesting to ask the family to keep a diary of when the problem behaviours occur and to consider whether the child could be consciously or unconsciously trying to maintain homeostasis. It appears", "label": 1 },
 { "main_document": "The reduction of nitrate leaching is now an issue of increasing importance within intensive UK grassland farming and this is a trend likely to continue. Following a period of high energy costs forcing up the price of nitrogen (N) to Increased reduction of leaching and responsible use of N will not only help farmers to save money on inputs but help convince the British public that they are responsible managers of the countryside, aware of the damage nitrate leaching does to biodiversity in streams and the costs it adds to drinking water cleansing. Leaching is the removal of soluble substances (including nitrates) from soil by the movement of water and most commonly occurs during the winter period when soils become waterlogged. This report examines a number of realistic and simple approaches that can be adopted to reduce nitrate leaching beyond current levels. 
It is virtually impossible to predict precipitation and circumstances accurately enough to eliminate all nitrate leaching; however, there is certainly scope to reduce nitrate leaching from grassland beyond current standards. Short-term grass leys (2-4 years) are often included within intensive cattle farmers' cropping rotations to help break the disease and weed cycles of previous arable crops. However, rotational leys do not make very efficient use of N within soils when following cereal crops. Firstly, the traditional use of ploughs to create a fine, trash-free seed bed means much of the N that has built up in the soil is lost to the atmosphere, or leached if the cultivated soil is exposed to rain for a considerable period of time. This is an even bigger issue at the end of a grass ley's lifetime, when only ploughing can be used to remove established grasses: the large reserves of N that the uncultivated grassland has built up within the soil and roots are then exposed by inversion ploughing to leaching and gas losses, which can typically amount to 280-380 kg of N lost in the years to come. If minimum tillage has been used in previous cropping, it should be used for grass establishment too, as the stratified, nutrient-rich structure that has developed over time is left undisturbed; light cultivations will reduce gaseous N losses and the risk of leaching in rain, and reduced-tillage techniques may help to a lesser extent in this reduction too. The lost quantities of N are usually replaced using higher quantities of artificial N to maintain maximum productivity within the ley's first year. There is an increased likelihood of this N being lost through leaching too, as the newly established grass crop has a shallower root system and less dense growth than an established sward, so it is less able to utilise the N. Farmers should therefore first consider a review of their crop rotation system. 
Do the benefits of short-term rotational ley yields really justify the inefficient use of N within them? Could the lifetime of existing swards be extended? Using slot seed drills to sow over existing leys, together with selective herbicides, works well to increase long and", "label": 1 },
 { "main_document": "'Peak Oil' and climate change are the two most important issues facing today's society. Action must be taken now in order to successfully mitigate the forecast consequences. In relation to 'energy future' two possible strategies provide solutions; A continued growth rate with no adverse environmental effects is obviously favourable. But is this possible? Paul Mobbs is an independent environmental consultant who has explored the data, trends, projections and outcomes surrounding 'Peak Oil' and climate change. His views are largely objective and well researched. He is without doubt, however, an environmentalist; hence the facts presented on occasion reflect his own agenda. Mobbs concludes that the only viable future contingency is a reduction in energy consumption by 75%. In the process evaluation he dismisses nuclear power as a viable option, primarily due to waste management issues. He also suggests that nuclear power could only be maintained, at current levels, for 100 years. This report focuses on proving these assertions wrong by analysing viable energy resources and waste management strategies. From the critical analysis the following conclusions can be drawn: Supplies of uranium suitable for fission are far higher than Mobbs' estimate. Fast breeder reactors can utilise other uranium isotopes, expanding 0.7% uranium usage to a potential 100%. Thorium is of great abundance in the earth's crust (roughly 3x that of uranium) and is a proven nuclear fuel. Operational thorium reactors are well documented; further research and development of this technology would expand possible nuclear fuel resources by a factor of 2. 
If current levels are maintained, the total nuclear energy supply will last ~30000 years. If it replaced all other energy sources, nuclear power has the potential to power the world for ~1000 years. Waste management is progressing rapidly, with several methods providing safe long-term solutions. Subduction zone disposal would provide waste disposal in the absolute sense. Nuclear fission has the potential to act as a 'stop-gap' between 'Peak Oil' and fusion. Energy is the impetus behind every aspect of modern life. It powers everything from transport to manufacture, providing us with the means to live a life which has been deemed normal by society and fuelled by consumerism and development. The devastating effects of this lifestyle are becoming more apparent each year; events such as Katrina increase public awareness of climate change. However, few people acknowledge the significant role they play in the world's future and, furthermore, even on realisation of the evidence fail to adapt their lifestyles accordingly. Whilst the world's population continues to rise and capitalists continue to promote growth and development, disaster looms. 'Peak Oil' and global warming are the two most important issues facing today's society. Action must be taken now in order to successfully mitigate the forecast consequences. Recent climate change has been attributed to global warming. 'Global Warming' is the increase in global temperature because of natural processes and/or the \"greenhouse effect\". It is the recent increase in the greenhouse effect for which humans are responsible, as a result of increased greenhouse gas emissions (mainly water vapour and CO2). If these emissions are not controlled, global warming will continue, leading to", "label": 1 },
 { "main_document": "can be achievable over time. 
Allowing staff to search for an electronic document by keyword or by authority can bring a dramatic improvement in convenience and productivity, which will enhance the use of documents and records as a corporate information source. Furthermore, the accommodation environment will improve as storage requirements reduce. Initially, the organisation could draw up plans and timetables, examine the business workflow and assess the functional requirements through a needs analysis. Obtaining staff support is a key step here. Users involved in the PC implementation should then be trained, to provide appropriate understanding and support of the new work practices and the new information management system in the organisation. Secondly, the organisation could seek expert advice to make the development of the implementation process effective; however, a practical alternative is to outsource the information management reinforcement program to a committed agency. Additionally, the organisation needs to set up effective back-up procedures, providing a security arrangement and a disaster recovery plan in relation to both paper and electronic documents, to minimize the risk of loss, corruption and unauthorized access (Pearce, 2005, para. 50-90). Ideally, the task of establishing a good information management system should be undertaken as a clearly identified corporate review at key intervals (ranging from the enhancement of existing 'good benchmark' procedures to user participation and satisfaction) and a further reinforcement program (Jevec, n.d., para. 20-27). This report has examined the advantages of implementing an effective information management system, which can underpin successful corporate governance through wide utilization of information. Managing information is essential to the competitiveness and accountability of an organization. 
It is imperative for international organisations to establish effective and efficient information management practice in order to contribute to sound corporate governance and organisational success as a competitive advantage, meet accountability requirements through evidence, respond to community expectations and support decision making. Consequently, this report advocates a reinforced program for further organisational development: an Electronic Information Management System underpinned by a governance framework.", "label": 0 },
 { "main_document": "time. These background procedures had not been tested or fully documented by the time of the training course for the Agency's Liverpool staff before the launch date, and as such, staff were provided with basic on-site training by the Agency, together with a small amount of one-to-one training specific to their job role. Siemens trained the Agency's trainers in order for them to subsequently train their staff. The Agency conducted a review of this emergency training and concluded that, due to positive staff feedback, it had been successful despite being minimal. However, the review did uncover that the training focused mostly on the software system itself with too little emphasis on the procedures needed to support the system. The Agency's timetable left it with little time to make a final decision about the roll-out to its Newport office, which was due to take place on November 16 Due to the Agency's short roll-out timetable, it was left with little room for manoeuvre once problems appeared at the Liverpool office, and it decided to continue the roll-out to its Newport office despite these problems. Siemens' digital data capture using OCR (optical character recognition) technology to process passport application forms was contributing to the delays, due to a high error rate in processing forms, meaning incorrect entries had to be manually changed. 
The Agency had been given advance warning of this but failed to act, as it thought this was because the forms used during the trial \"had not been completed with the same care\" as real passport applications. Also contributing to the delays was a problem affecting a small number of applications: the variability of the ink density and the position of print on the application forms (produced by Security Printing & Systems Ltd) was causing difficulties for the OCR scanning equipment. Things got progressively worse as time ran out for completion of the implementation by the target date. By the 14 The Agency was, in the end, forced to roll out the system to Newport before it was ready, in a desperate attempt to increase passport output over the busy season. Liverpool was operating far below the expected output level due to the ongoing problems. The board of directors of the UKPA felt that the Liverpool office had not been fully prepared for the scale of change it underwent, and that the Agency was not fully prepared for its contractual relationship with Siemens. They were seriously concerned about Siemens's ability to deliver the required output in Liverpool and doubted whether the Agency would be able to continue the roll-out programme without affecting service even more. Delaying any further, now five weeks into the roll-out programme, would have meant postponing the remainder of the roll-out to the following autumn, with significant cost and operational implications: maintaining two systems at once and making the old system millennium-compliant. Output at both Liverpool and Newport continued to fall for more than six weeks. 
By this time, as mentioned in the overview, the situation became critical, with several hundred people unable to travel, and phone lines being", "label": 1 },
 { "main_document": "Equations (3) and (4) test for the significance of immunisation against DPT and measles respectively (besides income and education); both turn out to be significant even at the 1% level, increasing life expectancy by 0.23 years with a unit increase in the percentage of vaccinations against DPT, and 0.25 years in the case of measles. However, literacy remains insignificant even at the 10% level in both regressions. The R The intercept is highly significant for equation (4) at the 1% level, showing the average life expectancy would be as low as 11 years in the absence of these three variables. Equation (5) includes an extremely important determinant of health other than GDP per capita, literacy, and access to safe water (all three of which are significant at the 1% level), namely the HIV prevalence rate, which is responsible for reducing life expectancy (indicated by the negative sign) by about 1.1 years for a unit increase in the rate of HIV prevalence. The explanatory power of the model immediately increases to about 88%. In equations (6) and (7) a number of explanatory variables are newly included to indicate health status. They are the urban population of a country, access to sanitation facilities, smoking prevalence rates, the Gini index as a measure of inequality, and government expenditure on health; both equations find GDP per capita, literacy, safe water and HIV highly significant at either the 1% or 5% level. The intercept in equation (6) is found to be significant at the 5% level, indicating life expectancy to be around 16 years in the absence of all other variables. After dropping the Gini index variable of equation (6), in equation (7) health expenditure becomes significant at the 10% level. 
The explanatory power of the model is extremely good with an R However, quite a few of the new explanatory variables, like urban population, sanitation, smoking (which does show a negative effect) and the Gini index, have insignificant t-statistics and do not turn out to be as significant as anticipated. Thus with such a high R However multicollinearity is essentially a data deficiency problem. Dropping a variable may lead to a specification bias. Other remedial measures could be centring (subtracting the mean from each offending independent variable) or ridge regression. The basic solution would be to increase the sample size, as in this particular case the data was in fact deficient, being scarce for some particular variables like smoking prevalence, the Gini index and a few others. The problems of outliers and of the sample not being truly random are also not ruled out. Among the missing data there could have been some variables which would be particularly influential. Also, the data for developing countries, for which the analysis should by intuition hold even more strongly, is even harder to get and is absent for a lot of countries. However, each of the regressions is jointly significant, indicated by a p-value of zero for the F-statistic in all seven equations. A few further statistical tests are conducted on equation (6) and its residuals.", "label": 0 },
 { "main_document": "Finally, workspace is one more significant factor for enhancing creativity, but it seems that there are no certain rules for how it should be. Amabile (1998, p.82) suggests that \"creative teams need open, comfortable offices\", while a graphic designer interviewed by Andriopoulos (2003, p.382) states that his ideal working environment includes \"Music playing around, people shouting, clients being in the middle of the room...this all help creativity\". Thus, what is important for an organisation is to find workspaces that make employees feel comfortable enough to produce creative work. 
Having already discussed the importance of human resources, using them to form appropriate multidisciplinary teams is the next step for creative organisations. Amabile (1998, p.83) mentions that teams built up from people with various backgrounds - thus, different problem approaches - are most likely to be creative. In addition, team-members must \"share excitement over the team's goals\", must be willing to help their team-mates through difficulties, and last but definitely not least, they must recognise the individual value and unique knowledge of each team-member. Meyer (2000, p.12) goes one step further, by stating that each team-member must be recognised as capable of \"generating new ideas and appropriate solutions\". Moreover, cooperative behaviour is essential and the ideas of team members must be clearly stated and communicated. Finally, teams should encourage people to express their opinions freely, and this can be achieved when team-members feel valued. Knowledge is the next key factor identified. According to Dougherty (1996, p.182), the \"development and exploitation of knowledge\" gained from interaction with customers is a crucial factor for creative work, since customer-orientated companies stand more chances of achieving product innovation. Furthermore, a more thorough understanding of companies' technical capabilities enables organisations to apply the gained knowledge more effectively. Hackett and Robinson (1997) support the above, and suggest that creativity can derive from a better and deeper understanding of customers' needs, of the work's nature and of work practices. Furthermore, Andriopoulos (2003, p.384) mentions that \"gaining new knowledge can minimise mistakes that may occur in the creative process\", while M. Wilken (Meyer 2000, p.16).
Finally, Dougherty (1996) highlights that an efficient linkage between the market and an organisation's operations and technologies is a sine qua non for effective product innovation. Freedom and well-defined goals are the next two interlinked key factors for creativity. Amabile (1998) argues that employees should be provided with the necessary freedom, but they must also have well-specified and stable targets. Such conditions enhance organisations' creativity. Providing people with freedom in the way they manage their tasks should be an integral part of every creative company. When employees are not expected to follow established processes for managing tasks, creativity arises, since people come up with new approaches and innovative ways of facing these tasks. Communication is the next identified element of creative organisations. Amabile (1998) suggests that creativity is fostered when people inside an organisation exchange ideas and data and expose themselves to various approaches to problem-solving. Thus, information sharing and collaboration among employees is the basis of creative work. Dougherty (1996, p.183) pinpoints that creative companies are characterised by communication in terms of
The subject of artistic and cultural works from this period should not be treated as an 'autonomous phenomenon'. In undertaking an intensive analysis of the themes and principles given attention in the most influential paintings of this era, a social history of Revolutionary art can be formed. In contrast to traditional art history, such an approach can help the viewer to understand these pieces as both products and, at times, producers of the Revolutionary milieu. Boime, Albert, The period in question incorporates what might be seen as three broad stages of the French Revolution. The first, 1784-89, in truth preceded the commencement of the Revolution. However, the significance of these years should not be underestimated, for it is during this time that the environment took shape into which the Revolution could explode in 1789. As such, debate rages among art historians as to whether the influential paintings of this period can be said to anticipate the principles that were to become those espoused by revolutionaries by the end of the 1780s. Republican imagery comes to the fore as the most crucial element in need of attention, arguably being used to uphold and promote the Bourbon monarchy or, by contrast, to indicate the need for an alternative form of governance. The main figures involved in this debate have been L.D. Ettlinger, Hugh Honour, Anita Brookner, Antoine Schnapper and Robert Herbert. A full discussion can be found in Roberts, Warren, The years 1789-92 represent the most optimistic phase of the French Revolution. The formation of a National Assembly incorporating the nobility, clergy and Third Estate, and the design of a constitution which handed sovereignty to the body of the people, inspired feelings of fraternity and patriotism amongst the nation's new political leaders. Painting similarly interpreted such ideals, reflecting the atmosphere of unity and optimism.
The final stage under scrutiny here represents something of a turning point in the history of the French Revolution. Growing suspicions about the King's support for the Constitution, confirmed by the flight attempt aborted at Varennes, coupled with a growing military crisis, led to the forced removal of the monarchy on 10 August. The subsequent creation of the National Convention marked the first step towards the period known as the Terror, during which many thousands were executed by guillotine for crimes against the nation. Intense fears of counter-revolution meant that these years became characterised by surveillance, suspicion and violence, all in the name of upholding the new French Republic. In response, painting was called upon to promote Revolutionary values with much greater intensity,
Its popularity reflected the widely accepted notion that, with the huge surge of international trade and foreign direct investment (FDI) through transnational corporations (TNCs) and the global financial system, we are moving toward a world economy qualitatively different from that of the past. Carnoy and his colleagues (1993) stress the geographical scale of economic interdependency and define the global economy as a world economy in which 'all aspects of the economy are integrated or interdependent on a global scale'. Castells (1996) further incorporates the temporal aspect by defining it as 'an economy that works as a unit in real time on a planetary basis.' This concept soon became a handy phrase widely used among opinion leaders with differing degrees of implication. Sometimes it was applied to refer to increased international economic interdependence, or, one step further, to mark certain fundamental changes in the economic order; but some went as far as to claim the irrelevance of national economies and domestic policies in the face of powerful global market forces. Such perspectives, later labelled 'globalism' by critics, were politically consequential in that they soon became widely cited to justify some less favoured governmental policies (deregulation) and corporate decisions (outsourcing and layoffs). Partly in response to the more radical views and their political consequences, various skeptical voices emerged, primarily from the perspective of Hirst and Thompson (1999:2) proposed the following argument challenging the global economy thesis: (1) The present highly internationalized economy is not unprecedented. (2) Genuinely transnational companies are rare, and most companies are based nationally. (3) FDI is highly concentrated among the advanced industrial economies, hence it does not produce a massive shift of investment and employment to the developing countries.
(4) Most international economic activities are concentrated in the triad of Europe, Japan and North America, and so are far from truly global. (5) Some nation states (especially the G3) still have the capacity 'to exert powerful governance pressure over markets.' As an alternative ideal type, they developed a model of 'inter- By doing this, they aimed to 'emphasize the possibilities of national
Thus, the consequences are similar to those in the UK. Although the disposable income of retirees is growing more slowly in France, they still form a financially strong group (Moreau 2002). Another fact that needs to be mentioned is the shortage of labour that will probably arise due to the widening gap between the working population and pensioners. This again may cause tremendous problems for organisations looking for staff. Both the UK and France have highly developed educational systems. The number of those who attend higher education has increased dramatically over recent decades (Whyte 2004, Whyte 2005). However, the education system in France has often been criticised In other words, the major emphasis is put on general education rather than vocational training. Furthermore, there is a lack of effort to provide courses that correspond with the needs of the market. Even though improvements have taken place, the French education system still prioritises formal academic teaching. This is also reflected in France's third rank in an international study looking at the number of people aged 20 to 24 who attend an educational institution (see Appendix 6). For the tourism industry this means a shortage of appropriate labour, as the example of the Rh Vocational training is also poor in the UK. Providing more apprenticeships is meant to improve the situation (Whyte 2005). According to the Hospitality Training Foundation (2005), 30% of vacancies in hotels and restaurants in the UK could not be filled due to skills shortages. This shows that the
In this case, the blog is an effective model for generating income. The article is a powerful online public relations tool, as it enables businesses to share knowledge with the world, obtain public respect and gain customers' recognition (Lica, 2006). Moreover, its efficiency is easily measured by the number of back links it generates. Unlike blogs, articles are more trustworthy and concrete, as they are always, or at least most often, accompanied by credible sources authenticating the facts. Blogs, on the other hand, are just various individuals' assumptions and notions about various things, which may or may not be authenticated. Considering these facts, the importance of both blogs and articles should not be ignored by companies engaged in online PR, since the people whose voices and opinions these are ultimately form the customers for the respective companies' products. Not surprisingly, the utilization of search engines was more common among smaller firms than larger companies, as the latter are more likely to pay search engines for higher placement (Asbill, 2007). Apple Vacations has been successful in using SEM (search engine marketing) to maximize its visibility. When it comes to searching for a vacation or trip online, Apple Vacations is usually found in the top few of the list - 5 Although this is the first method they used to advertise outside their traditional ads, their paid search program performs about 200% above the company's break-even ROI (Jobannes, 2006). The key factor is to collaborate with specialists in SEM and maximize the effort on designing word search categories specific to customers' needs. To capture customers' eyeballs, building up a trusted brand image is quite crucial when applying search engine tools. Another issue is that when people require some information, search engines can be time consuming, as those people may not be experienced net users and the information required may not be specific (Hamill, 1997).
Due importance needs to be given to getting the keywords associated with the product or the company right. Pay per click is undoubtedly one of the most cost-effective and measurable ways for companies to get people visiting their web sites (Guava, 2005). Most leading search engines sell sponsored listings to companies, which allow these firms to create ads and choose related keywords. When people search for information using one of these keywords, the company's web site and ads may appear in the sponsored results. At the same time, in most cases, users will read the brief summary of the company's products or services that appears with the search result. This is a simple way of acquiring customers and increasing the chance of an individual making an enquiry for the services or a purchase (Guava, 2005). Companies may cooperate and
Rubery (2002:515) points to five areas of improvement - programs which aim at diversifying the potential employment forms or training schemes which unemployed women or women returners enter (introduced in countries like Portugal, France and Sweden), incentives to influence the initial choices of women with regard to education, career and further training (in Finland, France, Austria, Portugal and Sweden), schemes to increase the employment of women in the IT sector (in countries like Belgium, Greece, Sweden and Germany), positive action programs for women employees in the public sector (in Ireland, Luxembourg and Germany), and efforts to introduce women to sectors where their participation is underrepresented. The wide geographical spread, as well as the quick take-up rate, of these desegregation initiatives clearly demonstrates the commitment of EU countries to introduce measures to include more women in their workforce, in line with the European Employment Strategy. The measures aimed at facilitating entry into the labour market and equal working opportunities for women have proved to be quite successful, in both their qualitative aspect (more diverse training and employment opportunities, including part-time and flexible working patterns, childcare provisions and measures for achieving work-life balance) and their quantitative aspect (employment rates in pursuit of the 2010 targets). The development of the former aspect is highlighted by Rubery et al. (2003:479) who claim that 'with respect to the equal opportunities pillar, progress has been most notable in tackling As far as the quantitative aspect is concerned, the 'Employment in Europe 2005' report confirms that the improvement in overall employment rates in Europe in 2004 has been very much driven by the steady rise in employment among female workers (0.7% on average for the member countries of the EU).
It is visible that, with regard to raising the activity rates and employment opportunities of women employees, the measures of the European Employment Strategy have started to yield results which, if not yet reaching the targets of the 2010 Lisbon Strategy, show the commitment of the parties concerned to ensure the adequate participation of female workers in the EU workforce. Initiatives to promote female employability have been long-standing and, as demonstrated, diverse and far-reaching,
His power over her is reflected in the poem, and once the reader becomes aware of this, Browning reveals how the Duke finally rids himself of a wife altogether, This sinister line reinforces the Duke's attitude that if he cannot own her, then no man shall, though what he does not understand is that no human life can ever be 'owned'. There is such a clear lack of spontaneity or passion in his actions; the fact that he merely gave a command and the deed was done makes the crime even more cold-hearted. It also demonstrates the Duke's corrupt use of his power; here perhaps Browning is condemning such corruption amongst the courts. This is also the reason that the reader disregards the Duke's point of view, and supports and sympathises with the Duchess, acknowledging her innocence. The last Duchess is a clear example of how a woman was perceived as an object or an item to be possessed. She is only ever mentioned through her portrait; she is never given a name or any kind of identity. There is certainly a clear consequence of this treatment, though it only appears to affect her; the Duke seems to be void of any consequence or moral conscience, and feels no remorse for his actions. Porphyria's Lover is a similar example. Just as in My Last Duchess, Browning uses the structure of the form to convey the narrator's feelings and thoughts. The lines contain 8 syllables, with a repeated rhyme scheme of A, B, A, B, B. However, the extra line seems to turn what could be a straightforward rhythm into a slightly out-of-sync one, which creates the tone of someone trying to convince or persuade the audience or
Another important element in the play's exploration of truth is the mechanicals' play within a play, which is for Schlegel 'an acute commentary on the nature of dramatic illusion' (qtd in Leon Guilhamet). The humorous deaths of Pyramus and Thisbe counter the effects of the more serious threats in the play, which could have culminated in a tragedy had Shakespeare's intentions been different. The play within a play could be seen as a parody of contemporary tragedy, with its high-born characters and attempts to conform to classical form. Instead, however, the emotions of tragedy are banished from the play through ridicule. In opposition to the stillness and concord of death we have the discord of human experience and laughter. Shakespeare implies that life itself is inherently contradictory and incomplete, as humans are fragile, non-absolute creatures, but that this state of ignorance is something to be celebrated rather than struggled against. The reference to death seen within the mechanicals' play could also be a further challenge to Platonic metaphysics. Plato believed that in death a person's immortal soul travelled to the realm of forms, in which it gained absolute knowledge. In countering the sombre tone of tragedy with laughter, Shakespeare asserts the superiority of incomplete human existence over Platonic reverence for death and the subsequent Classical love of tragedy. The play's use of dreams also serves an important function in its exploration of truth. All characters share in a transcendent dream in which they cannot distinguish between waking and sleeping. Like dreams, plays can be seen as having a surface level, behind which lurks their latent meaning. Demetrius questions 'Are you sure/That we are awake?/It seems to me/That yet we sleep, we dream'. The irony of this phenomenon in the play is that there are very few actual dreams; the characters rather reject the validity of sensory experience by attributing it to dreams.
This skepticism is reflective of the views of another Renaissance figure, Descartes, and his project of doubt, an attempt to find truth. However, his groundbreaking discovery was that there is no answer to a skeptical challenge as to the existence of the external world. All that Descartes' thinker can be certain of is his own existence, shifting the focus of truth from Classical absolutism to the human-centred Renaissance view. This therefore gives the poet's reality more validity, as poets create through the centre of their own thoughts, the only thing which cannot be doubted. This link between dreams and truth is further displayed in Bottom's dream, of which it is 'past the wit of man to say what dream it was'. The dream is reminiscent of St. Paul's promise, as Bottom recounts comically 'That eye of man hath not heard, the ear of man hath not seen'. The", "label": 1 }, { "main_document": "electrons having the same value of m The 3d unpaired electrons \"attract\" all the other electrons in the atom with matching spin. This leaves a greater number of electrons with the opposite m If the s-electrons, which have electron density centred on the nucleus (see This produces a large magnetic field at the nucleus, H The B This constitutes a circular electric current generating a magnetic field H Above, and The triangular brackets indicate integration over the whole atom. Finally, the unpaired electrons behave as small bar magnets, due to their spin, generating a magnetic field, H In this equation, S is the spin angular momentum vector where and [4] This experiment made use of a 10mCi These rays were used to create absorption spectra for Magnetite [Fe The Cobalt source was secured inside lead shielding, with a collimator to direct the gamma radiation towards the absorber and detector. It was also mounted upon an oscillating loudspeaker coil, to produce the Doppler broadening.
This vibrating device was connected to a control system with which the velocity Linked in with this system was an oscilloscope, so that the pulse characteristics and stability could be monitored and altered in order to ensure that the whole spectrum was detected. All of this is shown schematically in The absorbers Magnetite and Barium Ferrite were prepared specifically by hand, as thin discs of powder held together under high pressure. To obtain an ideal spectrum that let just enough counts through to be distinguishable, we wanted the sample to be of the appropriate thickness so that the recorded intensity, By The absorption coefficients for Magnetite and Barium Ferrite were determined using the formula This describes the mass absorption coefficient Once this was determined by referencing From this, the necessary thickness of the sample This would correspond to a mass of Both the Magnetite and Barium Ferrite samples used a mass of powder of the order of 10 Boron Nitrate was added to the powder so that it would hold together more easily when compacted using a 10 ton hydraulic press. The Boron Nitrate would not introduce a significant absorbing power into the disc, as it possesses a much smaller atomic mass than the heavier iron-based compounds used in the experiment. Once prepared, the sample was placed between the source and the detector, which was of the proportional counter type. Counts were registered in 1024 channels that covered the entire range of the chosen velocity scale. The counts and velocity were linked via the multi-channel store, and the information was finally displayed on the monitor as a spectrum of counts vs. channel number. Channel number could later be calibrated to velocity values, and finally energy values, by reference to the Doppler formulae.
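The channel-to-velocity-to-energy calibration described here rests on the first-order Doppler formula dE = E0*v/c. The sketch below is illustrative only: the 14.41 keV Co-57/Fe-57 Mossbauer line is assumed, and the linear sweep from -10 to +10 mm/s across 1024 channels is a hypothetical calibration, not this apparatus's actual settings:

```python
C = 2.998e8          # speed of light, m/s
E0 = 14.41e3         # assumed 14.41 keV Mossbauer gamma energy, eV

def doppler_shift_ev(v_mm_per_s):
    """First-order Doppler energy shift E0*v/c for a source moving at v (mm/s)."""
    return E0 * (v_mm_per_s * 1e-3) / C

def channel_to_velocity(channel, v_max=10.0, n_channels=1024):
    """Map a channel number onto a hypothetical linear velocity sweep
    running from -v_max to +v_max mm/s across the channels."""
    return -v_max + 2.0 * v_max * channel / (n_channels - 1)
```

On this scale a source velocity of 1 mm/s corresponds to an energy shift of roughly 5e-8 eV, which is the order of magnitude on which the hyperfine splittings are resolved.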
The calibration was performed by obtaining a spectrum for The spectra obtained, and stored in terms of number of counts per channel, were analysed using the Normos M This allows for retrieval of accurate data relating to the sample as follows. Estimated values of relevant parameters (most importantly source velocity (V), Magnetic", "label": 1 }, { "main_document": "This study investigates the attitudes of young people towards mobile phone use, before and right after seeing an advert identifying mobile phones as a threat to traffic safety. Participants. 128 participants were recruited from Leamington High and Leamington Scruffy. The participants were randomly allocated into an intervention and a control group. The groups were divided equally by school, age (14-16 and 17-18) and sex, producing a total of 16 groups with 8 participants in each group. Procedure. Each group was tested in two sessions, six weeks apart. In session 1 the participants completed a questionnaire that assessed their attitudes to mobile phone use. In session 2 the participants in the intervention group were shown the advert and then completed the questionnaire. In the control group the participants completed the questionnaire and were then shown the advert. Three cases, two containing missing values and one containing an error value, were removed (in conditions 1, 2 and 10), leaving 125 cases. The scores obtained at the first session were subtracted from the scores obtained at the second to yield difference scores. The means and standard deviations for each condition (outliers not included) are shown in Table 1. ANOVA revealed a significant main effect of group, F(1,109) = 65.17; p < .001. Removing three outliers (in conditions 10 and 16) generated additional significant effects of sex, F(1,106) = 7.32; p < .008, and an interaction effect of age and group, F(1,106) = 6.709; p < .011. Data screening revealed that the distributions had very different variances (e.g.
3.238 in condition 1 and 0.214 in condition 7), and that some were skewed (conditions 2, 7 and 12). However, square-root transformation of the difference scores did not remove the significant effect of group or sex; nor did Log10 transformation, nor did squaring the scores. Analysis of the residuals revealed no systematic patterns. The present study aimed to investigate the effect, on attitudes towards mobile phones, of an advert identifying mobile phones as a cause of traffic accidents. It is surprising to find that the advert seems to have produced significantly more positive attitudes overall in almost all the control groups. These scores may be the result of an inverse Hawthorne effect; the purpose of the advert cannot have escaped anyone in the intervention groups, arguably to the extent that the obviousness of the attempt at psychological manipulation may have annoyed several participants. This may have caused them to give more positive answers. It also seems that female participants were more prone to such acts of protest. To exclude possible effects of protest, and to allow for generalizations outside of the experimental setting, future studies should perhaps make use of adverts and questionnaires addressing a variety of topics, or in other ways try to conceal the purpose of the study. (Not knowing what the fictional advert looked like I can't know, but) it is possible that the experienced appeal of the mobile phones and mobile phone users (perhaps even the cars) featured in the advert had an effect worth mentioning on the attitudes of the participants. Future studies should make sure the articles and", "label": 0 }, { "main_document": "processes to try to bring about change. One of the most important changes was the move to appoint Vandevelde as chairman from outside the organisation. This represented a move away from an internal focus and opened up the culture to change.
Another important symbolic change was a physical aspect which all members of the head office could identify with. This was the move out of the Baker Street headquarters, which represented "a static environment with many closed offices and long corridors, redolent of a company that has not fully embraced modern management techniques and working methods." The relocation was to a new building in Paddington and symbolised the company's intent to reform its culture. The most dramatic of the symbolic processes was the complete overhaul of the M&S brand, which took place in March 2000. This was a change that not only affected the internal culture at M&S but also communicated a change to customers. The changes included an alteration to the image of its stores, uniforms, packaging and labelling; they also stopped using their famous green carrier bags and relegated the St. Michael brand to the inside of clothing labels. Vandevelde commented on these changes as being "evolutionary rather than revolutionary." The changes were a natural progression with the changing market environment. They did involve a transformation of the organisation but it was not a sudden change and it did not have a sudden impact. The final 'lever' which M&S used to bring about change is the This is the way in which M&S went about delivering change and includes: choosing the right timing, managing job losses and delayering, and using visible short-term wins to provide evidence of change. One of the problems which M&S encountered when withdrawing from the overseas market was the impact of job losses in France. The impact of withdrawing from this market was under-estimated and caused the suspension of restructuring plans as well as the generation of negative headlines and employee demonstrations. This could also be seen as a symbolic process to represent the difficulties M&S were facing in their efforts to bring about changes in the organisation's culture. 
To counteract the negative aspects of the change process, the top managers were interested in achieving visible short-term wins. Strategy is often about long-term direction and major restructuring decisions, but short-term wins are important to motivate members of the organisation and to provide hope for the future and belief in the changes. Vandevelde initiated some improvements to the customer service (an extra 4,000 staff were placed on the shop floor) and image changes to the stores and brands "to create a mood which 'looks to the future with anticipation of creating change'." These initiatives may not have been significant aspects of the new strategy but were visible indicators of a new approach. They were small changes which were put in place to stimulate commitment to bigger and more significant changes in the culture. At the end of the article it states that: "Commentators felt that although quick results for sales and profits had been achieved, underlying structural", "label": 1 }, { "main_document": "order may influence how parents treat their children, and this in turn tends to cause differences in personality and social behaviour (Buckley, 1998). After all, parental style has been strongly associated with a child's characteristics (Durkin, 1995). An example of this is provided by Carlson and Kangun (1988), who suggest that birth order affects socialization because eldest children are socialized by adults while youngest children receive less attention from their parents and are socialized by their older siblings instead. Thus, firstborns are more achievement oriented due to increased adult influences while lastborns are more independent of authority because they are peer-socialized. Another argument is that eldest children are more controlling in later life because parents tend to view them as stronger and more capable and therefore allow them more control in relation to their younger siblings (Buckley, 1998). 
Although many reports propose that development is affected by socialisation differences due to birth order, only a handful of studies have investigated parental behaviour towards children of different ordinal positions (Rothbart, 1972). Some researchers such as Putter (2003), who explored the relationship between birth order and depressive symptoms in early adolescence, have utilised parenting style as a mediating factor but most others like Nadler (2000) have only looked at birth order and either personality or parenting in isolation. This study seeks to distinguish between the conflicting conclusions that have been drawn from preceding research. In addition, it goes beyond merely establishing a correlation between birth order and personality by exploring parental treatment as a causal link between these two variables. The experimental hypothesis in this case is that birth order has an effect on personality because parents treat their children differently according to their ordinal position in the family. Unlike previous studies such as that of Sulloway (1996), which investigated a combination of personality characteristics, this research concentrates solely on extroversion/ introversion. The degree of extroversion people display is assumed to be influenced by their upbringing. It is also assumed that the effect of parenting style on personality is stronger than other environmental factors that may render the results insignificant. Based on research by Singh (1985) and Segal (1978), it is expected that youngest siblings will be more extroverted than eldest siblings. It is also predicted that eldest siblings will be more overprotected and perceive less emotional warmth from their parents. 83 university students (40 men and 43 women) were recruited to participate in the study. The sampling method used was an opportunity sample of volunteers. 34 participants were eldest siblings, 30 were youngest siblings and 19 participants were neither eldest nor youngest siblings. 
All participants were unaware of the hypothesis of the experiment in order to preserve the validity of the results. A structured questionnaire comprising three sections was used (see Appendix A). These sections were: the s-EMBU Parental Treatment test (Arrindell et al, 1999), the Big Five personality test (Goldberg, 1992), and birth order assessment questions. The s-EMBU Parental Treatment test consisted of 23 questions with three scales (Rejection, Emotional Warmth and Overprotection) to measure perceived parental rearing behaviour. Of the three", "label": 1 }, { "main_document": ""Most of the matter in the universe is dark, so what makes scientists think that it exists and what could it be?" The mass of a galaxy can be predicted from its size and the amount of light that it gives off; however, astronomers' measurements of the mass of galaxies, found by looking at their speed of rotation, show that the mass required for these galaxies to rotate at their current speed would be ten to one hundred times more than we can see. Since this mass is not visible it is called dark matter. It is believed that "stars make up less than 1% of the universe's mass; all the loose gas and other forms of matter, less than 5%" (1), and the rest is made up of dark matter and dark energy, 25% of which is dark matter. However, nobody knows what dark matter actually is, even though there is much evidence for its existence. This is because it neither reacts with any known form of matter nor emits any electromagnetic radiation; in fact the effect of dark matter can only be seen by its gravitational pull on visible matter. There are two different types of dark matter, hot dark matter and cold dark matter. Hot dark matter travels close to the speed of light and makes up only 0.3% of the total mass of the universe; it is likely to be made up of neutrinos. 
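The mass comparison described above can be sketched numerically. For a star on a circular orbit, v^2 = G*M/r, so the mass enclosed by the orbit follows from the observed speed. The galaxy values below are illustrative assumptions, not measured data; only the "ten to one hundred times" discrepancy from the text is being reproduced.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass(v, r):
    """Mass required inside radius r to hold a circular orbit of speed v
    (rearranging v^2 = G*M/r)."""
    return v ** 2 * r / G

# Illustrative, made-up galaxy: luminous mass and orbit values are assumptions.
luminous_mass = 1.0e40   # kg, mass estimated from starlight
r = 5.0e20               # m, radius of an outer star's orbit
v_observed = 2.2e5       # m/s, roughly flat observed rotation speed

m_required = enclosed_mass(v_observed, r)
print(f"mass required: {m_required:.2e} kg")
print(f"ratio to luminous mass: {m_required / luminous_mass:.0f}x")
```

With these assumed numbers the required mass comes out a few tens of times the luminous mass, in the range the text quotes.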
Cold dark matter on the other hand is much more sluggish and makes up anything up to 25% of the total mass of the universe; however, nobody knows for certain what cold dark matter is made of. In this report I will be concentrating on cold dark matter. I will be looking at the numerous types of evidence for the existence of cold dark matter and what it could be. The existence of dark matter was first considered with the discovery that the stars in galaxies are moving faster than would be expected from the gravitational pull of the other stars around them. Therefore scientists deduced that there must be some kind of matter between the stars that does not react with anything or emit electromagnetic radiation, but does provide the mass required to supply the gravitational pull needed for the stars to travel at their observed speeds. It is this gravitational pull of dark matter that provides many of the other sources of evidence for it. It was then realised that the visible mass of the universe was much less than the mass actually required, and the idea of dark matter really took off. At the beginning of the universe, before the period of accelerated expansion, the whole universe was uniform with no clusters of mass. However, after the period of accelerated expansion, local irregularities had occurred due to the expansion rate being greater than the speed of light, meaning that matter at one end of the universe would not be able to interact with matter at the other end in order to stay within
He accuses Stanovich and West (2000) of not applying their theories to the correct type of task. He proposes that if the tasks were everyday, real world tasks then people would be less likely to accept the purely logical approach. Intelligence is a continuum. It is not possible to separate it into two groups, one of high intelligence and one of low intelligence. Similarly, perhaps it is not possible to separate systems of reasoning into two distinct groups as Sloman (1996) has done. Many aspects of the two systems that Sloman (1996) has proposed may be considered to lie on a continuum (Newstead, 2000). The example given earlier, of change giving in a shop, highlights this. Here the use of the associative system provided an automatic response, whereas the use of the rule-based system required the shop assistant to consciously work through the problem. In real life, for actions to become automatic, a person must gradually learn the action and improve in various stages. An example that Newstead (2000) provides is that of driving a car. When first beginning to learn a person has to consciously think through all of their actions, and only after practice and gradual improvement do the actions become automatic. An entirely automatic response and an entirely logical, conscious response therefore are two ends of a continuum and not two distinct entities. This is also the case for the speed of processing. The associative system is considered to be fast and the rule-based system slow, but speed is a continuum. Are there two systems of reasoning? Sloman (1996) has provided a very convincing case for the existence of two systems and is not alone in trying to derive two distinct systems as discussed earlier. The evidence suggests that problem solving is not approached in the same way by everyone, implying that there cannot be one system available. 
Evidence also suggests that there are many factors influencing what system of reasoning is used in any situation, such as familiarity with the task, intelligence and how the task is interpreted. It is however difficult to distinguish two distinct systems of reasoning because many of their features are continuous. This leads to the conclusion that many systems of reasoning exist, that they are employed where the person performing the task views them as appropriate, and that they vary in degree rather than in kind.", "label": 1 }, { "main_document": "system and reduce the accuracy of the result. Even when the system is balanced, changes in wire resistance with temperature will also result in errors. A more precise apparatus can be chosen to measure the lengths of the cantilever rig, since the sensitivity of the mechanical system depends on the geometrical dimensions of the rig. In particular, t in the denominator should be recorded carefully, since it appears as an inverse square and any error in it gives a big difference in the result. The system is configured to measure a low-level force since the geometrical dimensions are small. The dimensions of the cantilever rig give a small applied force range of only a few newtons. For the strain gauge, the change in resistance is expected to vary only with the strain produced as forces are added to the system. However, the resistance responds not only to strain but also to temperature changes. The half-bridge circuit configuration used in this experiment, which uses two active strain gauges, can reduce the temperature effect; an even better solution is to use the full-bridge circuit configuration, so that the temperature effect is minimised. For the amplifier used in the experiment, the gain is chosen as 1000, since a higher gain would produce a considerable noise level, while a lower gain would probably not amplify the output to a level that can be observed. A low-pass filter is used to cut off the high-frequency noise of the system.
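The half-bridge behaviour described above can be sketched with the standard first-order relation V_out = V_ex * GF * strain / 2 for two active gauges. The gauge factor, excitation voltage and strain below are assumed illustrative values, not figures from the experiment; only the gain of 1000 comes from the report.

```python
def half_bridge_output(strain, gauge_factor=2.0, v_excitation=5.0):
    """Half bridge with two active gauges (one in tension, one in compression):
    V_out = V_ex * GF * strain / 2. Temperature changes common to both gauges
    cancel to first order, which is why this configuration reduces the error."""
    return v_excitation * gauge_factor * strain / 2.0

# Illustrative values: gauge factor, excitation voltage and strain are assumptions.
strain = 200e-6                       # 200 microstrain
v_bridge = half_bridge_output(strain)
v_out = v_bridge * 1000               # amplifier gain of 1000, as in the report
print(f"bridge output: {v_bridge * 1e3:.2f} mV, after the gain of 1000: {v_out:.2f} V")
```

The millivolt-level bridge output makes clear why an amplifier of this gain is needed before the signal can be observed.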
The output voltage and applied force give an approximately linear relationship in this experiment, whereas in theory the result should be an exact straight line. The errors in the result are due to many factors, such as temperature effects and measurement mistakes. Those errors can be reduced manually but they cannot be eliminated.", "label": 0 }, { "main_document": "meet the target as closely as possible. Independence has improved the credibility and transparency of monetary policy. If people have rational expectations and they believe the government inflation target, the disinflation policy will have a lower cost according to the Phillips curve. On September 9, the MPC revealed its target policy rate of 4.75%. After having hiked three times in four months in an aggressive attempt to stave off inflation, the MPC decided to hold rates constant due to concerns over the UK's rising Producer Price Index. In the IS-LM model (see dig 7), the higher interest rate will cause additional demand for money, which will shift the LM curve upward and hence lower prices in the goods market. The statistical evidence suggests the UK inflation rate has been among the lowest in the EU since the start of 2000, and the UK has enjoyed the longest period of sustained low inflation since the 1960s. The balance of payments is an accounting record of all transactions made by a country. The most frequently used component is the merchandise trade balance, which is defined as the difference between a nation's exports and imports. Based on the data provided by the national statistics, the UK's deficit on trade in goods and services worsened in October 2004 to stand at In an open economy, the government can affect the trade balance and the exchange rate. In the case of fiscal expansion, such as growing public investment and government subsidies, the corresponding upward-shifting IS curve (dig 7) will engender an increase in aggregate demand at a higher interest rate R1. In the short term, the current account may be worse off. 
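The import channel behind that short-term deterioration can be sketched to first order: imports rise with income by the marginal propensity to import. The figures below are assumed for illustration only, not UK data.

```python
def trade_balance_change(delta_y, mpi, delta_exports=0.0):
    """First-order change in the trade balance after a demand expansion:
    imports rise by the marginal propensity to import (mpi) times the change
    in income, so the balance moves by delta_exports - mpi * delta_y."""
    return delta_exports - mpi * delta_y

# Illustrative numbers only (assumed): a 10bn fiscal expansion of aggregate
# demand with a marginal propensity to import of 0.3.
print(trade_balance_change(delta_y=10.0, mpi=0.3))  # trade balance worsens by 3bn
```

A larger marginal propensity to import makes the current-account deterioration correspondingly larger, which is the dependence the text describes.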
This is because, firstly, the demand for imports is proportional to aggregate demand, where the extent depends on the marginal propensity to import, and secondly, the rising interest rate pushes up the exchange rate, making exports more expensive. This effect of fiscal policy was central to discussions of the "twin deficits" (budget and trade) theory. Nevertheless, if the competitiveness of UK exports can be enhanced as a result of developing technology and labour productivity in the long term, it should give a boost to export demand due to the cheaper cost and the better quality. Besides, the government can also raise tariff barriers against foreign goods, but sometimes this might not be feasible, e.g. if it is against WTO regulations. For monetary policy, the central bank may consider depreciating the domestic currency by lowering the interest rate. As the LM curve shifts downward (dig 8), the lower exchange rate makes imports dearer and exports cheaper. But it doesn't necessarily reduce the trade deficit, because the expanding aggregate demand also implies more demand for imports. So the effectiveness of the policy is ambiguous: it depends on the marginal propensity to import and the price elasticity of exports. In comparison to monetary policy, the time required to approve and deliver discretionary fiscal policy makes it less effective as a tool for stabilisation. If conditions change before the policy's impact is felt, then it may end up destabilising the economy. Obviously, the", "label": 0 }, { "main_document": "circle of the wheel is: = 192 The centre distance is: According to equation: = And Therefore, The simplified bending stress equation is written in the form Where Y is the Lewis Form Factor. Table 1 shows the Lewis Form Factor at 20° From the table, Y is 0.29 in this specimen manual calculation. 
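The simplified Lewis equation named above, sigma = F_t / (b * m * Y), can be evaluated directly. Y = 0.29 is the value read from Table 1; the tangential force, face width and module below are figures quoted elsewhere in the report, combined here purely for illustration rather than reproducing the report's own worked result.

```python
def lewis_bending_stress(f_t, face_width, module, lewis_factor):
    """Simplified Lewis equation for gear-tooth bending stress (metric form):
    sigma = F_t / (b * m * Y), with F_t in N and b, m in mm, giving N/mm^2."""
    return f_t / (face_width * module * lewis_factor)

# Y = 0.29 from Table 1; the other inputs are taken from figures quoted
# elsewhere in the report and assembled here only as an illustration.
sigma = lewis_bending_stress(f_t=397.9, face_width=27.0, module=2.0, lewis_factor=0.29)
print(f"bending stress: {sigma:.1f} N/mm^2")
```

Because Y sits in the denominator, a tooth form with a larger Lewis factor carries the same load at a lower bending stress.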
And practical values for face width ( Assuming Therefore, bending stress is The modified form of contact stress is shown below: Where velocity factor =6/(6+4.5) =0.57 And In this calculation, E is assumed to be the Young's modulus of grey cast iron, 105 GPa, so Then, Design using the Gears Program. The input to the Gears Program was set up using the design specification and the manual calculation, namely the values of power, pinion speed, gear ratio, minimum teeth and duty. The data put into the Gears Program were similar to those of the manual calculation in the last section; the result is shown in Figure 3. Since the face width in the manual calculation was assumed as 27 Such as pitch circle diameter, outside diameter and root diameter. The face width from the Gears Program was 13 Therefore, the face width was too low; material quality, module or centre distance had to be decreased to satisfy the face width. After trying different modules and gear materials many times, a satisfactory result was obtained. Figure 4 shows the suitable design. The module was decreased to 2, and the material chosen was mild steel instead of cast iron. The three shafts are assumed to be made of mild steel, carbon steel and alloy steel. The lengths of the shafts are fixed by the face width of the gears and appropriate clearances, and the severe constraint of gearbox size is considered as well. The shafts are required to transmit the torque, as well as to withstand the bending stresses due to the gear teeth loads and bearing reactions. There is no axial loading on the shaft because the gears are spur gears. Assuming the length of the shaft within the gearbox housing is 130 Three shafts had to be analysed, and two sets of bending moment diagrams for horizontal and vertical planes were considered. 
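The shaft sizing step that follows the bending-moment diagrams can be sketched with the maximum-shear-stress theory for combined bending and torsion, d^3 = 16*sqrt(M^2 + T^2)/(pi * tau_allow). The bending moment, torque and allowable shear stress below are assumed illustrative inputs, not the report's actual loads.

```python
import math

def shaft_diameter(m_bending, torque, tau_allow):
    """Minimum shaft diameter under combined bending and torsion using the
    maximum-shear-stress theory: d^3 = 16 * sqrt(M^2 + T^2) / (pi * tau_allow)."""
    return (16.0 * math.sqrt(m_bending ** 2 + torque ** 2)
            / (math.pi * tau_allow)) ** (1.0 / 3.0)

# Illustrative inputs (assumed): bending moment and torque in N*mm,
# allowable shear stress in N/mm^2.
d = shaft_diameter(m_bending=8000.0, torque=12000.0, tau_allow=40.0)
print(f"minimum diameter: {d:.1f} mm; round up to the next standard size")
```

The computed minimum is then rounded up to a standard diameter, as the report does when it selects 10 mm for the input shaft.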
From the result of the Gears Program, the tangential force for the input shaft is 397.9N and the radial force is 144.8N, which denotes that: F And the torque is equal to the tangential force times the pinion radius, The reaction force R Therefore, the bending moment and shear force diagrams for the two planes are described as below: From those diagrams, the maximum bending moments of the two planes were obtained, and = = According to the maximum-shear-stress theory: Then We can choose 10mm. From the result of the Gears Program, the tangential force for the intermediate shaft is 943.1N and the radial force is 343.3N, which denotes that: F3=943.1N, and F4=343.3N And the torque is equal to the tangential force times the wheel radius, The reaction forces R5, R6, R7 and R8 can be calculated respectively, with respect to each fixed end. Therefore, the bending moment and shear force diagrams for the two planes are described", "label": 0 }, { "main_document": "Usually the organism remains in natural reservoirs involving arthropods, lagomorphs and rodents. When humans disrupt these lifecycles they contract Fewer than ten organisms are required for tularemia to be contracted, which has generated significant scientific interest, especially for its potential use as a biological weapon. This essay broadly describes the current information available concerning The history of the discovery of The essay concludes by summarising why Soken observed a connection between the consumption of hares and the onset of a condition characterised by lymphadenopathy. In 1911 an outbreak of a plague-like disease occurred in Tulare, California, USA. McCoy investigated this outbreak and failed to isolate the causative organism of plague on agar, which was initially believed to be the culprit. McCoy concluded that a new bacterium had been discovered and it was named 'Bacterium tularense' in 1912. 
Francis and Mayne (1921) determined that 'Bacterium tularense' was the origin of hare-fever, deerfly fever and the rodent plague-like disease seen in Tulare. In 1947 Dorofeev determined that 'Bacterium tularense' belonged to a new genus of bacteria, which has since been confirmed by 16S rRNA sequencing; he named the genus Four subspecies of It is estimated to cause 70% of tularemia cases in humans in the US. After numerous tularemia outbreaks in Europe during the Second World War, Japan, America and the Soviet Union observed the damage this organism could inflict upon enemy troops and began to study it as a possible biological weapon. America and the Soviet Union began to stockpile biological weapons including those capable of disseminating Thankfully, America destroyed its stockpiles in 1973 but the drive to research Such research hopes to better prepare the military, medical personnel and civilians if the threat of a biological agent arises. It is a small Gram-negative coccobacillus, as illustrated by Figure 1, the morphology of This pleomorphic organism is non-motile, is surrounded by a lipid capsule, and is an intracellular pathogen. It is a saprophyte, meaning that it is able to utilise decaying organic matter and survive most conditions, including low temperatures, moist soil, water and even straw. The first lifecycle is by far the most common in causing infection in humans. The tick-lagomorph cycle predominates in North America. The incidence of infection peaks in June and November, when tick populations increase and when humans come into contact with lagomorphs and their ectoparasites respectively. Figure 3, the tick-lagomorph cycle, demonstrates how humans can easily disrupt this lifecycle and become infected with The second lifecycle involves the ingestion of contaminated water or a bite from an infected mosquito. This lifecycle is predominantly found in Europe and is known to involve rodents more so than lagomorphs. 
Epidemics of tularemia are associated with this lifecycle of Figure 4, the rodent-mosquito cycle, demonstrates how humans can contract this disease. It is suspected that no primary reservoir exists, as 100 species of mammals or more, 50 species of arthropod and many amphibian and fish species have been shown by Burroughs It is evident that protozoa are found in water and, as these organisms are similar to macrophages, it has been suggested", "label": 1 }, { "main_document": "The difference equation for the filter is found by The term "Z" refers to a delayed past value. Hence Z raised to the -1 power refers to one past value, Z raised to the -2 refers to two past values, etc. The denominator of the Z transform refers to feedback paths. The numerator refers to feed-forward paths. Feed-forward paths are positive; feedback paths are always negative. For FIR filters, all the feedback terms in the denominator are zero. Then The block diagram implementation of this filter Besides the Butterworth, there are several other filters, such as the Chebyshev, elliptic and Bessel. Compared with them, the Butterworth filter has a slower roll-off, and thus will require a higher order to implement a particular stopband specification. However, the Butterworth is the only filter that maintains this same shape for higher orders (but with a steeper decline in the stopband), whereas other varieties of filters have different shapes at higher orders. 
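The FIR case described above, where every z^-1 is one sample of delay and there are no feedback terms, can be sketched as a direct-form difference equation. The coefficients below are an illustrative smoothing choice, not the report's actual filter design.

```python
def fir_filter(b, x):
    """Direct-form FIR filter: y[n] = sum_k b[k] * x[n-k]. Each z^-1 term in
    H(z) is one sample of delay; with all denominator (feedback) terms zero,
    the output depends on past inputs only."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        y.append(acc)
    return y

# A 3-tap smoothing filter; coefficients are illustrative only.
print(fir_filter([0.25, 0.5, 0.25], [4.0, 4.0, 4.0, 4.0]))  # [1.0, 3.0, 4.0, 4.0]
```

The start-up transient in the first two samples shows the delay line filling; once it is full, the constant input passes through at full amplitude because the coefficients sum to one.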
Moreover, the Butterworth filter will have a more linear phase response in the passband than the Chebyshev and elliptic filters. There are several methods for mapping analog filters to digital filters, such as the impulse-invariant, step-invariant and bilinear transformation methods. None of the techniques is perfect, but in general the bilinear transformation maps the entire frequency response of the desired filter, the impulse-invariant transformation preserves the impulse time response, and the step-invariant preserves the step time response. So, where frequency performance is required, such as filtering a variable in a dynamic, frequency-sensitive environment, the best choice is the bilinear transformation. This is the BZT method we have been taught and which I used in designing the filter. The discrete sampling of digital filters produces aliasing for frequencies at or exceeding half the sample rate. In the case of the bilinear transformation, the entire continuous frequency spectrum is mapped to the discrete unit circle. This produces frequency warping errors throughout the digital frequency response. This error shows up as an incorrect cutoff frequency for low-pass and high-pass filters. It is generally a design-critical practice to sample at least twice as fast as the highest frequency, ( With pre-warping techniques, ( In my filter design, I also used the pre-warping method to modify the analog cutoff frequency to the digital cutoff frequency. A low-pass filter passes relatively low-frequency components in the signal but stops the high-frequency components. The so-called cutoff frequency divides the pass band and the stop band. In other words, frequency components higher than the cutoff frequency will be stopped by a low-pass filter. This type of
The strategies which have shaped this process have long been regarded as capitalist, and as biased in terms of gender and class. This paper is primarily concerned with the interrelationship between gender and globalisation. It will be argued that gender relations provide the base on which the global economy has flourished. Here, emphasis will be placed on the notion that gender has not only been shaped by, but also shapes, our global world. Rather than accepting unequal gender relations as an outcome of globalisation, we will show how it is these very relations which characterise its project. In this way, the paper adopts Connell's (1987 cited in Bayes Thus, after briefly tracing how concepts of gender and globalisation have been reviewed, the focus of this paper will then turn to examining four main issues. Firstly, in what ways has the flexibilisation of the labour force shaped the project of globalisation? We must bear in mind that for many this process does not always support and benefit women. This is true in both the initial implementation stage as well as the final outcome stage. Secondly, how have structural adjustment programmes played a part in this process? Within both of these contexts one must consider the ways in which globalisation and capitalism have resulted in enmeshing women further into patriarchal structures. Thirdly, what has the impact of transnational migration been on globalisation? In this it is crucial to understand how gender identity has shaped these movements. We will evaluate how the nature of domestic labour has provided the base for the new global economy. Lastly, are we now in an era of 'global imperialism'? How has gender shaped this terminology? Here emphasis will be placed on the importance of industries such as sex tourism and the increasing popularity of the bride trade. Consequently we will evaluate whether there has been a globalisation from below which has enabled women to resist the forces discussed above, and how effective this movement has been. 
All of this must be well thought out before we can coherently understand the role gender plays in the project of globalisation. As Moghadam (2006) aptly states, globalisation is first and foremost about change: political change, economic change and cultural change. It is here, within the arena of cultural change that we can argue that the formation of gender regimes goes side by side with the project of globalisation. Gordon (1993 cited in Young, 2001: 34) classifies gender as a 'series of meaning systems that are socially constructed as sexual differences within the context of systemic male domination'. Hence gender is itself a hierarchical network of social regulations which place 'women on one side and men on the other' (Kreisky and Sauer, 1995 cited in Young, 2001: 34). However that is not to say that these orders are static. The concept of gender is constantly evolving with men and women renegotiating and in most cases struggling over the accepted construction", "label": 1 }, { "main_document": "as Gamble states, capitalism \"is characterised by profound frictions and conflicts...which threaten not just its own survival but the survival of the human species itself.\" Marx, K (1976) Miliband, R (1973) Gamble, A (1999) London: MacMillan Press Ltd p.103 The concept of determinism has been widely criticised among those of the post or neo-Marxist tradition. Gramsci's work, which emphasised the importance of ideology and the significance of agents However, it must be remembered that Marx himself acknowledged the role of the agent: \"Men make their own history, but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly encountered, given and transmitted from the past.\" Determinism may appear to be a pessimistic constraint on an otherwise optimistic tradition. 
However, agents (individuals or groups) can achieve a positive control of their existence by appreciating the dialectical relationship between structure and agency which conditions determinism. Again, this rests on the notion of actuality: "...only a being that has the faculty of knowing its own possibilities and those of its world can transform every given state of existence into a condition for its free self-realisation." This statement could hardly be more relevant to a student or academic of politics. By understanding the limits of an existence (for example of an individual, which may involve a complex relationship involving many factors, including education, location, upbringing and culture), change of existence becomes a possibility. While this notion is more usually associated with 'class-consciousness', it is perhaps helpful to apply it to other areas of study. For example, by fully understanding the actuality of the World Bank, which may of course involve an analysis of its relationship with other institutions or individuals, one can move beyond the conflicts and constraints which comprise it. The continuing relevance of determinism lies, therefore, in one's ability to realise and understand its significance in the construction of a subject. Marsh, D (1999) London: MacMillan Press Ltd p.322 Marx, K (1984) Marcuse, H (1941) The emphasis on the means and ownership of production is an element of Marxism which has emerged as a result of the methods of study described above. Dialectical and determinist studies of politics have led Marxists to conclude that: "The mode of production of material life conditions the social, political and intellectual life processes in general." It is argued that those with the means of production are the ruling class in a society, that is to say, they are the origin of political power. If Marxism is dead, this theory should be proven wrong. 
Yet there are numerous examples which illustrate the fact that, in our day and age, where the nature of capitalism has supposedly changed dramatically since the time when Marx was writing, power resides within the class which controls capital. Marx, K (1977) McLellan, D) as quoted in Howarth, D (1998) This power can manifest itself in many different, complex and often unobservable ways. To take one example, programmes on television are often replicated from one channel to another with little variation or", "label": 1 }, { "main_document": "is seen to accumulate around microtubules located at the centromere and kinetochores, (Fodde Aneuploidy resulting from aberrant APC activity is a possible cause of colorectal cancer. So, the possible mechanisms by which The role of The mutations are non-randomly distributed within the protein, all but six mutations are found at the C-terminal residues (residues 1-50), a strong indication that mutations are encouraging tumour growth or survival, by conferring a survival advantage to the cell. 'Hotspot' mutations include an S45F missense substitution, that is seen in 38 out of 167 samples, and a truncation deletion, A5-A80, that is seen in 15 out of 167 samples. The role of However, a study that investigated the mutation frequency of certain genes in colorectal cancer found It would be sensible to conclude from these findings that This pathway is one of the best characterised signalling pathways and is known to be disrupted in almost every type of cancer. There are two components of this pathway that are considered to play a major role in colorectal tumorigenesis; K-Ras and B-Raf. K-Ras (Kirsten-Ras) mutations have been located in 22% (8497/38496) of all human cancers and in 30% (3331/10814) of colorectal cancer samples. 
Wild-type K-Ras is a small GTPase located downstream of tyrosine kinase-linked receptors. K-Ras associates with phospho-tyrosine residues on the intracellular tail of these receptors and transmits the signal into the cell. In the inactive form, GDP (guanosine diphosphate) is bound to K-Ras; in the presence of TKR signals, GTP displaces GDP and activates K-Ras. K-Ras then initiates a phosphorylation cascade which serves to amplify and transmit the signal. Raf-1 (e.g. B-Raf) is directly downstream of K-Ras and is activated by GTP-bound K-Ras. Raf-1 then phosphorylates, and thereby activates, MEK (MAPKK), which in turn phosphorylates and activates MAPK, which in turn phosphorylates and activates the transcription factors c-fos and c-jun. These transcription factors, as has been mentioned, regulate cell proliferation, cell survival and differentiation. The K-Ras signal is switched off by the hydrolysis of GTP to GDP by the intrinsic GTPase activity of K-Ras (Colicelli, 2004; Wenner et al., 2005). Ras activated through mutation leads to increased Raf/MEK/MAPK signalling and therefore to deregulation of proliferation and survival. These processes are the cornerstone of tumorigenesis, which is why deregulation of this pathway is observed in many tumours. The mutations determined in K-Ras are clearly non-randomly distributed, with the majority clustered around residues G12 and G13 and almost no mutations beyond G13. B-Raf is a commonly mutated oncogene that signals downstream of Ras. Mutations in B-Raf are found in approximately 15% (477/3195) of human colorectal cancers and are non-randomly distributed. All mutations are located within the kinase domain (residues 458-722), all but one of them being missense substitutions. A 'hotspot' mutation, V600E, is seen in 445 of the 477 samples containing mutations in the B-Raf gene, strongly suggesting that this alteration produces an oncogenic effect.
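The non-random clustering described above is, at bottom, a counting exercise: tally how often each residue change appears across mutant samples and flag the changes that dominate. The sketch below uses invented sample calls and an arbitrary 25% threshold, not the study's data (the cited figures, e.g. V600E in 445 of 477 B-Raf-mutant samples, come from the sources discussed above).

```python
from collections import Counter

# Hypothetical per-sample mutation calls (residue changes); these are
# illustrative only and are NOT the data reported in the text.
calls = ["G12D", "G12V", "G13D", "G12D", "V600E", "V600E", "G12D"]

counts = Counter(calls)
total = len(calls)

# Flag a residue change as a 'hotspot' if it accounts for more than 25%
# of all calls -- an arbitrary illustrative threshold.
hotspots = {m: n / total for m, n in counts.items() if n / total > 0.25}
print(hotspots)
```

With these invented calls, G12D (3/7) and V600E (2/7) exceed the threshold, mirroring the kind of clustering the real samples show at G12/G13 and V600.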
Constitutive B-Raf activation, through mutation, is likely to produce effects similar to constitutive K-Ras activation, as both proteins signal along the same pathway (Davies PI 3-kinase", "label": 1 }, { "main_document": "organizational change. Application school argues that culture is a crucial ingredient of organizational success and it is the with the strongest appeal to managers since \"it allows the firm to marshal the commitment of its members to achieving the firm's goals\" (Fincham and Rhodes 2005, p547).As Wilson and Rosenfield stated: \"for once it would seem that organizational behaviour has come up with a topic with few ifs and buts, which is readily comprehended and can be applied immediately\". (Rollinson, Broadfield and Edwards 1999, p547) The notion that a strong culture leads to organizational success has been criticized. For example, Gordon and DiTomaso argues, \"a strong culture might only be a good predictor of performance in the short term\" (Rollinson, Broadfield and Edwards 1999, p547) and that there is no such thing as one best culture but rather right culture. Thus, it is right to assume that a strong culture is an aid to success if it is also one that is suitable for coping with the conditions faces by an organization. As Schein argues, \"you must not assume that more or stronger culture is better. What is better depends on the stage of evolution of the company and its current state of adaptiveness. Instead of seeking a strong culture, try to understand and use the strengths of the existing culture. (Weiss 1996, p359) There are three aspects of the cultures that should be recognized Culture is Culture is Also, cultures can be positive and facilitate goal achievement; Culture is Above elements are interlocking, where culture is deep rooted in unconscious sources but is represented in superficial practices and behavior codes. 
The nature of Japanese culture has been regarded as successful by Morgan (1977): the cultivation of harmonious relations at all levels of the organization, the merging of individual with common goals, and a reliance on worker responsibility (Bruno 1993, p.663). This is in contrast to Anglo-American culture, which is obsessed with bureaucratic systems and with the need to distinguish winners from losers. Research has found that both innovative and supportive subcultures have a clear positive relationship, while bureaucratic subcultures have a negative relationship. Indeed, bureaucratic subcultures have received a lot of criticism for being an ineffective form of organization, as they are often unable to achieve their goals in a flexible way or to respond to changes in their market environment (Fincham and Rhodes 2005, p.334). There is then a question of whether this complex culture is indeed manageable. As mentioned before, the role of the manager is more than that of a rational analyst: he constructs the social reality of the organization for its members, shapes values, and attends to the drama and vision of the organization (Ellis and Dick 2000). However, Peters and Waterman, Moreover, it is argued by Morgan that managers can influence the evolution of culture by being aware of the symbolic consequences of their actions and by attempting to foster desired values, but "they can never control cultures in the sense that many management writers advocate" (Fincham and Rhodes 2005, p.429). Various theories; Hierarchy of Needs, Theory X and
There may be some connection between floral dimorphism in Methylation-sensitive restriction enzyme digestion coupled with inter-simple sequence repeat (ISSR) PCR, as well as a novel method that coupled enzyme digestion with real-time PCR, were used in an attempt to detect any methylation polymorphism between immature pods from chasmogamous and cleistogamous flowers. However, as
Keywords: chasmogamous, cleistogamous, DNA, epigenetic
(Balsaminaceae) is a native annual herbaceous plant that grows mostly in shady damp places in woodlands, and it is probably only native to the Lake District and Wales in the UK (Hatcher, 2003). The plant produces both chasmogamous (CH) flowers (Figure 1.1) that attract pollinators, and closed cleistogamous (CL) flowers (Figure 1.2) that are self-fertilized; it is known that in open and sunny sites more CH flowers are produced, whereas in shady conditions mostly CL flowers are produced (Hatcher 2003). The genetic mechanism that controls the dimorphism in floral development of In the past decade epigenetics has advanced rapidly; it addresses some of the genetic phenomena that cannot be explained by Mendelian genetics and elucidates the mechanisms of gene regulation in plant development. Grant-Downton and Dickinson (2005 and 2006) have written a substantial review of epigenetics in plant biology, in which they assert that DNA (cytosine base) methylation is one of the three ways to encode epigenetic information, besides chromatin (histone proteins and their post-translational modifications) and RNA. DNA methylation in plants, as in mammals, has two functions: the first is in defence against invading DNA and transposable elements (Miura 2001), and the second is in gene regulation, as DNA methylation can inhibit the binding of regulatory proteins, and methylation of the promoter and coding sequence of genes can repress transcription (Finnegan 1998). In plants, methylated cytosine occurs both in symmetrical sequences, e.g.
CpG (5'CG3') and CpNpG (5'CNG3'), and in asymmetrical sequences, e.g. CpApTp (5'CAT3') and CpTpT (5'CTT3') (Staiger 1989). The relative importance of symmetric and asymmetric methylation in regulating gene expression is unknown, but methylation at symmetric sequences can be transmitted through cycles of DNA replication (Bird 1978b). There is much evidence to suggest a correlation between gene silencing and DNA methylation (Jeddeloh 1998, Ye and Signer 1996, Paszkowski and Whitham 2001, Jacobsen and Meyerowitz 1997, Staiger 1989, Grant-Downton and Dickinson 2005, Finnegan and McElroy 1994). Richards (1997) suggested that up to 20-30% of the cytosines are methylated in the nuclear genome of many flowering plants. However, neither the methylation patterns nor the methylation levels are static: the methylation level of DNA from young seedlings is approximately 20% lower than in mature leaves of both tomato and 1998). 2005) and loss of DNA methylation affects plant development (Finnegan 1996), as methylation plays an important role in plant development (Finnegan 2000). Thus, it seems logical to make the assumption that there may be some connection between DNA methylation and floral dimorphism development in There are two objectives in this project. The first is to develop a novel methodology to detect methylation polymorphism in genomic
Guibert de Nogent Robert the Monk Gregory VIII Jacques de Vitry The crusades were further justified by a need to liberate the Holy Lands. Awful atrocities were being carried out there; the land was historically very significant for Christians, so the land needed to be liberated. Pope Urban II urged the people to 'liberate the eastern churches', and that their motivation for going should not have been a 'desire (for) earthly profit' but the 'liberation of the church'. The crusades were justified because the Christians were going to the 'aid of (their) brothers' As the Eastern Church 'prays to be liberated' Biblical references were again made to support this argument, as Jesus commended people to 'lay down one's life for one's brothers' The Church justified the crusades by explaining them as liberating Christians conquered by a barbaric race in a land that belonged to God. Fulcher of Chartres Robert the Monk Baudry de Bourgeuil The main justification used in the It was God who called people to go on crusade and it was He who would guide and protect them. It was not the Church, they argued, who were asking for help, but 'a battle-cry summoning you to war...brought from God.\" People had every right to go and fight because it was what God wanted, and in fact it was their duty to go. 'Consider that perhaps the almighty has provided you for this task, so that through you he may restore Jerusalem'. Christ is described as 'standard-bearer and your inseparable guide', which would have given people increased courage as they had God fighting on their side. Biblical quotations again were used in this argument, as Christ had commanded people to 'carry his cross and come after me' This argument is supported by Augustine who said that, 'your physical strength itself, is the gift of God.\" Such justification by the Church won a lot of support as if it was God's will, then the crusaders believed that they were assured victory. 
Robert the Monk Guibert de Nogent Baudry de Bergueil Jacques de Vitry The Church also used the idea of salvation as a means of justifying the crusades. Although it was also used as motivation, the crusades were justified because they were some people's only way of assuring their salvation. The The crusades were a way to absolve themselves for these sins, 'God has instituted in our time holy wars, so that the order of knights...might find a new way of gaining salvation.\" In contrast to fighting against their own brothers for selfish reasons, they could 'pursue their own careers' for a noble cause. There was", "label": 1 }, { "main_document": "replaced it with crop plants. It is thought that this would have reduced the stability of the ecosystem (Goudie 2006). During the early Neolithic period, the climate was good; the soils were fertile; there was lots of rain and the British population boomed. There was plenty of food for the population because most of the crops grown were introduced and therefore had few or no pests that reduced the yield. The large areas of forest were burned in order to make space for more agriculture and for building material as well as other uses and. This left nutrient rich soils for farming on. It was not until the mid Neolithic period that there are signs of things starting to decline. The climate began to cool, blanket peat grew more widespread and there is indication that the settlements began to be abandoned and that scrub began to take over these areas. It has been indicated with the use of pollen diagrams that, during this period, a major vegetational change occurred. This involved a dramatic decline in the Initial reasons that were attributed to this was that the climate changed, and became colder and wetter. This has recently been largely dismissed, and it is now thought that the role of humans, the progressive soil degradation and the spread of disease are more likely to be reasons for the decline that occurred (Rackham 1980). 
It is also thought that the rise of domestic animals added to the decline. This is because these animals were fed on the elm tree branches that were gathered, as these were known to be nutritious (Troels-Smith 1956). Due to the decline in soil quality and the change in vegetation, man began to depend more and more on animal products. This affected the soils and the vegetation further, because it meant that the soils were abandoned. The late Neolithic saw the stabilisation of the population at a lower level than in the early stages. The farming involved that of both animal products and crops, but due to the earlier soil degradation, there was considerably more emphasis on the meat products. Neolithic man also directly altered landforms and soils that were present. An example of this occurred in East Anglia where antler picks were used in order to dig out deep pits in the chalk land. This was done in order to obtain good quality flint to make tools, but had the effect of depression and water logging on the acidic, light, and sandy soils which are associated with chalk. These pits can still be recognised today (Goudie 2006). The different phases of the Neolithic period saw the development of an array of different habitat types that are known to man today, such as heath land. It also saw however, a reduction in the soil nutrients and a reduction in most tree species at that time. The Bronze Age dates back to 3,700 years ago. This period of time had more open landscape which was much more heavily cultivated than it is today. This time also shows a great increase in the", "label": 1 }, { "main_document": "his theory of human emancipation\" (1989: 14), which further reduces work to labor. \"Once men start to view the products of their own hands as having no other end but to make more products, it is a short step from there to treating all the world, both natural and man-made, as the means to man's infinite making\". 
Labor for labor's own sake signifies, for Bradshaw, "the mastery of (1989: 18) To conclude, Bradshaw connected Arendt's critique of labor with Marx, and her critique of work with Plato; both overshadow the dignity of genuine politics and pose a threat to action. Bradshaw's interpretation is reasonable to some extent, in that he points out that Arendt's concept of action is a conscious rebellion against the tradition and the past, but he fails to take the historical background of Plato's philosophical thought into account, and his indictment of Plato is therefore unjustified. What is more, he underestimates the impact of Marx's theory of labor upon Arendt's concept of action. The connections of Marx with labor and of Plato with work are also illegitimate. It should be made clear that Plato was born after the downfall of the Greek polis. His discovery of the contemplative life for philosophers is therefore not the cause of the decline of action; instead it is a remedy for its predicament. Arendt has no intention of connecting the concept of work with Plato's philosophical teachings, as she mentioned "...in the initial stages of the modern age, man was thought of primarily as Hobbes's project is to "guild purposes and aims to establish a reasonable teleology of action". (Arendt, 1961: 76) The Leviathan is an artificial, man-made mortal god, created to guarantee the security of the people and distinct from its predecessors, whose legitimacy derived from God or Nature. It would also be an oversimplification to equate Marx's labor theory with labor in Arendt's terms. Marx's labor theory concerned more than the biological needs of human beings, and it is on this basis that Arendt develops her own concept of action. We will take a closer look by comparing Marx and Arendt in the next section. Towering over the work of Arendt is the spectre of Marx. Marx, according to Arendt, is a conscious rebel against tradition.
He inverted Plato's hierarchy upside down to assign the highest position to labor, which has misled many critics to assume Marx's displacing is responsible for \"the rise of the social\", in Arendt's terms. \"Marx did not invent the world he criticized, and Arendt believes that the declining importance of the objective world was well under way before Marx ever wrote\". (Ring, 1989: 433) She warns us, \"To hold the thinkers of the modern age, especially the nineteenth-century rebels against tradition, responsible for the structure and conditions of the twentieth century is even more dangerous than it is unjust.\" Their rebellion is noble and \"their greatness lay in the fact that they perceived their world as one invaded by new problems and perplexities which our tradition of thought was unable to cope with\". (1961: 27) The new problem for Arendt is the same as Marx's: She defined the modern age in its \"theoretical", "label": 0 }, { "main_document": "Carolingian scholars The patronage by the emperors is also held to be a crucial factor in Carolingian identity and a distinctive feature of the period To summarise, three of the most crucial aspects that identify the Carolingian period are: Matthews Sanford 1944: 23. Trompf 1973: 12. Brown 1994: 34; McKitterick 1980: 47. These factors would therefore be relevant to the history of Latin texts. The patronage of the emperors allowed new monasteries to be founded where classical texts would be stored for centuries to follow. These new centres of learning allowed the clergy to be educated and royal patronage allowed scholars of the period to recopy ancient texts believed to be of use to Carolingian development and innovation. These features appear to give the period its distinctive qualities. 
However, this conventional view of the Carolingian period held by Carolingian scholars Sullivan argues that recent scholarly work has undermined the former notion that the Carolingian period was either culturally united or distinctive He states that: Such as Brown, Innes, McKitterick, Moreland & Van de Noort, also Reynolds & Wilson. Notably Eisenstein, Metcalf, and Sullivan. Sullivan 1989: 279. Sullivan 1989: 279. Therefore, let us attempt to compare and contrast the periods immediately preceding and the later periods succeeding the Carolingian age, (which is commonly dated to the eighth and ninth centuries), to determine if the period is indeed as culturally distinctive as has been suggested. Let us begin with the Dark Ages; this era has been seen as a disruptive and unstable era following the fall of Rome The period from 550-750 AD is said to be a time of 'unrelieved gloom' for the Latin classics, the copying of which virtually ceased Economic and cultural stagnation appear to be the main features of this desolate period, or were they? Bridbury 1969: 534; Reynolds & Wilson 1991: 79. Reynolds & Wilson 1991: 85. After the fall of Rome the people of Western Europe were freed from the grueling effects of taxation and finances could be redirected towards more local concerns Furthermore, the coinage of the Merovingians, (predecessors of the Carolingians) was extremely similar Bridbury 1969: 531, 533. Metcalf 1967: 351. Bridbury 1969: 531. Upon looking at the Dark Ages, one also discovers that one supposedly distinctive Carolingian feature was already occurring, the founding of monasteries. Classical Latin authors were already being placed under the care of the monasteries before the empire of Charlemagne came into being. What is more, two foundations occurred before the period of 'unrelieved gloom'; Vivarium in 540 and Montecassino in 529 Vivarium was founded by Cassiodorus on one of his estates and he equipped it with a library. 
Most importantly he put a strong emphasis on the copying of manuscripts. Moreover, he foresaw the need for translations of Greek work on exegesis before such work began in the Carolingian period In addition to the work of Cassiodorus in the sixth century, the work of another in the seventh has relevance here. The Reynolds & Wilson 1991: 82. Reynolds & Wilson 1991: 82. Matthews Sanford 1944: 32. Furthermore, there were a number", "label": 1 }, { "main_document": "the people of Chavina, Peru (fig 7) (Ortner & Putschar 1981). Circumferentially deformed skulls, including the Aymara (Anton 1989) and circular ( This form of modification, probably through the use of bands (Hrdli An extreme example of this form of modification was that of a female, around 30 years of age, from Patallacta, Peru (fig 9) (MacCurdy 1923). MacCurdy found a reduction in the dimensions of the foramen magnum in deformed crania, 3.3cm long x 2.7cm broad, to that of undeformed examples, 3.7cm long x 2.9cm broad (1923: 230); evidence suggests this form of modification restricts growth of the foramen magnum and could result in the constriction of the spinal cord as will be discussed later in this paper. Through localised pressure on the posterior cranial vault, the securing of an infant to a cradle- board (fig 10) produced a cranium with either unilateral or bilateral flattening to the occipital or lambdoid region (Kohn et al 1995) and have been used by the people of Albania to produce their characteristic flat heads (Hasluck 1947). Growth restrictive pressure in the region between the frontal and occipital bones induced by the use of a cradle-board results in fronto-occipital reshaping and compensatory growth of the parietal in a mediolateral direction (Cheverud et al 1992). The cradle-board, use of which was widespread throughout the American south-west (Kohn et al 1995), may also have produced compensatory posterior lateral growth (Cheverud et al 1992). 
Care should be taken when examining presumed artificially modified skulls, as mild though frequent pressure on the occipital bone from a cradle-board can produce values equal to hyperbrachycephaly (Blackwood & Danby 1955). Occipital or lambdoid deformation resembles that of accidental occurrence (Hrdli The appearance is that of a fronto-occipitally shortened but broadened skull with a high forehead (ibid) and a vertically elongated occipital (Ubelaker 2000). An extreme example such as that found in a south-west American Pueblo cemetery must display intentional modification as opposed to unintentional deformation, given the severity of the deformation (fig 11). Children have been reported to remain strapped to their cradle-board until they were three years old (de la Vega 1966), and developmentally there seems to be little or no effect on the child from the use of a cradle-board; this may be connected to the belief of some North American Indians, including the Navahos, that cradle-boarding and the immobility it induced (Hudson 1966) produced a strong child, a plausible explanation when considering the effects of isometric (resistance) exercise (Hudson 1966), or alternatively that it encouraged straight growth (Kohn et al 1995). The Songish North American Indians of Victoria, British Columbia, adopted the cradle-board and pads as the apparatus utilised during cranial modification (Cheverud et al 1992) and may have held similar beliefs. European cradle-boards allowed movement (Hudson 1966) and therefore did not provide similar resistance; future research into the skeleto-muscular development of children displaying evidence of cradle-boarding may identify a difference between the North American tribes and the European examples.
Contraindications against the use of artificial cranial modification have been suggested to include the increased risk of cerebral disease and/or mental retardation (Blackwood", "label": 1 }, { "main_document": "Routinely admitted on Mr Whilst on a diving holiday in Mexico 5 years ago, he went scuba diving and experienced sudden and severe earache in his right ear. He added the pain was worst over the angle of his right jaw, and was very sharp. He graded the severity as 7/10, and said that despite surfacing, it lasted for several hours after. He did not feel anything else brought his earache on, and ascent to surface eventually relieved it. He tried no analgesia. He felt that there was a definite ache on his return flight home. Over the succeeding 5 years, Mr Mr He does not have a sore throat. He thought the lump did not move on swallowing. He has not experienced any loss of power in his muscles of facial expression. He is a former smoker with a 20 pack-year history, and drinks approximately 40 units a weekend. Mr He is hypertensive, but otherwise his medical history is unremarkable. On admission, he was prescribed atenolol 25mg od po. He has no food or drug allergies. Mr He lives with his youngest daughter following the death of his wife to ovarian cancer. A systems review was unremarkable. The history is of a unilateral, right sided neck lump that has been slowly growing for approximately 10 years. Common or serious causes of this description of a neck lump include: The most likely causes are emboldened. Mobile, non-tender, non-pulsatile and firm. Does not move with swallowing. Tympanic membranes intact. Oropharynx, and nasal cavity normal. The mouth was well hydrated, and not ulcerated. Nasendoscopy showed no obvious lesion in laryngopharynx. There was no cervical lymphadenopathy, and no hepatosplenomegaly. Fine Needle Aspiration was performed and sent for histological diagnosis. Neurological Cranial and peripheral nervous examination was normal. 
All other systems examinations were normal. The history and examination are of a slowly growing 3x3 cm mass deep in the right parotid area. This makes the most likely cause of the neck lump to be: Physically, Mr Psychologically, he is concerned about the fears and stigma associated with cancer following the death of his wife, and is concerned about any permanent side effects of the surgery. Socially, he has had to take time off work, and may be temporarily reliant on his daughter's help for the next few days after discharge. The histology report returned the diagnosis of pleomorphic adenoma, meaning few other investigations were necessary. Other tests that may have been considered include: "We have received back the report on the cellular basis of your neck lump, and can confirm that it is a benign growth called a pleomorphic adenoma. The recommended treatment is removal of the mass. No surgery is without risks, and the major risk in this operation is damage to a nerve called the facial nerve, which is responsible for controlling the muscles of facial expression on the right side of your face. It is also involved in allowing you to purse your lips together and close your eyes tightly. There is an approximate risk of It is also probable that
a mole), of a substance by one degree in temperature. Specific heat can therefore be defined as a proportionality constant between the amount of heat added to a system (Q) and the consequent temperature change (t). Specific heat is an intensive variable; for a mass (m) it can be found by: For any gas there is effectively an infinite number of specific heats, depending on which external variables are held constant; however, two notable ones are the value at constant pressure (C The ratio of specific heats is then defined as: This value is an important thermodynamic quantity that gives an insight into the structure of the gas molecules for which it was calculated. For instance appears in the pressure-volume relation of an adiabatic process, and also in formulas that describe the efficiency of cyclic processes, e.g. an internal combustion engine. In principle may be obtained by independently measuring C Instead, the value for a gas can be determined from a study of any adiabatic process in which the gas is involved. The earliest recorded method used to calculate is that of Cl The method employed in this experiment is the study of adiabatic changes caused by an oscillating piston in a trapped mass of gas in a tube. This is a variation on the R In this experiment a cylindrical metal piston, placed between two columns of the same gas, is driven into oscillation by a magnetic coil. Using a frequency generator, the piston is made to oscillate over a range of frequencies. The frequency at which the piston oscillates with the largest amplitude is recorded; this is known as the resonant frequency. A value for can then be derived from a simple calculation that involves the resonant frequency and the parameters of the experiment. In this report I present the values obtained experimentally for three different gases: Air, Nitrogen and Argon. These values are compared to those predicted by theory, and to values obtained from other experiments.
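The "simple calculation" can be sketched as follows. For a single gas column of volume V at pressure p, with a piston of mass m and cross-sectional area A, the adiabatically compressed gas acts as a spring of stiffness γpA²/V, so the standard Rüchardt relation gives γ = 4π²f²mV/(pA²) at resonant frequency f; for the two-column arrangement used here, with the piston between two equal columns, the restoring stiffness roughly doubles and the prefactor changes accordingly. The apparatus values below are hypothetical, not those of this experiment.

```python
import math

def gamma_ideal(dof):
    # Equipartition prediction: gamma = (f + 2) / f for f active
    # degrees of freedom (3 for a monatomic gas, 5 for a diatomic one).
    return (dof + 2) / dof

def gamma_ruchardt(freq, m, V, p, A):
    # Single-column Ruchardt relation: the trapped gas behaves as a
    # spring of stiffness gamma * p * A**2 / V, so at resonant
    # frequency f, gamma = 4 pi^2 f^2 m V / (p A^2).
    return 4 * math.pi**2 * freq**2 * m * V / (p * A**2)

# Hypothetical apparatus values (NOT this experiment's): 10 g piston,
# 1 L gas column, atmospheric pressure, 2 cm^2 bore.
m, V, p, A = 0.010, 1.0e-3, 1.013e5, 2.0e-4

# Resonant frequency a diatomic gas (gamma = 1.4) would show, then the
# round trip back through the Ruchardt relation.
f_res = math.sqrt(gamma_ideal(5) * p * A**2 / (m * V)) / (2 * math.pi)
print(round(gamma_ruchardt(f_res, m, V, p, A), 3))  # recovers 1.4
```

The same round trip with gamma_ideal(3) reproduces the monatomic prediction of 5/3 ≈ 1.667 quoted for Argon in the abstract.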
The addition of heat results in an increase in a substance's internal molecular energy. This molecular energy comprises a variety of kinetic energies associated with the motion of the individual molecules. Depending on the type of molecule, these kinetic energies can be divided into translational, rotational and vibrational velocity components. The number of velocity components needed to describe the complete motion of a
Three themes will be explored: (1) the sense of shrinking distance and the dissociation of social relations from physical place, (2) the blurred boundary between public and private and the loss of traditional identities of time and space, and (3) the mediated identities of time and space. In general, I find that most theorizations are pitched at too abstract a level and can capture only a partial picture of the changing landscape of time and space. The media, while restructuring the old reference frame of time and space in various ways, also provide a complicated set of new frames overlaid on the old ones, with which we may construct our conceptions and experiences of time and space in diverse ways. Before entering further discussion, some clarification of the key concepts will be helpful. First, media, as discussed here, refers to the means of communication through which we experience the world beyond our immediate reach, including television, radio, newspapers, the internet, phones and various personal communication devices. While money, language and power have been discussed as "circulating media" by Parson and Luhmann (Giddens, 1990:23), their implications are beyond the scope of this essay. Second, while the words "time" and "space" usually evoke the impression of abstract, absolute dimensions of the physical world, here they are discussed as social constructs, referring to our conceptions and experiences of them. Third, time and space are two distinctive aspects of our life, but they actually share much in common. As Bauman (2000) noted, "'Far' and 'long', just like 'near' and 'soon', used to mean nearly the same: just how much, or how little effort would it take for a human being to span a certain distance." It also
This will maintain Saint Fusion's PM/standardized product approach but also include some 'soft' HRM elements (commitment, development) so as to also tie in with cultural needs (appendix 2). All operative staff can be provided from the local labour market, whereas unit and national managers should be a mixture of internal/external, local/regional markets (staff reallocation), as evidence of career progression but also to stimulate innovation (Price, 1997). Regional executives should come mainly from Saint Fusion's internal market and be third-country nationals (Asian), with regular visits from parent-country executives (in accordance with Saint Fusion's centricity - see Strategic Orientation). These recommendations complement Saint Fusion's orientation, reflect cultural needs for hierarchy, and consider the type of product/skills needed and the PM/HRM approaches. Because of the operation's size, work-permit bureaucracy (appendix 6), and high costs, it is not justifiable to have permanent expatriates on the regional board. With this in mind, recruiting strategies will vary from suitability to flexibility (appendix 6). Operative-level staff could be recruited on a 'best fit' approach, linking to the company's PM (Storey, 1992) and standardized service (Doherty, 1998); however, this should not be as strict as Price (1997) suggests, as once employed, and in keeping with Korea's culture, employees go through training and teamwork while integrating into Saint Fusion's culture (malleability strategy). Managers will go through development processes to fit into the organizational culture, improve performance, increase loyalty and guarantee lasting employment (Rowley & Bae, 2004), resulting in 'softer' HRM at higher levels (Storey, 1992; Price, 1997). Companies are increasingly concerned about motivation at work, given its close relationship to improved job performance (Hodgetts & Luthans, 1997).
The two most common theories of motivation are Maslow's hierarchy of needs and Herzberg's theory of motivation. Both hold that once physical needs have been met, various psychological needs emerge (Hodgetts & Luthans, 1997). These views have faced criticism, through claims that they do not consider culture or the type of job performed. The understanding of cultural values is crucial for developing reward systems. Cultural differences impact perceptions of equity, i.e. the balance between one's contributions (skills, knowledge, energy and functioning) and results (success, wages, respect, promotion and development) (Wheeler, 2002). Depending on their distinct views, people can be divided into benevolents, equity sensitives and entitleds (appendix 8), the first mainly found in cultures where collectivism and femininity are high, as PD and UA have a lesser impact on equity sensitivity (Wheeler, 2002). Therefore, and considering the challenge of determining which job motivators apply to which culture, career and development stage, an appreciation of both intrinsic and extrinsic values is advised, with the emphasis on intrinsic or extrinsic values depending on whether staff are operatives or managers, and on the different approaches of PM and HRM. Since PM principles prevail at operative level (appendix 7), and considering the Korean culture and a company's obligation to meet employee work goals, motivators should include programmed training and multi-tasking - see Resourcing. Also, teamwork, group management and 'family-like' employer/employee relationships, as well as unity, concord and collaboration, should be integrated to correspond to feminine and collectivist principles - appendix 2 (Hofstede &
sampling from downtown areas) was so crude. Not only may the area within the city affect the results, but also the time of year. Cities play different roles in each region, state and country. Some are business centres; others are tourist destinations. While Rio is famous for its beaches and beautiful women, Shanghai is famous for its skyscrapers and dynamic economy. The relevance of these differences is the following: in the summer, Rio is flooded with tourists, whereas Shanghai is not necessarily. So, it may be the case that the experimenter in Rio received a great deal of help from visitors, and not from Cariocas. As a consequence, the helping behaviour in Rio would not be a property of the city, but a property of the type of tourists it attracts. Likewise, the behaviour in Shanghai could reflect the rush of travelling businessmen in a strange environment, not the behaviour of locals. Comparing Sao Paulo (Brazil) with Shanghai would be a better choice. It is surprising that the study was carried out on a "convenience sample" without further regard for the origin of the helpers. An easy solution would be to ask participants after each test where they came from. It would be very expensive for Levine himself to visit all those cities and carry out the trials alone. So he took advantage of the international student community in his department to run the tests. For each country, a different experimenter of the corresponding nationality was allocated. By doing so, however, Levine ended up introducing a potential confounding factor into the design, namely experimenter bias. In short, because each surveyed city had a different experimenter running the tests, the results may be due not to the city's personality, but to the experimenter's personality (i.e., his ability to attract help). Rater improvement may also account for differences in the results.
Over trials, some experimenters may have learned how to feign blindness better than others. Similarly, experimenters may have been influenced by the city's atmosphere. In fact, Levine pointed out the case of Tokyo, where the rater was so embarrassed by the surrounding norms of civility that he could not feign blindness. Tokyo was dropped from the final comparison, but the effect may well have been present to a lesser extent in other cities. One possible solution is to allocate two or more experimenters to each location and average their scores in order to balance out confounds. Another possible criticism of the experimental design implemented by Levine et al. is the criterion used to classify a response as positive: it is simply too loose. A positive helping response for the pen-drop condition, for example, ranged from pointing at the pen to catching it and returning it personally to the experimenter. And a positive response for the fake-blind condition varied from
Firstly, workers do not want to produce too much, as they risked lowering the value of their own and their fellow workers' labour, in addition to decreasing the amount of labour needed for the work. Secondly, 'rule of thumb' methods are applied; men are not taught the fastest way to do a job. Taylor envisages 'scientific management' erasing these myths and inefficiencies. Taylor distinguishes this as different from what has come before, as his new relationship between employer and employee is based on 'scientific' observation, thus making any outcomes rational. Following on from this initial systematic analysis of work, Littler sees Taylor as having three principles: the fragmentation of work, the separation of planning and doing, and the divorce of 'direct' and 'indirect' labour (Littler:1978:371). This means that management takes away the conceptual thought process involved in work (Braverman:1974:45/46), and prescribes to the worker the way a task will be done. Because Taylor sees workers as inherently stupid (Littler:1978:371), jobs should be broken down to their simplest possible components. These methods were clearly taken up by Ford in his new factories, where various ready-made jigs and later machines meant that work was deskilled and mentally unchallenging (Salaman:1992:342). Was this a new phenomenon though? Braverman argues that capitalism has always had this rationality, a progression towards more and more control, and that the methods Taylor comments on are the continuation of a rational, technical progression: the 'putting out system' and early factory work were all capitalist control methods. First controlling the product, then when the labour was done, and Taylorism was the final development of how work should be done (Braverman:1974:Chapter 2). Braverman sees the deskilling of the workforce as beneficial only for the capitalist employer, whereas Taylor says such management will allow mutual prosperity.
The reality is that skilled labour costs employers more. Employing unskilled workers may be preferable, as not only are they cheaper, but they can also be replaced easily, which reduces their power. 'Scientific management' in Braverman's eyes increases capitalism's ability to exploit workers. Deskilling also removes any intrinsic value and satisfaction that was present in the work, alienating the worker not only from the product but also from the work itself; Braverman sees 'scientific management' as further subordination of
Uncompressed data may have the ability to degrade gracefully, giving the impression that it has been received correctly. However, human perception is difficult to model.", "label": 1 }, { "main_document": "fact that how good it looks she also considered the fact that how easy and convenient the dress would be for the beholder. Starting from 1958 she kept on making her skirts shorter allowing women to run to the bus and do their activities more easily. Even though quant most famous inventory is the mini skirt, even Andre Courreges developed it separately. The mini skirt was the fashion in the 1960s. There are still doubts as in to who came up with the idea first. Quant not only designed the mini skirt but also she designed many other dresses which had a huge influence on the society. She is also credited with inventing the coloured and patterned tights. She also accompanied the garment. Quant opened her second shop which was also named bazaar in Knightsbridge. After her first shop was a very popular hunt for fashionable. She was so successful and popular that by 1965 she was exporting garments to the USA. The demand was so much that she went in to mass production setting up the Ginger Group. She was so popular in the mid 1960's the time when she introduced the micro Mini skirt and plastic raincoats. She was also appointed an OBE( Her last big fashion development was in the late 1960s, she launched hot pants. These are short, tight shorts, usually made out of cotton, nylon, or some other common material. They are meant to emphasize the buttocks and the legs of a female. In the 1970s she concentrated more on household goods and make up. She resigned as director of Mary Quant Ltd. in 2000 after a Japanese buy out. She has a son, the two lived together in Japan while she set up her business there in the late 1980s.", "label": 0 }, { "main_document": "(Tac. 3.8-9). 
The reasons for Antonius' decision to advance without Vespasian's consent are not clear, but it seems that he was motivated by a desire not to let all the glory of the campaign go to Mucianus, and was able to march ahead since his location allowed him to assess the situation better than Vespasian (Greenhalgh 1975). After winning a number of small victories his advance led him to Cremona, and into a head-on conflict with six Vitellian legions (Tac. 3.21). His actions in the upcoming battle were to be decisive, but before examining the battle I will first look at events that had occurred on the Vitellian side. After hearing of Antonius' initial success Vitellius sent Caecina with a force of troops to intercept him, a decision which was to prove fatal. By August of AD 69 Vitellius had lost "the agreement of his lieutenants Caecina and Valens in his support. This weakness was to contribute substantially to his defeat" (Wellesley 1975: 104). Vitellius had two preferred generals in Caecina and Valens, with Valens being his favourite of the two. Yet illness had prevented Valens from fighting Antonius, and so Caecina was sent on ahead alone. This was a disaster for Vitellius, since Caecina and Bassus, who had been given command of the Ravenna fleet, jointly decided to betray their emperor. Tacitus remarked that "a jealous fear that rivals would outpace them in Vitellius' affections induced them to ruin Vitellius himself" (Tac. 2.101), while Levick has also argued that the strength of the Flavians helped cause Caecina's desertion (Levick 1999). Bassus was easily able to convert his troops to Vespasian, since they had only recently fought for Otho against Vitellius (Tac. 2.101), yet Caecina could not do the same. Instead of loyally following their general over to the Flavians, Caecina's troops mutinied against him and put him under arrest. Yet Caecina's defection was to have an important impact both on the battle of Cremona and in its aftermath.
Having looked at events running up to the battle of Cremona it is now possible to analyse why the Flavians won. First and foremost, the skill of Antonius was vital to the Flavians. Tacitus describes how Antonius was able to keep command of his troops and prevent them from running blindly into battle before the proper reconnaissance had been carried out, even though this was an unpopular decision (Tac. 3.20). Also, throughout Tacitus' description of the battle Antonius is seen as acting like the ideal general, keeping order amongst his troops and setting an example when the tide turned against them. For example, Antonius led the troops where fighting was most intense (Tac. 3.29). His skill and courage were critical in the Flavian victory. To go back to Caecina, although he had failed in his attempt to win his troops over to Vespasian he was still to have an indirect effect on the fighting. When the legions Rapax and Italica arrived, who had been under Caecina's command, they proved ineffective due to "the lack of an able general... [and] while they
This involved calculating new variables using MATLAB commands, and it produced a line of best fit which fitted well through all the points. Using MATLAB the data from the experiment was visualised suitably. The data could now be analysed easily. An experiment was carried out investigating the lateral force, The tyre had a translational velocity of The tyre also had a rotational velocity of The tyre was subjected to a load F = 4900 N. The slip angle was given by The slip ratio was given by 2. The data obtained for this experiment is in appendix 2. This data had to be processed so that it would be easier for patterns to be discovered and arguments and conclusions to be formed. This data was processed using MATLAB. MATLAB had a range of functions which allowed for easy manipulation of data. For example, MATLAB can plot points on a graph and draw the line of best fit. The purpose of this laboratory was to use MATLAB to analyse the data and present the results in a way that was easy to understand. The data in MATLAB was analysed using various forms of regression. In the first few exercises the method of least squares was used to find linear lines of best fit. In this method the model However, once it passes certain points there will be a degree of error, since the line of best fit does not follow points individually but follows the overall pattern of the points. The method of least squares chooses estimates for the values To find and Solving these simultaneous equations gives the values to be used for The line of best fit can then be plotted. For non-linear regression the model used is The apparatus used to analyse this data was the software package MATLAB. The test data was imported into the MATLAB workspace using standard Windows XP functions. The data was contained in two files, lateral force data.txt and longitudinal force.txt.
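The normal-equations step of the linear least-squares method described above can be sketched as follows. The lab itself used MATLAB; this is an equivalent pure-Python illustration of fitting a straight line y = a + b*x, using invented data rather than the tyre measurements.

```python
def linear_least_squares(x, y):
    """Fit y = a + b*x by minimising the sum of squared residuals.
    Solving the two normal equations simultaneously gives
    closed-form estimates for the intercept a and slope b."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

# Illustrative data lying exactly on y = 1 + 2x:
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
a, b = linear_least_squares(x, y)
print(a, b)  # recovers intercept 1.0 and slope 2.0
```

As the report notes, the fitted line follows the overall pattern of the points rather than each point individually; with noisy data the residuals would be non-zero but minimised in the squared sense.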
This data was loaded in using the commands The number of significant figures displayed was changed to short using the command The data was then called using the commands To produce graphs more easily the data was separated into its individual variables using the commands This
From the Instruction of Merikare, see Lichtheim, 1973, p106. Let us now move on to a brief introduction to the concept of Ma'at. The Egyptians believed that Ma'at was the cosmic order that kept the universe functioning. The word Ma'at itself can be translated into English as 'truth', 'order', 'justice' or 'balance'. Allen Ancient Egypt had no such codes: for the Egyptians the distinction was determined by practical experience'. Gahlin Gahlin, 2001, p212. J.P.Allen, 2000, p116. Gahlin, 2001, p212. On a higher level, Pharaoh had to maintain Ma'at as his royal duty. In his terms, Ma'at was to maintain justice, build or improve the temples and also protect the nation's borders from invasion. Gahlin All foreigners were considered to be elements of the forces of chaos and so they threatened to destroy Ma'at. Allen also makes the point that the Egyptians told stories for both entertainment and moral instruction, which legitimizes the use of such texts here to establish a set of ground rules. Gahlin, 2001, p212. J.P.Allen, 2000, p259. Concerning the idea of Heka, there needs to be a brief distinction between the modern-day (and even the classical) opinion of magic and the Egyptian viewpoint. Lloyd provides us with an excellent summary of this difference in two articles. Whereas the Greeks, and indeed modern society generally, believe that magic subverts the natural order, the Egyptians saw it as an integral force that bound the
This was understood to be the only method of encryption until 1976. Block ciphers take a message and output encrypted text of the same length. This is deemed not secure enough by today's standards, as encrypting the same text twice should never produce the same output. One such cipher is DES (Data Encryption Standard), invented by IBM in 1975. It was later disregarded, as a single key could be broken in less than 24 hours using a brute-force attack. A stronger form, AES (Advanced Encryption Standard), was first published in 1998. This proved so strong that the US Government approved its use on classified information. It has been claimed that AES is breakable, but the designers of AES looked at the proposed 'break' and were quick to comment on some of the hackers' estimates of the time needed. One technique that was previously used on the internet to exchange secret information involved a 'double padlock' technique. Alice would lock her message with her own padlock and key and then post this package to Bob. Bob would lock this package with his (different) padlock and key and post it back to Alice. Alice would then unlock her padlock and send the package back to Bob. This would leave only Bob's padlock on the package, so Bob could unlock it with his key. At no stage could anyone read the information, and Alice and Bob never had to exchange keys - a risky event. This method was slow and required many transmissions of the data, increasing the chance it would be intercepted. Another type of encryption was necessary. Public key cryptography is also known as asymmetric key cryptography, because of the difference between the keys needed for encryption and decryption. It is hard to imagine a padlock where you need two different keys to lock and unlock it, but the idea is relatively simple. Alice would calculate the key to encrypt the data and, using another method, calculate the appropriate related decryption key.
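The encrypt-with-public, decrypt-with-private idea can be illustrated with a toy RSA-style sketch. The tiny primes and message below are invented textbook values purely for illustration; real keys use primes hundreds of digits long.

```python
def toy_rsa_keys(p, q, e):
    """Toy RSA key generation with tiny primes (illustration only --
    not secure). Returns the public key (e, n) and private key (d, n)."""
    n = p * q                  # modulus, shared by both keys
    phi = (p - 1) * (q - 1)    # Euler's totient of n
    d = pow(e, -1, phi)        # d is the modular inverse of e mod phi(n)
    return (e, n), (d, n)

# Classic textbook example primes (hypothetical, not from the text):
public, private = toy_rsa_keys(61, 53, 17)
e, n = public
d, _ = private

m = 65                         # message encoded as a number < n
c = pow(m, e, n)               # Bob encrypts with Alice's public key
assert pow(c, d, n) == m       # Alice recovers m with her private key
```

Note that anyone may learn (e, n) without being able to decrypt: recovering d requires factoring n into p and q, which is infeasible at realistic key sizes.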
Alice can then safely transmit the encryption key, knowing it does not matter whose hands this information falls into. Bob would then encrypt his message using the encryption (public) key and transmit the message back to Alice. Alice can now use her private key to decrypt the message. A popular cryptosystem using public key cryptography is RSA. It was published in 1977, although GCHQ in England later announced that an equivalent system had been secretly discovered there in 1973. The name RSA comes from its inventors: Ron To generate the public and private key, Alice needs to do five operations: Choose two large prime numbers p and q such that p
Research results indicate that the more liberal the wives' attitudes, the less time they spend on housework; in practice, men's attitudes have no effect on the time their spouses spend on housework. The degree of sharing of household responsibilities is primarily affected by women's attitudes, although husbands' attitudes also have a specific positive effect on sharing (Lewin-Epstein et al, 2006). In America and Sweden an analysis of the economic dependency model shows that the less economically dependent a wife is on her husband, the less housework she performs (Evertsson and Nermo, 2004). In both countries, women living in couples in which both spouses are highly educated perform less housework than do women in couples in which neither spouse had more than 12 years of schooling. In other words, the more the woman contributes to the total household income, the less housework she performs (Evertsson and Nermo, 2004). The relative resource perspective receives support from both findings, because women whose occupational status is lower than that of their husbands spend more time on housework than do women whose occupational status is at the same level as that of their husbands. This is partly explained by the fact that some of these women were out of the labour force, and so have no option but to engage in household work (Evertsson and Nermo, 2004). The fact that highly educated women do less housework might be because highly educated individuals more often hold gender-egalitarian attitudes (c.f. Kane, 1995; Knudsen and Waerners, 2001; Thornton et al, 1983, cited in Evertsson and Nermo, 2004). I would suggest that this might be because of changes in family values, leading to changes in gender roles. More importantly, the economic revolution and the need for the participation of both women and men in the labour market to fulfil family needs have resulted in new definitions of roles within families.
In America and Sweden, women", "label": 0 }, { "main_document": "nature of his patrons. In Prussia, men achieved power through their relationship with the Prussian monarchy and state institutions, and Ranke energetically cultivated this relationship. Once he reached the University of Berlin in 1825, his research was largely bankrolled by the state. His training in the civil service and personal standing provided him with access to the state archives. His body of work was amassed for the benefit of the state. With such deep influences of religious conviction and involvement with the Prussian state, Ranke's philosophical motivation to produce history can hardly be construed as 'scientific'. However, his 'objectivity' and historicism were the main elements of his approach to history as a discipline. Firstly, 'objectivity'. In order to discover God's plan for humanity through understanding history, Ranke believed he needed perfect 'objectivity'. He put himself through the conscious process of shedding his preconceptions of the past, present, and future, each time he began work in the archives. Thus Ranke believed he approached historical documents with a completely open mind. Yet an historian with perfect 'objectivity' cannot exist, because humankind studies itself. Secondly, historicism. Though historicism had been practiced by lawyers since the earliest schools of jurisprudence, Ranke was another after Fran Ranke believed that his commitment to objectivity and historicism would best place him to understand Translation of this euphemism has proven problematic for scholars of Ranke. While some interpret the phrase to mean Ranke aimed to discover how history \"really\" was, it is more accurate to translate it as how it \"essentially\" was. Ranke's superficially 'scientific' intention to recreate the past precisely as it \"really\" was is actually a deeply philosophical one. He looked for \"essential\" truths, with intuition rather than reason.
Fran He advised that witnesses and documents should be treated with dispassion but with suspicion. Sequences of events should be established by balancing conflicting accounts. Arnold, While objectivity and historicism provide an apparently 'scientific' approach, intuition, or historical imagination, was then required to actually recreate the past. Ranke was confident that if he detached himself utterly from his own values and preconceptions, and immersed himself deep enough into all possible sources, he would attain perfect empathy with his historical agents and so understand the past in its own terms. The necessity of historical imagination seriously compromises Ranke's claim to a 'scientific' approach. One cannot reconstruct history to the extent that the historian penetrates inside the head of an historical agent, and it is arrogant to argue otherwise. However, so long as the historian recognises that perfect empathy is impossible, striving for it through study and imagination is nonetheless both a useful and 'scientific' approach. Ranke is most convincing as a 'scientific historian' with respect to his He aimed to examine Ranke used an unusually wide variety of sources for a historian of his age, including diaries, memoirs, letters, first-hand accounts of eyewitnesses, and diplomatic dispatches. He immersed himself in state archives with a fetish-like energy, to gain a complete insight into the past.
His Such was Ranke's commitment to the 'truth' that discovering so much as", "label": 1 }, { "main_document": "also fell back within a few years to more usual levels. Although there were a number of technological advancements after the Black Death, many of the major breakthroughs of the time came before the plague, such as windmills and complex field systems in agriculture and developments in wool and silk manufacture. Gutenberg's printing press was an advancement that came as a result of the population shortage after the Black Death; however, printing did not really become popular until the 1470s, when the population was increasing. Also, the most important centres of printing were Venice, Rome and the southern German cities, where there was the fastest demographic growth. This suggests that the Black Death did not have a great impact on the technological advancements of the time. Ziegler, Herlihy, Historians examining the Medieval period had previously overlooked the Black Death. Henry only wrote fourteen lines on the subject in a study of twelve volumes. This indicates that the Black Death was not considered to have had a great impact on the history of Europe. Other explanations have been put forward for the decrease in population and the subsequent economic, social and political changes. Colin Platt suggests that 'it was famine not plague, after 1349, that remained the biggest killer' He also comments on the fact that later marriages and families with fewer children were becoming normal practice and that this kept the population down more than the effects of the plague. As there is limited evidence from the period, it is impossible to discern exactly what was killing off so much of the population. Undeniably the plague was present, but its effects may have been exaggerated.
Platt claims that 'more monks undoubtedly died of overweight or of liver conditions than ever succumbed to the plague' Graham Twigg, a zoologist, further suggests that the spread of anthrax was the predominant cause of death in 1348 Herlihy examined the He found that freckles, which were more common to other diseases such as anthrax, not buboes, were the most common symptom described Platt, Platt, Herlihy, Herlihy, There have been a number of different interpretations of how significant an impact the Black Death had on the history of Europe. The population and commercialization of Europe were already in decline and 'continued deterioration... would have been likely, even if... [the Black Death] had never occurred' It was a catalyst for events such as the Peasants' Revolt, the Reformation and the fall of manorialism, but was not a direct cause. It appears that 'the Black Death did not initiate any major social or economic trends but it accelerated and modified - sometimes drastically - those which already existed' Ziegler, Ziegler,", "label": 1 }, { "main_document": "correlation. As the length of time it takes to run 200m increases, the length of time it takes to run 100m increases. The data is based on sprinters whose best event is the 200m because, to find the sprinters, their 200m times had to be below 21.2 seconds. Therefore it may be acceptable to use the time of sprinters who are best at 200m to find their 100m time, but not vice versa. The data is based on some of the best sprinters in the UK. This is very specific data and not representative of every athlete in the UK. It can hardly represent typical athletes, as the best athletes will obviously train much more, and harder, than an average person who runs for fun. The coach should be very careful about who he applies his predictions to. The data is specific to the year 1998. This is now 5 years ago and the data could be out of date.
For example, new training machines may have been brought in, which could affect the relationship between the two best times. In order to gain a broader idea it would be useful to use the same data from a number of years up to the most recent year possible (2002). The sample only contains male sprinters. The coach cannot apply the prediction to female sprinters, as the relationship between their 200m time and 100m time could be very different. A coach should be very wary about which distances he applies this predictive equation to. The 200m sprint and 100m sprint are relatively similar events. The equation cannot use a 200m time to predict a 1500m time, and vice versa. The coach should stick to 200m and 100m predictions and carry out a similar process if he wishes to predict athletes' potential at different distances. For example, use a relationship between 800m and 1500m times. We do not know the age of the sprinters involved (although it may be easy to find out). As the sample is probably made up of professional sprinters, they will most likely be adults of 18 years or over. We should therefore not apply these results to children, who may not have developed to their full potential yet, meaning that the relationship between their best times at each distance may be different. Sample Mean = x = 0.803 IgM (g/l) (to 3 decimal places). Sample Median = x. There is not a single mode: 38 children (the highest number) have 0.7 IgM (g/l) and 38 children have 0.8 IgM (g/l). 10% point of the distribution = 0.4 IgM (g/l); 90% point of the distribution = 1.4 IgM (g/l). A square-root transformation reduces the kurtosis of the distribution slightly, producing a smoother distribution. The right-hand tail is also reduced, meaning that the data is slightly more symmetric than with no transformation at all. The range of variation of the IgM values is decreased. A log transformation produces the most symmetric distribution of the three.
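The effect of the square-root and log transformations just described can be illustrated numerically. The sample below is hypothetical right-skewed data invented purely for demonstration (it is not the IgM dataset), and the skewness function is the ordinary moment coefficient:

```python
# Illustration of the square-root and log transformations discussed above,
# on a small hypothetical right-skewed sample (not the actual IgM data).
import math
import statistics as st

def skewness(xs):
    # Sample moment coefficient of skewness: positive for a right-hand tail.
    m = st.mean(xs)
    s = st.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

data = [0.3, 0.4, 0.4, 0.5, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 1.1, 1.4, 2.1, 3.0]
sqrt_data = [math.sqrt(x) for x in data]
log_data = [math.log(x) for x in data]

print(st.mean(data), st.median(data))   # sample mean and median
# Each transformation pulls in the right-hand tail; the log does so the most.
print(skewness(data), skewness(sqrt_data), skewness(log_data))
```

On this sample the skewness falls from the raw data to the square-root transform and falls further under the log transform, matching the ordering described in the text.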
Kurtosis decreases and the right-hand tail is also reduced considerably more than in the", "label": 1 }, { "main_document": "a phone. The fact that they had already stolen two phones in such a way would, if proven in court, also greatly assist their prosecution. However, difficulty lies in the fact that there was actually no phone in the pockets to steal; it was impossible to commit the intended offence. In determining liability in such a situation, the courts appear somewhat ambiguous. In However, contrasting this decision is the case of Clearly, the potential liability depends on the approach adopted by the court in each case. If an act-centred approach is taken, whereby the defendants are judged on the consequences they intended to happen, liability may be escaped. However, it appears the trend Felicity and Amy obviously thought that there was a phone in Caroline's jacket pocket and took action based on this assumption to secretly investigate the jacket in the hope of appropriating it. If this could be proven, liability for attempted theft may be imposed. [1977] Crim.L.R. 609 'Criminal Law, Texts and Materials' - CMV Clarkson & HM Keating, p.494, 5th ed., 2003, Sweet & Maxwell. [1980] Crim.L.R. 503 per Smith at [1980] Crim.L.R. 504 Law Commission's recommendation of fault-centred approach, followed by sympathetic provisions in s1(2) and (3) of Criminal Attempts Act 1981 and ultimately decision in Another potential liability for Section 1(1) of the Criminal Law Act 1977 provides that a criminal conspiracy occurs when \" Crucially, the agreement need not be physical but a mere 'meeting of the minds'. The agreed plan between Felicity and Amy to steal mobile phones undoubtedly amounts to such an offence, although it would be very difficult to prove in court. Criminalisation of these inchoate offences raises potent arguments in relation to the principle of fair labelling.
Whilst those attempting to commit an offence may indeed have a The scope of the law in this area can be criticised, as it purports to convict those whose acts are, somewhat loosely, defined as being 'more than merely preparatory', revealing a relatively broad approach to criminalisation in this area. The Offences Against the Person Act 1861, s. 18, states that \" Felicity's action of stabbing Caroline in the eye clearly falls under this statutory provision, hence it is likely that she would be found liable for such an offence. Felicity's aggressive conduct and frame of mind satisfy the elements of the statutory requirements in terms of Case law has aided the clarification of the terms 'Wounding' occurs when the skin is completely broken The Charging Standard refers to \" It is clear that Felicity, through stabbing Caroline, directly committed the offence with assault ( Adaptation of statute found in 'Criminal Law, Texts and Materials' - CMV Clarkson & HM Keating, p.593, 5th ed., 2003, Sweet & Maxwell. 290 as cited in 'Principles of Criminal Law' - A. Ashworth, p.314, 4th ed., 2003, OUP In terms of the requisite Felicity's intention is clear in this example. It cannot be denied that her conscious act of plunging an object into another's eye amounts to direct intention to inflict G.B.H. This goes far beyond satisfaction of the currently", "label": 1 }, { "main_document": "In particular, Then The angle between Thus and then To find the quaternion expression for the rotation Then rotation R1( given by an application of the results in the previous section. Similarly, Then the quaternion pair (r, As we stated earlier, there exists a relationship between SO(4) and SO(3) x So any rotation in four dimensions can be written as As previously, let Then we can write Quaternion multiplication is associative, so if we write R for the matrix representation of left multiplication by r and S for the matrix representation of right multiplication by s, then the map taking q to rqs has matrix RS, since left and right multiplications commute. So A = RS.
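The construction just described, a rotation of four-dimensional space acting on a quaternion q as q -> rqs for a pair of unit quaternions (r, s), can be checked numerically. The particular r and s below (quarter-turns about i) are an illustrative assumption, not values taken from the text:

```python
# Sketch of the quaternion-pair construction described above: a 4D rotation
# acts on a quaternion q as q -> r q s, with r and s unit quaternions.
import math

def qmul(a, b):
    # Hamilton product of quaternions written as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def norm(q):
    return math.sqrt(sum(c*c for c in q))

# Unit quaternions r and s: illustrative quarter-turn choices about i.
c, s_ = math.cos(math.pi/4), math.sin(math.pi/4)
r = (c, s_, 0.0, 0.0)
s = (c, s_, 0.0, 0.0)

q = (1.0, 2.0, 3.0, 4.0)        # a vector in R^4, written as a quaternion
rotated = qmul(qmul(r, q), s)   # the map q -> r q s

# The map is an isometry: the quaternion norm is multiplicative,
# so |rqs| = |r||q||s| = |q| when r and s are unit quaternions.
assert abs(norm(rotated) - norm(q)) < 1e-12
```

Because the quaternion norm is multiplicative, any such pair (r, s) of unit quaternions gives a length-preserving linear map of R^4, which is the point of the essay's SO(4) construction.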
The bigger problem is to demonstrate that this construction gives every orthogonal matrix. This is the problem of actually finding r and s when one is given an orthogonal matrix. We can write and These are easily seen to be isometries fixing the origin, so they are rotations in We see From RS we can find a Writing we see that Theorem 6. Proof: It is easily seen that the sum of the elements of For 2-by-2 complementary minors Then So there exist a, So each matrix of a 4D rotation is composed of matrices Problem solved! If we look carefully through the above work, we can easily see how the quaternion pair (r, Example 2. We are rotating vectors in four dimensions so let us have co-ordinate axes 1, i, j, k with O the origin. Let Then we can see that The matrix of rotation Thus we conclude our brief exploration of the link between quaternions and rotations. Consider an expression of the form where all the Addition We write where There are n We have We obtain by rewriting A system with addition and multiplication as defined above is known as an The expressions are called The complex numbers, dual numbers, double numbers and quaternions are all examples of hypercomplex numbers. In all hypercomplex number systems it is always true that and for real a, A normed division algebra is a division algebra with a norm The familiar hypercomplex number systems are all normed division algebras, although this is not necessarily true for all such systems. We will now show that R, C, H and O are the only normed division algebras. Consider the First we note that the inner product This is easily seen since by Also, For This is easily seen since, by the previous paragraph, We have Then it becomes apparent that We let We now evaluate composition, conjugation and the inner product on the Dickson double algebra, S + iS.
We have (a+ib, c+id) = (a, c) + (b, d) since (a, id) = 0, (ib, c) = 0, Then so Lastly, so Letting Expanding gives So We have For Letting Suppose Then So we can write the following, due to Hurwitz: Theorem 7. R, C, H Proof: The real numbers R are commutative, associative and have trivial conjugation. The complex numbers C form a commutative, associative algebra. The quaternions H form an associative normed division algebra. The", "label": 0 }, { "main_document": "Language has been defined as 'the institution whereby humans interact' (R.A.Hall, 1964), as a 'purely human' form of communication (E.Sapir, 1921). In recent decades, however, experiments have shown chimpanzees able to interact with humans through signs. But does this mean they possess language? Popular opinion often thinks of language as a uniquely oral form of interaction, therefore impossible for a chimpanzee to acquire, due to its lack of a vocal tract. Most people tend to ignore the deaf, who make up a certain percentage of the population and for whom the development of spoken language proves extremely difficult. It is for their sake that a non-oral form of language was developed: sign language. Moreover, contrary to common belief, it reflects all the complexity and richness of spoken language, even in its grammatical form. BSL (British Sign Language) and ASL (American Sign Language) are examples of this. In consequence, experiments were carried out teaching chimps sign language, to compare their ability to acquire language with that of humans. Washoe, for example, in the 1960s, was the first chimpanzee to undergo such an experiment. Allen and Beatrice Gardner, who introduced her into a group of adult ASL signers, carried this out. The results were encouraging: in 4 years she managed to acquire 132 signs.
Strong similarities were observed with child language acquisition: in the general word meanings (mainly general nominals) and in the way she was able to put signs together to express small sets of meaning. This was, however, done at a much slower rate. Following this success, other experiments took place, claiming major achievements in chimpanzees' understanding (necessary to fulfil the interaction role of language) as well as production of sentences and even abstraction, which is considered characteristic of human communication. What Washoe and the other chimpanzees produced, although closer to language than anything else observed, still contains many differences. They may have succeeded in producing more than single-word utterances, but these lack the complexity of grammatical structure characteristic of human language. These are merely comparable to the utterances of a small child acquiring language. One must note that at this stage the child is said to be in the process of acquiring, and not to have acquired, language. Therefore how can we say that a chimpanzee, whose language is no more developed, has language? Furthermore, these experiments present many weaknesses: their standards were very generous, the evidence is merely anecdotal, and the reports were on a particular animal in a particular experiment, when language is something widespread. Consistent evidence, in more controlled conditions, is needed for these experiments to be considered scientifically substantial. Finally, the explanation of the observations is not clear; the lack of grammar suggests that it is simply a sophisticated imitation (which language is not) of what they observe humans doing. It seems evident that chimpanzees can learn to imitate signs, put them into various sequences and use them in different contexts, but the explanation is unclear and more consistent results are necessary.
What they have produced is also less complex and sophisticated than", "label": 1 }, { "main_document": "will lose jobs, which could potentially make earning an income very difficult. Less regrettably, logging companies will lose profit. Although opposing development in Africa seems inappropriate and unjust (Peterson and Amman, 2003), without placing some limitations on resource exploitation these valuable commodities may soon cease to exist. Due to inadequate implementation of legal policies, political unrest, poor communication and a lack of commitment from officials, so far most conservation efforts have contributed very little towards reducing the use of bushmeat. In March the CITES Bushmeat Working Group published a legislative review in support of their goal to harmonize wildlife laws in central Africa (Bushmeat Quarterly, 2003). However, communication within the Central African Republic has become so difficult due to political changes that reports from the area are no longer available. Independent sources confirm that poaching is increasing, especially with the proliferation of weapons due to internal fighting. The markets of Bangui are flooded with bushmeat. Information coming from war zones within the Democratic Republic of Congo reveals a wildlife massacre by armed men. Large mammals including elephants are killed to feed troops, and the meat and ivory sold to finance fighting. In the latest report from the BCTF it was decided that solutions to the bushmeat crisis require international collaboration on policy reform, sustainable financing, long-term support for protected areas, developing protein and income alternatives, and awareness and education campaigns (Gartland, 2001). Despite efforts from conservation organisations and governments to compose regulations considering all the above, large numbers of wild animals are still being killed for their meat.
People driven by survival and long-standing traditions, who have little else to rely on for food or money, will continue to hunt. While the consumption of certain species may seem unacceptable to many in the western world, for people in central and western Africa all wild animals are regarded as a valuable source of meat (Peterson and Ammann, 2003). People battling to ensure the continued existence of their own families understandably may not share the developed world's concerns about the death of another species. Therefore raising concern in the countries where it matters most may prove difficult. Successfully banning the use of bushmeat not only seems an unlikely goal but in many ways an unsuitable one. Allowing the use of bushmeat to continue at a sustainable level may prove to be a more realistic aim. To do this, human needs must be met without exceeding acceptable losses in biodiversity (Robinson and Bennett, 2000). However, considering that nearly half of today's two hundred and fifty extant primate species already face the threat of extinction (Mittermeier, Konstant, Nicoll and Langrand, 1992), and taking into account the prediction that one-fifth of all living species could disappear within the next thirty years (Washington Post, 21 April, 1998), controlling the amount of bushmeat being harvested to a sustainable level may also be quite a challenge. With three hundred and fifty thousand humans being born each day (BCTF, 2003), the consumptive requirements for protein will continue to grow. As a result of an unfair and unjust distribution of resources", "label": 1 }, { "main_document": "Emo-tion is an idea for a retail shop based in the town of Cirencester. Emo-tion takes advantage of a gap in the market for musical equipment and accessories aimed directly at 14-19 year olds. There are many towns across the UK with no retail outlet aimed directly at this age range; the scope for growth is large.
Emo-tion is jointly owned by We are looking for The idea for Emo-tion was formed early this year when the need for a shop selling commodities aimed at 14-19 year olds appeared in Cirencester. Local students at Cirencester College also felt that Cirencester needed an outlet where they could buy guitars, popular DC/Converse shoes and 'Emo' accessories. Emo is a name given to a fashion culture in teenage boys and girls which involves wearing dark clothing, studs and black make-up and customizing old clothes. Emo-tion is a shop intended to fill this gap in the market within the town of Cirencester and surrounding villages, with a combined estimated population of 40,000. The market for the idea of Emo-tion is nearly completely open. There is no shop in Cirencester selling guitars or anything music-related apart from CDs. There are, however, shops selling music-related products in Swindon (over 15 miles away). Market research has shown that the public would use Emo-tion over travelling to Swindon. In the questionnaire for Emo-tion, 100% of people said they would visit and buy items from the shop. Emo accessories are also hard to come by in the local area, and placing all these products under one roof would provide a unique selling point to the company. Fashion trends currently have a life span of about 5 years. The company believes, however, that Emo-tion could adapt to any fashion change and still survive. A need for musical equipment will always exist, and it is possible a contract could be set up with local schools to fulfil this need. Appendix 2 contains Emo-tion's proposed products. For the musical market, Emo-tion will have 100% market share within the town of Cirencester. Emo-tion will have 80% of the market for DC and Converse shoes but only 20% for shoes in general, as there is competition already in Cirencester. Due to this, Emo-tion will only sell 'Emo' shoes. Most of Emo-tion's customers will be aged between 14 and 19.
At peak times such as Christmas, we expect parents of this age group to purchase gifts from the shop. The store would be contemporary in its looks. The front of the shop would change every month, as market research has shown this to be effective. By asking local students to fill out a questionnaire, we have been able to discover that Emo-tion would be used most in the winter. This enables us to plan our launch for mid-autumn. This launch time will ensure the shop is operating efficiently when winter comes. Financial forecasts also take this into account. There is no competition in the musical market in Cirencester. Some competition exists for shoes from Clarks - but Emo-tion offers shoes aimed directly at the age range", "label": 1 }, { "main_document": "The speech also contains interpersonal devices, of which the article is completely devoid, as well as more emotionally charged vocabulary and structural parallelisms. Its syntax and lexis are much easier to understand than those of the article, which contains more detailed information on the subject itself, including specialist terminology. All these factors work together to the effect that Blair's speech appeals to a wider audience. However, both texts are examples of genres moving from a closed, limited register (Halliday and Hasan 1989: 39) towards a more open register. This is a sign of a 'democratisation' of discourse genres, linked to the phenomenon of 'conversationalisation' discussed by Fairclough and Cameron (in Cameron 2001: 130-131), an instance of social change being mirrored and at the same time facilitated through language. Discourse communities that were formerly restricted to an elite membership, like academia and politics, are increasingly open to the wider public. These discourse communities increasingly seek wider support, in terms of votes and funding for example. In both genres, texts are becoming more 'promotional' (Fairclough 1992 in Cameron 2001: 130).
Whilst this function is completely accepted in the field of politics, academic writing is generally still associated with objectivity and neutrality, although it \"is not about performance, it is about persuasion\" (Murray 2005: 25). Text 1 and text 2 both 'sell' a political agenda, but their 'sales tactics' vary depending on audience and situation. Both texts are less formal than other members of their class of communicative events, but they retain a substantial number of traditional features in order to enable their addressees to categorise them. As genres are evolving with social and cultural change, discourse analysis will increasingly have to deal with texts that borrow many features from other genres. However, it is likely that most texts will still aim to be recognisable in terms of genre, because 'categorisability' will remain an important factor in the addressee's identification with the text.", "label": 0 }, { "main_document": "motivation, researchers have claimed that opportunity variables including available resources for people (such as the quality of equipment and material), working conditions, leader behaviour, and co-ordination of team members are also determinants of job performance (Waldman and Spangler 1989; Arnold et al 1998). Matsui et al (1987) showed that group processes may enhance individual performance: having group goals causes people to accept more difficult goals and develop a sense of shared responsibility for the achievement of individual goals. Moreover, leadership style has been considered as a factor affecting job performance (Waldman and Spangler 1989). For example, leadership which emphasizes the use of contingent rewards will enhance individuals' performance-outcome expectancy beliefs and hence improve their performance. Therefore, based on the above analysis, it is fair to conclude that job performance is determined by a variety of factors.
A happy worker might be a productive worker, but a worker's productivity may also result from individual ability and previous experience, or from motivation by the potential chance of higher pay and promotion. In addition to the determinants of job performance, scholars have also been interested in why some people report feeling satisfied with their jobs, while others express much lower levels of satisfaction (Locke, 1976). According to Agho et al (1993), job satisfaction is defined as the extent to which employees like their work. It is an employee's attitude towards their job or work environment; it can be either negative or positive. Moreover, two general categories of antecedent variables associated with job satisfaction have been defined, namely environmental and personal characteristics (Ellickson and Logsdon 2002). Environmental determinants refer to the factors associated with the work itself or the work environment, such as pay and promotion, supervision, fair evaluation of work, etc. Ellickson and Logsdon (2002) indicated that promotional opportunities and pay have a positive link with job satisfaction. Brown and Mitchell (1993) found that there were significant negative links between organizational obstacles (such as insufficient training, an unsafe work environment, uneven work distribution) and employee job satisfaction in their study of bank employees. In addition, positive relationships between supervisors and employees contribute to a higher level of job satisfaction. Ting (1997) reported that government employees who enjoyed a supportive relationship with their supervisors experienced higher levels of job satisfaction than those who did not. Finally, Blau (1999) provided evidence for the positive relationship between employee performance appraisal satisfaction and overall job satisfaction. Personal factors, in contrast, refer to individual attributes and characteristics such as age, gender and personality.
Although Ellickson and Logsdon (2002) found that gender and age had a weak relationship with job satisfaction, personality might play a role in determining job satisfaction. For example, optimistic people might be more easily satisfied and feel happier than pessimistic people. Besides environmental and personal characteristics, social, cultural and situational factors may also affect job satisfaction. For example, Herzberg (1987) indicated that managers' and skilled workers' satisfaction was driven by 'motivator' factors, while unskilled workers' satisfaction appeared to be dependent on hygiene factors. Thus a productive worker might be a happy worker", "label": 0 }, { "main_document": "The conquest of the Inca empire in 1532 saw the creation of a nascent model of colonial government, beginning a period in which Spain would endeavour to establish control over its new territory. Owing to its erstwhile conquering successes, Spain had an established tradition of assimilation, bringing conquered peoples under the crown's jurisdiction through conversion and allegiance to the Catholic Church. The notion of a 'spiritual conquest' pertained to the degree to which, following initial military and political subjugation, the colonists were able to proselytize the various indigenous inhabitants of the newly acquired regions. By the time of the conquest of Peru, however, enthusiasm for evangelization had declined markedly within the Church and governmental authorities, with circumstances in Peru itself further constraining the enactment of a theological conquest. Internecine feuding, sustained Indian insurgency, and an inauspicious set of geographical features meant that indigenous groups largely retained their traditional spiritual tenets in the initial years.
However, even during the ensuing years of greater royal colonial control and organization of the Indians under Francisco de Toledo's Viceroyalty, full evangelization became effectively marginalized by the priority of establishing control over a part of the crown's empire that had strayed unpalatably far into private hands. As such, whilst a varying degree of \"syncretism\" Congruently then, from the outset of Spain's presence in Peru, conversions were sought as a means of acculturating and assimilating distinguished local caciques of the former order under a new religious framework, coinciding with the extirpation of that of the old. Using religious conversion as part of the integration of eminent native contingents, the Spanish were from the beginning able to insert themselves as overlords of an existing hierarchical structure in place of their predecessors. Evangelization became utilized as a political expedient, a noble-orientated policy of Christianization being adopted without an extensive aspiration to radically subvert the spiritual worlds of the politically negligible masses of the lower strata. Whilst sixteenth-century Peru did not then witness a widespread and absolute alteration of the Andean spiritual world, evangelization served as an efficacious instrument of requisite assimilation, propagation of the Catholic paradigm aiding the creation of a pervasive imperial structure. Edwin Williamson (London, 1992), p. 101 The conquest of Peru, like its Aztec precursor, provided the Catholic authorities with access to a highly populated area of the New World. Sixteenth-century Peru, however, was not to witness a pervasive conversion of the pagan masses, as for the Spanish crown and its conquistadores, motivation for conquest was more firmly rooted in secular, material aspirations.
Belief in the efficacy of evangelization, formerly promoted as the justification and focus for Spanish presence in the Americas, had eroded substantially as a result of practical experience. Colonial authorities became somewhat less concerned with achieving the widespread, unequivocal conversion that had theoretically provided the impetus for previous conquests. Tellingly, there was no grandiose promotion equivalent to the Aztec conquest's symbolic deployment of twelve Franciscan friars at Veracruz in 1524. Whilst the institution had around 350 priests by the 1560s, its clergy had principally tended to the spiritual welfare of their countrymen, with missionary endeavour", "label": 1 }, { "main_document": "This flask belonged to an army sergeant William Belcher from Abingdon (Berkshire). It was used in the late 19th century. You would think that this is made of horn, but it is in fact an imitation. What does that tell us about the owner? The label has been designed to provide the reader with different levels of information, bearing in mind that there are various kinds of museum visitors, with different expectations and levels of engagement. Thus the first line contains only the basic knowledge about the object, informing that it is a powder flask. This piece of information is for those who only want to know what the object is and move on with their museum visit. The second, informative part of the label is designed to provide the more interested visitor with some more in-depth information about the flask. This is, however, not too detailed, mainly because of the limited availability of information about this particular object. This would not be a crucial part of an exhibition, therefore a thorough analysis is not essential at this stage. The third, interpretive part contains more specialised information, especially of use for people connected with museums or students. 
I have decided to use both informative and interpretive kinds of label and combine them to create one coherent piece of information. Using only one type would be over-simplistic and aimed at one particular group of visitors, whereas combining the two types provides a reasonable amount of information for everybody. I have used different sizes and styles of font to emphasise different levels of information. Thus the basic information about the object is written in bold and in a larger font. The main body of text was written in font size 20: the object is small enough that people would look at it from a relatively short distance, so font size 20 would be sufficient. In terms of type of font, I chose a fairly simple form, to make the text easier to read, but different from the usual Times New Roman. Adequate contrast should be maintained, especially in dim lighting, meaning that the closer the colours keep to black and white the better. Black type on a white background is easier to read than light type on a dark background. Moreover, it is recommended to line up the text to the left (ranged left), as it is easier to read than type lined up on both the left and right (justified type). Since the label would be placed in the Museum of English Rural Life, its design should match the style of the labels already used there. They are all fairly simple and mostly in black and white, therefore I kept my design in a similar form. The text itself aims at answering the questions that the viewer might ask when looking at an object he/she does not know. The content of the text depends on the kind of museum the object would be displayed in. 
As the potential place of display for the flask would be the Museum of English Rural Life, the text should", "label": 0 }, { "main_document": "and level of economic development between negotiating partners gives rise to the issue of adjustment costs, that is, the political and socioeconomic expenditure required to bring their transaction standards into line with those of the trading partner. Moreover, the asymmetry of the bargaining structure between the United States and Latin America is likely to generate unequal cost distribution. In other words, larger economic gains by weaker partners might be offset or reduced by higher adjustment costs (Bouzas and Ros 1994, 27-28). In the bigger countries, although relatively lower costs are imposed on them, RTAs are not necessarily welcomed by domestic actors either. Industries negatively affected by market integration are highly likely to demand as much compensation as possible (Bouzas and Ros 1994, 28). However, unlike the European Economic Community (EEC), which tackled this issue through the establishment of adjustment funds, NAFTA has no redistribution mechanism to cope with the adjustment costs of developing partners (Wyatt-Walter 1994, 87). This means Mexico's adjustment costs might be much higher than those of European countries that later acceded to the EU (e.g. Britain, Denmark, Greece and Austria), which had to accept the strict criteria imposed by existing members of the European Union. Such high adjustment costs may make developing countries reluctant to pursue deeper integration with developed countries. As stated above, considering the political and economic motives of cooperative behavior between state actors, several questions arise from the intentions of the United States and Mexico regarding participation in the negotiation for NAFTA. 
Why did the United States accept Salinas's proposal without hesitation even though it already had overwhelming political and technological superiority over Mexico? Why did Mexico propose the market integration initiative with the United States despite a considerably high membership fee - adjustment costs - knowing well that it could not overcome its backwardness relative to the United States? The realist answers to such questions begin with the following perception. Grieco's 'binding thesis' offers one of the relevant answers, explaining that weaker partners will pursue trade agreements to secure 'voice opportunities' for themselves (Grieco 1993, 331). According to his hypothesis, collaborative arrangements in which developed and developing nations jointly participate, despite the gap between their bargaining powers, can be assumed to contribute to restraining the unilateral use of power by stronger partners. Similarly, economic disparities among state actors are also regarded as a catalyst rather than an impediment in promoting economic integration (Mattli 1999, 15). In particular, a higher level of economic development and a larger membership of existing trade agreements tend to stimulate more active behavior in participating in the regional group. The weaker outsiders, in this process, have an incentive to pay a high membership cost, such as radical structural adjustment at the domestic industrial level (Mattli 1999, 63). Mattli defines economic integration as 'the internalization of externality', a formulation that shows the influence of new institutional economics. In the light of their hypotheses, Mexico's adjustment costs caused by unilateral trade liberalization in the 1980s,", "label": 0 }, { "main_document": "This report aims to analyze the relationship between GDP and imports of goods and services of three countries with different economic status. 
The countries are the United States (a developed country), India (a developing country) and Sierra Leone (a less developed country). Countries with different economic status are chosen to make it easier to contrast the characteristics of countries with different GDP levels and growth rates. For example, a developed country, the US, has much higher imports compared to a developing or a less developed country, although the level of its imports constitutes only a small part of its total GDP. This difference in imports portrays the GDP gap between a developed country and a less developed country. GDP and imports are chosen as indicators of economic development. It is expected that countries with high GDP levels will also have high import levels, though not high enough to drive the country into a trade deficit. A less developed country, by contrast, is expected to show unstable levels of imports (because of an unstable economy and unstable exchange rates) and thus signs of trade deficits. The data have been further investigated by applying correlation and regression analysis between the GDP and import figures of each country. The GDP and import values range from 1970 to 2003 and have been acquired from the World Bank database. The GDP and import values are in constant 1995 US dollar prices, which implies that the values between 1970 and 2003 account for inflation, and thus have been converted to real GDP and real import figures. Likewise, to make the data comparable with each other, the GDP and import values of all three countries are in a single currency, US dollars. Economic and Social Data Service, World Bank. According to economic theory, imports can depend on multiple factors such as the level and dynamics of domestic income or economic integration. 
However, with respect to the aim of our analysis, the emphasis will lie solely on the relationship between Gross Domestic Product (GDP) and imports. By definition, GDP is a measure of aggregate output in the national income accounts, equalling the sum of incomes earned domestically by both nationals and foreign citizens working in the country. In relation to imports, GDP has to consider the level and dynamics of its components and thus differentiate between them, i.e. investment, consumption, public expenditure and exports. However, economic theory - ceteris paribus - suggests that when GDP rises imports rise as well, for example because disposable income increases. In real life, on the other hand, there are many exceptions to this rule. Blanchard, Olivier. Piana, Valentino. Imports. Over time, the general trend GDP seems to follow is a steadily increasing one. The reasoning behind this is the following: as free market economies aim to maximize GDP, they will produce at least the same output in subsequent years as they have in previous ones. In addition, however, they will aim to increase the level of output by improving the", "label": 0 }, { "main_document": "over 38°C for 2 or more weeks and extreme fatigue or night sweats. When CLL patients do require treatment, there will be few occasions when they require in-patient treatment. Generally, a patient will require in-patient treatment only if they develop a severe infection. If an individual has no significant symptoms at the time of diagnosis, they will attend regular check-ups without going on medication. Currently there is no conclusive evidence that suggests early treatment prolongs survival. The criteria for beginning therapy are: In summary, CLL is a form of cancer affecting blood-producing cells in the bone marrow, with incidence increasing with age. The majority of patients have a slowly progressing disease, with a survival of 10 years or more. 
CLL is not considered curable and indeed patients are not treated until they become symptomatic or when blood results indicate that the disease is progressing. The mainstay of treatment is chemotherapy, usually taken orally, and the majority of patients continue to have a good quality of life for many years. Current areas of development are ways in which to diagnose CLL and how these may correlate to disease progression. The important issues raised by Over the last year It is important to consider, if his CLL is progressing, how best to manage him whilst he is in hospital and what can be done for him in the community.", "label": 1 }, { "main_document": "In order to ascertain when modern science was born, a comprehensive definition of the word 'science' is paramount. One such example can be found in the Oxford English Dictionary, characterizing modern science as 'knowledge involving systematized observation, experiment, and induction'. Therefore, science can be viewed not as a series of discoveries, but rather as the system by which these may be achieved. Using the key points in this definition, therefore, the title of 'first scientist' should be given to the person with the earliest records of these practices. The new method can be seen in comparison to customary techniques of obtaining general knowledge at the time, and as a revolutionary way of thinking. The suggested beginning for science is during a period of advancement in a broad range of fields, termed the 'Renaissance' by French historian Jules Michelet and meaning 'rebirth'. This progress, made through approximately the 14 Galileo Galilei (1564-1642) was born in Italy towards the end of the Renaissance era and became credited with such significant contributions to the field of science that he has become a candidate for the prestigious title of 'first scientist'. Such a notable claim, however, is most definitely controversial, with debate as to whether Galileo's work warrants such an accolade. 
Therefore, when attempting to answer the question under consideration, a variety of perspectives need to be considered in order to reach a balanced conclusion. Prior to the 14 This certainly would not be deemed even remotely 'scientific' according to the views expressed in The Western European society of the early 14 This hindered progression during the period, with no scope for building on the work pre-established by others. It is easy to imagine then that great structures such as the Colosseum and the Pantheon in Rome could seem intimidating to those whose understanding of how to construct anything of the like had been long buried. Consequently, much of the esteemed knowledge had been unchanged since the great days of Ancient Greece, and the inferiority felt by the people caused them to 'accept the teaching of ancient philosophers such as Aristotle and Euclid as a kind of Holy Writ' The key point to be noted here is that expertise on the ways of the world was not discovered by the modern method of observation; instead it was provided from the conjecture of great minds. Theories were founded on imagination and often on endeavours for elegance in a system, and were successful when they appeared to fit the world. The Renaissance was a significant time of change and is described in John Gribbin's Within Aristotle's teachings, the universe was shown to be geocentric, and this belief allowed the Christian creation story to situate 'humankind and hence the earth at the centre of a divine plan'. The clergy were in command of all universities in Europe and could be selective over the publishing of books. They are described in Therefore any theories or new ideas which could threaten the Church's authority could be filtered from the public domain. This effect can be seen in different locations based on the attitude", "label": 1 }, { "main_document": "themselves as much as they can from the countries they rate (Kerwer 2005, 465-8). 
Sometimes, they distance themselves too much, collecting less information in developing countries than in developed ones. As a result, those countries may receive less investment than they would otherwise deserve (Ferri and Liu 2005). Most of the emerging countries have done their homework. Even without formal imposition, they have sought to adjust their internal structures and their economic activities to conform to the 'mental framework of rating orthodoxy', whose norms and policies converge around American ideas of best practice (Sinclair 2005, 17). Another crucial difference between IFIs' and RAs' conditionality lies in the room for manoeuvre accorded to governments. At the beginning of this century, the IMF and the World Bank replaced the SAPs with the poverty reduction strategy papers (PRSPs). This change was a response to criticisms about the effects of the SAPs on developing countries, which sometimes witnessed an increase in the number of the poor after the adoption of these programmes. PRSPs allow countries to invest more in social policies: poverty reduction became an important goal along with inflation control and fiscal deficit reduction. By contrast, RAs do not promote social policies publicly, and it is unlikely that they will ever commit to poverty reduction, since they do not have to negotiate terms of agreement with countries - they must keep their distance to make their judgments; if countries want access to international credit, they should accept the agencies' verdict or else look for another institution, maybe go back to the IFIs. The very fact that there is no formal imposition of conditionality frees RAs from the need to justify the defence of certain policies or the rejection of others. If a government wants to raise investment in social policies, it will take the risk of being downgraded or it will have to reduce expenditures in other areas to offset the costs. 
Rating agencies will not send a third institution, as the IMF can send the World Bank, to promote social programmes during periods of economic hardship. A good example of the difference between IFIs' and RAs' attitudes towards social policies is given by Brazil's recent agreement with the IMF. In 2002, shortly after the elections, the Brazilian president, Luiz Inacio Lula da Silva, negotiated with the IMF an increase in investment in social programmes, such as The project consists of giving a small wage to poor families, who in exchange must send their children to school and take them to the local medical centre to be vaccinated. It benefits over 40 million people with very low income. The programme has not hindered the fiscal deficit reduction - on the contrary, Lula's administration has produced a surplus - or inflation control, because the government has cut investment in other areas, such as infrastructure. As Brazil has ended its agreement with the IMF and paid its debts, one would expect that the government would be free to improve its social programmes. According to the agencies, however, the budget cuts were not good enough. The increase", "label": 0 }, { "main_document": "2003b) and have taken this opportunity to look at ethical issues from different angles. I had never appreciated that other nurses could be the third party you would be attempting to influence in triadic advocacy (Wheeler, 2000), or the conflicts that this can cause (Hamer & Collinson, 2005), and had not considered how the lack of autonomy of nurses and nursing can affect the achievement of patient autonomy (Tschudin, 2003a). I am also more aware of the many potential conflicts of duties that are present in ethical reasoning and will aim to acknowledge these in my future practice. 
Patients are often vulnerable and require someone to 'speak up' for or advocate on their behalf to achieve their desired outcomes (Wheeler, 2000). I feel I now have a greater appreciation of the nurse's role as advocate in promoting and protecting the interests of patients who are unable to do so themselves (NMC, 2006), and I hope this will have a positive effect on the care I am able to give in future.", "label": 1 }, { "main_document": "Travelocity is less popular than its competitor Expedia (Appendix 6). In the Internet era, a Web site is critical to the success of a business (Jeong, 2003); developing and maintaining a high-quality Web site should always be on the agenda of all organisations. Besides ease of use and useful content, consumers also need interactivity. Thus an interactive helper, high-quality photographs, featured comments from previous customers and personal recommendations are suggested by Morgan (2001) for good Website design. Furthermore, a Web-based community can be built to enhance B2C & C2C communication, trust and social presence (Gefen & Straub, 2003). Many authors emphasize that understanding consumer behaviour is crucial for both existing and potential businesses (Fram & Grady, 1995; Lohr, 1999; Law & Leung, 2000; Lohse et al., 2000). Organisations should make better use of the personal information collected from registration in order to strengthen customer relationships and know their target audience. Trust is the foundation of successful customer relationships (Papadopoulou et al., 2001) and customer retention (Reichweld & Schefer, 2000); since brand recognition contributes to trust building (Turban et al., 2004), a trustworthy brand image is the best weapon for all organisations. Meanwhile, Adamal. (2001) suggest trust is also based on security. 
Considering the previous implications, organisations would do better to raise customers' awareness of security issues when purchasing (e.g., AirTran) and to notify them of or highlight some important policies (e.g., Travelocity). Needless to say, all of this should be based on a complete and established legal and security system. For the aggregator, a fixed package and price can hardly satisfy customers any more, especially when price is not a competitive advantage in the Internet era (Porter, 2001). After fulfilling its value proposition, the aggregator must customize everything, including pricing (Tapscott, 2000). Simple aggregation can hardly meet consumers' needs; only by adopting customization and flexibility can aggregators compete with other intermediaries. For the integrator, since they are already good at customizing travel products for customers, adding value in service and providing a 'service-enhanced customer solution' should be their next proposition (Tapscott, 2000). In a word, consumers prefer personalized, customized and individualized products (Popcorn, 1991), and a 'one-stop shop' of personalized tourism products will be key to success for all intermediaries (Buhalis & Licata, 2002). The price war is continuing, and it is always a major issue for principals. Intermediaries' and many hotel brands' Websites offer a 'best rate guarantee' to attract and retain customers, but destination, car hire or airline companies seldom do. It is predicted that future theme parks and attractions will offer more value pricing (Milman, 2001); however, a long-term strategy is not based on pricing. Backed by the Disney brand, its resort hotels would not be commodity products. As aforementioned, brand is the best weapon, especially for such a powerful brand as Disney. In addition, their distribution channel can be improved by assisting and cooperating with intermediaries (e.g., Travelocity). At present, Disney attempts to aggregate travel products. 
'Add ground/air transportation' is a good idea to 'focus on their core business and let partners do the rest' (Tapscott, 2001). Even though it does not work well at the moment, proper ICT support can help a destination gain competitive advantage by either maintaining", "label": 0 }, { "main_document": "Environmental law inevitably affects every citizen of the EU. The issues transcend national boundaries, for instance regarding river pollution or global warming. Therefore, the EU has gradually formed its own body of environmental law, \"populated by a staggering array of actors and interests\". Now, \"the EC regulates the 'European environment' as a whole\". Yet these rules have emerged out of an ad-hoc evolution, which can lead EC environmental law to be viewed as \"a bundle of paradoxes\". There is a conflict between the furtherance of the Community's economic aims and environmental protection. There are concerns that \"the policy for the environment is primarily directed towards the completion of the internal market\". Nevertheless, Lord Clinton-Davis claims that \"environmental policy has been one of the great success stories of the Union\". This could be pre-emptive, given the feared ignorance of environmental issues amongst the new member states. J. Peterson & E. Bomberg - \"Decision-making in the European Union\", page 173. Damian Chalmers - \"Inhabitants in the Field of EC Environmental Law\", in Craig & De Burca eds. - \"The Evolution of EU Law\", page 686. Damian Chalmers - \"Inhabitants in the Field of EC Environmental Law\", in Craig & De Burca eds. - \"The Evolution of EU Law\", page 653. K. P. E. Lasok - \"Law and Institutions of the European Union\", page 784. Lord Clinton-Davis, in Han Somsen ed. - \"Protecting the European Environment\", page 1. At the outset of European integration, the priority was maintaining peace by way of economic integration and the creation of a common market. 
Thus, environmental legislation up to 1972 \"mirrored the economic approach of the EEC Treaty\". The environment was protected as a secondary and perhaps inadvertent consequence of some economic directives. The Community's attitude to environmental protection at this time was \"incidental\" and \"inarticulate\". In 1972 international cooperation on environmental matters began with the UN Stockholm conference. This led to the Paris summit, where the Community declared that \"economic expansion is not an end in itself\". This assertion proved instrumental in the \"attempt to wrest EC government out of the narrow economic sphere\". It is essential that a credible environmental strategy is developed in conjunction with economic goals and not subsidiary to them. Ludwig Kramer - \"European Environmental Law\", page 80. 1967 Directive on Classification, Packaging and Labelling of Dangerous Substances 67/548 EEC. L. Brinkhorst - \"The Road to Maastricht\" (1993) 20 ELQ 7, quoted in Joanne Scott - \"EC Environmental Law\", page 41. Community Declaration at Paris Summit, October 1972, quoted in Wolf & Stanley - \"Environmental Law\", page 98. Damian Chalmers - \"Inhabitants in the Field of EC Environmental Law\", in Craig & De Burca eds. - \"The Evolution of EU Law\", page 653. The first EC environmental measures were founded on the approximation of laws, under Articles 94 and 308 EC. Article 94 allows measures that secure the harmonisation of laws affecting the common market, for example regarding the lead content of petrol. The rationale is that differing environmental standards distort competition, thus jeopardising the efficacy of the common market. Industrialists would have to", "label": 1 }, { "main_document": "Cirilo Villaverde (1812-94) was a Cuban writer and political activist, ardently anti-colonial and abolitionist, who experienced imprisonment and lengthy exile for his activism. 
His best-known novel is Cecilia Valdés. He originally introduced her in 1839 in a twenty-five page tale that primarily attacked sexual licentiousness. But by his final draft in 1882, Villaverde meant for Cecilia to be interpreted as the embodiment of the national tragedy that was colonial, slave-holding Cuba in the 1830s. It is no coincidence that his protagonist shares his birth date of 1812 as well as his initials. Born at the dawn of a brief constitutional period and in the same year as the most serious slave uprising, Villaverde struggled throughout his life for a new Cuba. In her way, so did Cecilia. Villaverde's 'tragic mulatta' is far more complex than the conventional archetype of the fallen multiracial woman, who becomes depressed, mentally ill, or even suicidal, for engaging in interracial liaisons. Villaverde's narrative of Cecilia's life is his sophisticated illustration of the cause of his nation's ills, and deserves further inquiry. How far Cecilia is a 'tragic' figure with respect to her family life, her racial origin, her class position, her gender, and her sexual licentiousness will be examined (which were all, of course, intrinsically linked). Then it will be concluded as to whether Cirilo Villaverde intended for Cecilia Valdés to be a tragic figure. As the novel begins with Cecilia's birth, we shall consider her family situation as the first element of her 'tragic' existence. She was conceived by forbidden interracial love rather than cruel exploitation, and Don Cándido was supportive at the birth, exerting himself to ensure that his baby daughter would be baptised with the name Valdés. Cecilia's fate was controlled to a great extent by Don Cándido, yet he was not able to protect his daughter from his son because of his own pride. With no father or mother, Cecilia lacked a meaningful disciplinarian or responsible nurturer. 
Her grandmother certainly was a positive role model and a moral voice, but her entreaties were largely empty threats, and she influenced Cecilia less and less as the girl grew older and past being afraid of stories, the grandmother embracing isolation and religion instead of continuing to guide Cecilia. This somewhat 'careless upbringing' left the 'tragic mulatta' with a 'vagabond nature'. In the end, it is the Cuban establishment that saves Cecilia when the magistrate attempts to deter Don Cándido. Villaverde's portrait of Cuba as a wholly rotten society is not quite complete, then. The surname Valdés was sufficiently 'white' to offer advantages to orphans of 'white' appearance. However, as the practice became more well known, it was of course more likely that an individual named Valdés would be recognised as an orphan. Cirilo Villaverde. The incest, a traumatic theme found in most Cuban abolitionist literature, is central to Cecilia's relationships throughout the novel. Incest represents the ultimate moral debasement of a society, but in this case it is not of the most abhorrent sort - that between parent and child - but between the more distant half-brother and sister. The incest actually has a slightly positive element,", "label": 1 }, { "main_document": "There are many references these days to e-something, and this essay looks in particular at what e-learning is and how it can be used in the learning environment. The relevance of e-learning to meeting today's employment skills and the identification of which skills are sought after by employers are investigated. Not forgetting government policies, these are reviewed to ascertain whether the resultant initiatives to widen participation in learning can address the skills gap. 
E-learning has been around for about 10 years (Pollard and Hillage, 2001), but has become more prominent recently with the gathering momentum of ever-increasing technological advancement, which drives towards an almost essential use of computers for even the smallest tasks. However, there are differences of opinion about what constitutes e-learning. According to Elliot Masie (quoted by Cross, 2002, in Pollard and Hillage, 2001), "e-learning is the use of network technology to design, deliver, select, administer and extend learning". What does all this mean? Pollard and Hillage (2001) provided the following explanation. In its simplest form it can provide employees with online information, e.g. information about a company's products or personnel, or even information to support employees in doing their jobs. Online learning provides new knowledge or skills which are learned via interaction; this might include induction programmes or business applications. Even more complex is multi-dimensional learning, which can provide a system for capturing and sharing expertise and knowledge so that it can be made available to other members of an organisation. These often need to be supported by online conferencing or e-mailing facilities between peers, tutors or coaches. IBM's management development programme is a good example of this. This is a multi-faceted Web-based training programme designed to teach managers coaching skills. Learners work their way interactively through set scenarios and are provided with feedback on their chosen solutions. The increasing number of managers using it confirmed the programme's success (Rosenberg, 2001). E-learning also has an administrative aspect in that it can be used to register learners online and record learners' achievements. The proliferation of computers in the home and workplace offers an excellent opportunity for the use of e-learning applications. 
To show the increase in computer use, compare the following: in 1990, from a class of 60 people, only 10 had computers at home; by 1997 this was estimated to be 7 out of 10 - a marked increase (Hallam, in Jane Field (ed.), 1997). By 2006 the e-learning Foundation aims to ensure that every school pupil has access to a personal portable computer. It is clear, then, that up-to-date IT skills are an essential part of life. Looking at the world of employment, Appendix A describes the evident demise of manufacturing and manual jobs and the rise of more service-related professions (Baldock et al 1999). This shift replaces traditional manual skills with greater demand for knowledge and people skills, as well as information and communication technology (ICT). There are few workplaces these days which are not supported by computerised systems; even working in the corner shop involves computerised cash registering, as does", "label": 1 }, { "main_document": "glorification of labor\" and a \"factual transformation of the whole of society into a laboring society\" (1958: 4). It is characterized by the rise of the social, which not only blurred the clear-cut distinction between the public and the private, but turned both of them into the social sphere. \"It also signified for Arendt a public arena in which human beings are assigned a common identity or a common characteristic\" (Bradshaw, 1989: 12), which substituted for the public sphere of equality based upon plurality. Here, plurality means Arendt's account of the plurality being reduced into \"sameness\" and \"conformity\" echoes Marx's story in that use value hides behind the exchange value of commodities on the market. Money, or exchange value, enables qualitatively diversified objects to be quantitatively measurable and comparable. Therefore, Marx's exchange value is also a historical concept, which is specific to the capitalistic production society. 
The concept of exchange value only emerges on the stage after the rise of the social sphere. Marx is definitely not an apologist for capitalist society. He is instead a great critic of capitalism, like Arendt herself. Both Marx and Arendt are aware of the potential threat that the rise of the social, or the dominance of capitalistic production, poses to human freedom, but they provide different remedies in their own ways. Labor is the central theme of Marx's analysis. Marx's "Labor created man", in Arendt's eyes, is a conscious break from the traditional description of man as a (1961) However, it is unjustified to assume Marx's concept of labor to be the same thing as Arendt's. Arendt's labor, as we have discussed, relates to the metabolism with nature, to the biological needs of human beings. By contrast, Marx's labor is inherited from Hegel's version, especially in the Phänomenologie des Geistes. Hegel conceived man as "the result of his own labor". Their distinction can be made clearer by Marx's differentiation of human labor from animal labor: "Animals 'produce only under the domination of immediate physical need, while man produces independently of physical need and truly produces only when free from this need' " (Suchting, 1962: 50). For Marx, labor is more than an autonomous activity of biological life. It marks a higher level of self-consciousness, by means of which man can fulfill his potential. However, under the conditions of capitalistic society, labor is deprived of humanity and becomes alienated. Alienated labor signifies that in capitalistic production, man is estranged from the product of his labor, from his life activity, from his species-being, and from his fellow people, which corresponds to Arendt's description of dehumanized labor. In Marx's view, human emancipation from alienated labor is immanent in labor itself, not achieved by any transcendent power. 
By pointing out the paradox of capitalist production, Marx forecast that capitalism was digging its own grave and that a revolution organized by the proletariat would eventually transform capitalist society into a communist one. In the future human community promised by Marx, alienated labor will be abolished. Labor returns to being man's intrinsic nature, a voluntary need of the human species, free from any
Graphically, when the price of illegal digital music decreases relative to that of CDs, a consumer's budget constraint rotates outwards (E to E'). A consumer initially at equilibrium This substitution effect is represented by the distance Based on such an analysis, Peitz and Waelbroeck claim that downloading is responsible for a 20% drop in global music sales from 1998 to 2002 Peitz and Waelbroeck, "The Effect of Internet Piracy..." Oberholzer, Felix, and Strumpf, Koleman, "The Effect of File Sharing on Record Sales: an Empirical Analysis", Harvard Business School. Internet. March 2004. Accessed at: Goods with a similar utility Bockstedt et al, "The Move to Artist-Led..." Peitz and Waelbroeck, "The Effect of Internet Piracy..." However, a study by Oberholzer and Strumpf finds that file sharing has no such effect on record sales; indeed, they argue that 5,000 downloads are needed to displace the sale of a single album This finding can be explained if illegal music is considered to be an inferior good When the budget constraint in Fig5 rotates outwards (E to E') due to a drop in the price of digital music, a consumer may move to indifference curve IC'', where the new equilibrium point Oberholzer and Strumpf, "The Effect of File Sharing..." A good for which quantity demanded falls as consumer income rises. Here, physical record consumption remains unchanged at y. File sharing could even have a positive effect on CD sales: if the new indifference curve on E' is located at IC''', with equilibrium In the move from IC to IC''', the substitution effect (As illegal music is an inferior good, the price consumption curves in both scenarios are no longer downwards-sloping as in Fig4, but flat (PPC'') and upwards-sloping (PPC''')). Oberholzer and
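The budget-constraint rotation described above can be sketched numerically. All figures here are hypothetical, chosen only to illustrate the geometry: a fall in the digital price leaves the CD intercept fixed while the digital intercept moves outwards.

```python
# Illustrative sketch (all prices and the budget are hypothetical):
# a fall in the price of digital music rotates the budget constraint
# outwards around the unchanged CD intercept, as in the E -> E' move.
income = 60.0          # music budget
p_cd = 15.0            # price of a CD
p_digital_old = 10.0   # digital price before the fall
p_digital_new = 5.0    # digital price after the fall

# Intercepts: maximum affordable quantity of each good alone
max_cds = income / p_cd
max_digital_before = income / p_digital_old
max_digital_after = income / p_digital_new

print(f"CD intercept stays at {max_cds:.0f} albums")
print(f"digital intercept rotates from {max_digital_before:.0f} "
      f"to {max_digital_after:.0f} albums")
```

Whether the consumer then buys fewer or the same number of CDs depends on the shape of the indifference curves, which is exactly the disagreement between the two studies cited.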
Estimates of transport distances based on typical distances from industrial areas to ports were also included as an assumption. Moreover, the cardboard assumed for the original product was omitted in the redesign, as its relative environmental impact was also quite high. The redesign assumes that instead of delivering these products unassembled, and therefore in cardboard boxes, the new product could instead be assembled by the manufacturers or dealers and delivered intact, so that there is no need for cardboard packaging. There are obvious financial implications both for the customer and the manufacturer or dealer, but these considerations are beyond the scope of the current analysis. Comparing the overall scores for the original and redesigned product, it can be seen from the tables that the redesigned bookcase is 59% more environmentally efficient. Moreover, the overall wood used in the new product is 51% of that used in the original product, which is a 49% reduction in material use. The product was redesigned with minimal impact upon the original design. The aesthetics are essentially the same, so that the functional unit for the second social purpose is met. Since the product was only optimised in terms of material use, we have also retained the functional unit for storage capacity. It is also important to mention that the redesign is theoretically valid, but more detailed analysis may reveal that, in order to retain the structural integrity of the shelves, for example, for a given material of specific Young's modulus and thickness only a certain amount of material can be removed - so that there is some threshold. This also applies to the main frame of the bookcase: in order to retain rigidity, only a certain amount of material can be removed for a specific material. 
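The material-use figures above follow from one another by simple arithmetic, which a short check makes explicit:

```python
# Check of the wood-use figures quoted above: if the redesign uses
# 51% of the original wood, the reduction in material use is 49%.
wood_fraction_remaining = 0.51
reduction = 1.0 - wood_fraction_remaining

print(f"reduction in wood use: {reduction:.0%}")  # prints "49%"
```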
The implication of this is that our product may be found to be less, or even more, efficient than the above estimates, although the theory still enables the environmental optimisation of a product to some degree. Other factors which will affect the accuracy of the assessment are the assumptions that were made prior to analysis. This exercise has demonstrated that, by using creativity, it is possible to redesign existing products and reduce their environmental impact substantially. The most obvious solutions are to use alternative (eco-friendlier) materials, to use less material, or to completely redesign the service while paying attention to the social purposes and functional units. Tools such as the Datschefski analysis enable crude estimates to be made very easily, which can then be analysed in more detail using Life Cycle Analysis. It can be difficult to escape making assumptions when using the LCA, but this is because it is a relatively new concept, and, as with all concepts, it may become more refined and an even more powerful tool in the future. In the meantime, the LCA is a useful tool which can assist lawmakers in formulating environmental legislation in order to help manufacturers analyse their processes and improve their products, and perhaps
Learners are required to respond to the content of the input through tasks which are communicative in nature, in order to induce noticing of the target form in the context of meaning-focused activities. Furthermore, focused communicative tasks elicit the production of a specific target feature and facilitate its incidental acquisition. The third type of form-focused instruction accounts for incidental focus-on-form, which can arise either because of a problem of communication or because of the occurrence of a form which is perceived as problematic. In these cases, the attention of the learner (or the teacher) switches from the meaning-focused communicative activity to features of the linguistic code. This process of noticing is triggered by the negative evidence (feedback) provided by teachers in response to learners' errors. In particular, it is implicit negative feedback, in the form of recasts or requests for clarification, which seems to enhance noticing and assist the acquisition process. As is clear from the brief description above, form-focused instruction plays an important role in second language learning and acquisition, especially from a pedagogic point of view. Indeed, studies on this topic can be very useful for the development of effective materials and support which will assist teachers in educational activities. In this last section of the essay, I shall describe my learning experience of the Persian language, trying to relate it to the theoretical framework presented above. The value of input, output and formal instruction will be evaluated, and the main reasons for failure in mastering the target language will be outlined. The learning experience took place over a period of ten weeks and was almost entirely based on self-study through a guided textbook and audio cassette. Three additional one-hour sessions of classroom practice with a native speaker were scheduled. 
The first issue I would like to concentrate on is the textbook, describing its overall organization and assessing its effectiveness in meeting the learner's requirements. The book is divided into sixteen units; each of them is focused on a different communicative task, which is introduced by an introductory dialogue available both as a text and on the tape. Four different kinds of input, underlining different aspects of the language, are available in each chapter: dialogues, linguistic information (phonetics, morphosyntax, and pragmatics), lists of words (vocabulary), and exercises. The dialogues, which appear at the beginning, in the middle and at the end of the unit, are especially useful for the learner and serve different functions. The first dialogue aims at introducing the topic and is an example of positive evidence that can be valuable for
This is not ideal, particularly in instances where the machine is only briefly required. With the exception of the third, then, these methods are incorporated into our solution. The possible methods of overcoming the problem of force and dimensions suggested above are simple solutions that do not involve technological complexity. The methods suggested are within the capacity of Roman technology, and it is known that two of these methods may have been used in some form or another. The first method, drawing wire in two or more stages, is standard practice today, and there is no reason to believe that Roman technologists did not also conceive of this idea. The second method, using extra human strength to produce the required leverage, is simply a matter of common sense. The third method, heating metal before working it, is known to have been used extensively by Roman blacksmiths. Indeed it forms the very basis of the concept of forging, a craft which predates the Romans. Having established the basic design details, a detailed illustration of the system can be made. Components of the system such as the rail, wire carriage-clamp and draw-plate require separate illustration, and these are also included. The function of the wire drawing machine is simple, requiring only the turning of the wheel by two human operators in a clockwise and anti-clockwise motion respectively. The ropes, which are connected to the shaft of the drum, translate the rotary motion of the drum into uniaxial horizontal motion. The ropes are in turn connected to the wire carriage, which is clamped to the wire to be drawn. As the drum is turned, the torque produced exerts a tensile force on the ropes, pulling the carriage and drawing the iron wire through the die. 
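The benefit of multi-stage drawing can be shown with an ideal-work estimate. Assuming (as a simplification, not a claim about Roman practice) a frictionless draw with no redundant work, the drawing stress per pass scales with the logarithm of the area reduction; splitting one heavy reduction into two passes lowers the stress each pass demands:

```python
import math

# Ideal-work sketch (assumes frictionless drawing, no redundant work):
# drawing stress per pass ~ Y * ln(A0/A1), with area proportional to
# diameter squared. Splitting 3 mm -> 1 mm into two passes lowers the
# stress required in any single pass.
def log_area_reduction(d0_mm, d1_mm):
    """ln(A0/A1) for round wire drawn from diameter d0 to d1."""
    return math.log((d0_mm / d1_mm) ** 2)

single_pass = log_area_reduction(3.0, 1.0)
two_stage = [log_area_reduction(3.0, 2.0), log_area_reduction(2.0, 1.0)]

print(f"single pass:  {single_pass:.2f} (times the flow stress Y)")
print(f"two passes:   {two_stage[0]:.2f} and {two_stage[1]:.2f}")
```

The total logarithmic reduction is the same either way; what changes is that no single pass approaches the heavy single-pass figure, which is precisely why staged drawing eases the work on the operators and the wire.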
Before operating the wheel, however, the wire must first be filed to a certain length (determined by the dimensions of the system components) and then placed through the
A number of big companies around the world, including McDonald's, gain cheap land or ingredients from developing countries and damage the environment. The industry still holds a strong position in the business world despite facing huge criticism. Finally, a deconstructive interpretation of the text is discussed. The text seems to conclude with a contradiction of the preferred reading. The text says It is fairly clear that the text presents harsh criticisms of the McDonald's business and of the social drawbacks caused by the industry overall. However, the sentence basically says that we should establish a better society by Here there seems to be a contradiction, because the text is supposed to be in a position opposed to the McDonald's industry, yet it also suggests cooperation with the enemy, McDonald's, at the same time. Therefore it seems somehow to undermine the preferred reading. To sum up, CDA is a way to consider the social situation while analysing texts. It makes me realise that texts are strongly associated with social and institutional contexts within society. CDA therefore enables readers to gain a deeper understanding and to raise questions about social ethics by finding contradictions in texts.
Biological control is an attractive alternative to agrochemicals, although it is generally impossible to affect one part of the ecosystem without having indirect ecological effects (Goettel and Hajek, 2001). Widespread usage of chemicals involves problems of undesirable ecological side effects. Agrochemicals cause many environmental hazards; for example, some chemical pesticides contaminate groundwater. Some chemical pesticides also enter food chains and consequently threaten human health and a wide range of organisms. Moreover, pesticides are not safe for the user spraying the chemical. Finally, chemical treatment is not always effective, and it represents a significant part of agricultural enterprise costs (Butt, 2001 and Strickland, 1960). It is obviously difficult, or rather impossible, to eradicate chemicals from pest management, but "pest mortality following the use of chemicals should be additive rather than substitutive in relation to natural causes of death. It is here that the importance of natural enemies becomes clear, because the destruction of a parasite could lead to enormous increase in pest numbers." (Strickland, 1960:11) Nowadays, the increasing concern over pesticide use has resulted in more environmentally friendly and sustainable agriculture. There are also globally accepted regulations imposing chemical restrictions or bans, especially in the developed nations, such as the methyl bromide ban. It is very encouraging that natural products that come from organic agriculture are becoming more and more desirable in the modern horticultural market. Most encouraging of all is that these products are able to obtain better prices on the market. This hopefully creates favourable conditions for the development of biological control methods (Whipps and Lumsden, 2001). Biological control has many advantages as a pest control method, particularly when compared with insecticides. 
One of the most important benefits is that biological control is an environmentally friendly method and does not introduce pollutants into the environment. As Kok (1999) points out, biological control should be implemented whenever possible because it does not pollute the environment. We mentioned earlier several problems which are caused by insecticides and are related to environmental pollution. Another great advantage of this method is its selectivity. In this way, there is a limited danger of damage to non-target plant species. Weeden and Shelton (2005) underline that biological control does not create new problems, as conventional pesticides do. According to van Emden (2004:149), "this does not mean that side effects can be totally excluded, although they have been very rare in the history of biological control". Selectivity is the most important factor regarding the balance of agricultural ecosystems, because great damage to non-target species
Several bean pods were prepared and kept at five different temperatures: 14, 18, 22, 26, and 30 °C. The measurement of the total plant and pod weights, shoot length and leaf area took place after 8 weeks. The numbers of leaves, flowers and pods were counted. The dried plant weight was measured after the 1-week drying period. The growth of a plant is regulated step by step according to the developmental stages of the organism. The cotyledons, which first emerged from the seed and started autotrophic metabolism, grew in week 1 but degenerated after the development of the first true leaves in week 2 and completely disappeared in week 5 (Fig 1). The negative figures for Relative Growth Rate (Fig 8) in the first 2 weeks were due to the deterioration of the cotyledons, while the actual growth of the vegetative organs, i.e. leaves and shoots, was high. The shoot had its highest growth rate at week 1 (Fig 7). Although it gradually decreased with time, the growth of the shoots was relatively constant. The shoot length reached its maximum at week 7, when the plants were fully grown (Fig 2). The higher rate of shoot growth during the final 2 weeks occurred mainly in the apical and axillary buds, where the fruits were to be formed. Thus, in the first vegetative growth period, the growing parts of the shoots were concentrated in the vertical and horizontal directions, to increase the amount of incoming radiation to the leaves and to accelerate the assimilation of sources, and later shifted to the parts responsible for the production of harvestable components. The growth of leaves was closely related to that of shoots. The growth rate continuously increased over time and reached its maximum at week 7 (Fig 7). The Leaf Weight Ratio was very high in weeks 2 to 4, when assimilate partitioning was mainly to the leaves (Fig 10). With an increase in the photosynthetic rate of the leaves, growth in leaf size was stimulated. 
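The Relative Growth Rate discussed above is conventionally computed from successive dry weights as RGR = (ln W2 − ln W1)/(t2 − t1), which makes clear why the early values are negative. A minimal sketch (the weights below are hypothetical, not the experiment's data):

```python
import math

# Relative growth rate from dry weights w1 -> w2 over interval dt:
# RGR = (ln w2 - ln w1) / dt. Hypothetical weights for illustration.
def rgr(w1, w2, dt):
    """Relative growth rate per unit time."""
    return (math.log(w2) - math.log(w1)) / dt

# Net loss of dry weight (e.g. cotyledon deterioration) gives RGR < 0,
# as observed in the first two weeks; net growth gives RGR > 0.
print(f"shrinking plant: RGR = {rgr(1.0, 0.8, 1.0):+.3f} per week")
print(f"growing plant:   RGR = {rgr(1.0, 1.5, 1.0):+.3f} per week")
```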
Leaf Area Ratio, which corresponds to the capacity of leaves to catch the incoming radiation, showed the highest value
The best known examples of these finds include the Skuldelev or Ladby ships from Denmark, but also the Norwegian Oseberg, Gokstad and Tune ships or the Swedish They represent a wide range of types, including warships (Oseberg, Gokstad and Ladby), cargo ships ( Archaeological evidence suggests that by the time of the Viking invasions, the shipbuilding traditions in Britain were also already well developed. Since we still know relatively little about them, the discovery of the Graveney boat contributed largely to our understanding of the craft of shipbuilding. In terms of its function, it can be paralleled with the Skuldelev cargo ships. Greenhill (1995) suggests that it was used in the early tenth century AD to sail between the creeks and rivers of Southern England and probably to the Low Countries. Another interesting, although not so obvious, example is the Skuldelev 2 warship, which was built in AD 1042 in Dublin and sailed, possibly as part of a diplomatic voyage, to Denmark (Crumlin-Pedersen and Olsen 2002). The northern part of what is today Germany was inhabited by both Germanic and Slavonic tribes in the Viking Age. Therefore, these two ethnic groups developed somewhat similar shipbuilding traditions. By the end of the analyzed period, their achievements were only slightly behind those of Scandinavia. Their ships were also clinker-built, predominantly cargo or multi-function vessels. Extensive research has been done at the trading
The product of this experiment, an azo-coupled product, is of a type commonly used as dyes, as the extended conjugation of the Precautions to be taken: appropriate lab wear to be worn at all times, including lab coat, goggles and gloves, and hair tied back. All work is to be carried out in the fume hood. To begin, anthranilic acid (1.7 g) and distilled water (25 ml) were mixed in a conical flask. Concentrated sulphuric acid (4 ml) was then added whilst swirling the solution. The mixture was warmed until the amine dissolved, and then cooled on ice until below 10 °C. Following this, sodium nitrite (0.9 g) and distilled water (10 ml) were mixed in a conical flask and cooled on ice until below 10 °C. The sodium nitrite solution was then added to the amine solution, slowly and with swirling, ensuring the solution temperature did not exceed 10 °C. Next, phenol (1.2 g, 12.7 mmol) and sodium hydroxide solution (2.5 M, 25 ml) were heated together in a conical flask until the phenol dissolved. This solution was then cooled on ice until below 10 °C. The mixture was then acidified with hydrochloric acid (4 M) to pH 1. This solution was filtered at the pump and washed with ice-cooled water (3 × 10 ml), leaving the crude product to dry under suction for 5 minutes. The product was then transferred to a conical flask and ice-cold methanol (15 ml) was added; the mixture was swirled and filtered at the pump, and left to dry under suction for a further 5 minutes. The yield and melting point were then taken and the product recrystallised from methanol. The purified product was weighed, a melting point taken and a TLC plate run (eluting the plate with a 1:1 toluene:ethyl acetate mix). TLC analysis - Rf values: 0.8 - product; our TLC plate was inconclusive and did not seem to develop properly. 
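As a rough check on the quantities quoted above (the molar masses are standard textbook values, assumed here rather than taken from the script), the two main reagents are close to equimolar:

```python
# Mole check for the reagent quantities in the procedure above.
# Molar masses are standard values (assumed, not from the script).
M_ANTHRANILIC_ACID = 137.14  # g/mol
M_PHENOL = 94.11             # g/mol

mmol_amine = 1.7 / M_ANTHRANILIC_ACID * 1000   # ~12.4 mmol
mmol_phenol = 1.2 / M_PHENOL * 1000            # ~12.8 mmol

print(f"anthranilic acid: {mmol_amine:.1f} mmol")
print(f"phenol:           {mmol_phenol:.1f} mmol")
```

This also confirms that the phenol figure is on the millimole scale (12.7 mol of phenol would be over a kilogram of material).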
Reaction mechanism to rationalise the formation of the azo dye: to begin, substitution of hydrogen by NO2 through nitration (reaction with concentrated sulphuric acid and nitric acid at a temperature below 55 °C). This is followed by reduction with H2/Ni; nitro groups are deactivating and meta-directing. After this, the nitrobenzene is reacted with concentrated sulphuric acid under reflux. Finally, thionyl chloride (SOCl2) is reacted with the reaction mixture, then (NH4)2CO3, an amine source.
Researchers provide access to external knowledge and technical options, help in the experimentation by encouraging the farmers to take control of it and learn from it, and provide them with tools for analysing and systematising their findings (Hangmann et al, 1998b). There is evidence that decentralised participatory research and extension experiences show higher levels of adoption of agricultural and natural resource management technologies (Dalton et al, 2005; Axinn, 1997; Garforth and Harford, 1997), because the technologies developed are relevant and appropriate to the real needs of the population. But participatory approaches have other positive outcomes that reach beyond the mere spreading of agricultural technologies (Ashby, 1987; Braun et al, 2000; Scoones and Thompson, 1994; Okali et al, 1994; Dalton et al, 2005; Garforth and Harford, 1997; Hangmann, 1998b). The relationship between farmers, development professionals and scientists is improved: local traditional knowledge and capacities gain in legitimacy in the eyes of professionals; farmers' capacity to raise demands on the formal systems is increased, and thus the feedback to scientists for the development of useful technologies is accelerated. The intervention is more legitimate: the project team is accountable not only to the intervening organisation or the donors, but also directly to farmers. Therefore farmers more easily accept the intervention outcomes as theirs, so that local support and direct involvement of the beneficiaries increase the sustainability of the project. There is more equity in access to the means of production by different groups or individuals in the community, because participation implies the negotiation of interests among these groups and provides space for the poor and marginalised in collective decision-making. Poverty is reduced because most of the promoted technologies are labour-intensive, providing employment for the landless and near-landless.
Food security and children's nutrition are enhanced due to increased production security. Natural resources and the environment are managed in a more sustainable way. There are some aspects, fundamental to the definitive success of participatory research and extension, which most experiences presented in the literature still
It should be recognised that cooperation within APEC does not produce legally binding agreements. Advocates of the EU emphasise the notion of 'pooled sovereignty'. Sovereignty is shared in that decision-making responsibility is now spread among governments, whilst on certain issues individual states can no longer veto proposals on the basis of their sole national interest, as Germain argues. Thus it seems many states have been persuaded of the benefits of regionalism despite accusations that such processes result in a loss of sovereignty. The questions of whether regionalism is a 'good thing' or whether sovereignty is to be desired are, as Smith describes, essentially normative questions about the way things should be. Yet there seems to be a trend towards regionalism within areas of the globe which have up until now sufficed with regionalization. Some argue that within East Asia an attitude of interdependence is emerging which could well provide the basis for further regionalism. States have encountered two main problems with the traditional concept of sovereignty. Moral concerns have arisen from its capacity to block interference in the affairs of other states, e.g. concerning human rights violations, whilst theoretically problems have stemmed from the fact that the notion of an independent sovereign state may no longer be of value in an increasingly interdependent world which requires cooperation on political, economic and military grounds. The forces of globalisation and regionalism may require the concept of sovereignty to be redefined. As we have seen, supporters of the EU emphasise the notion of 'pooled sovereignty'. Perhaps it should come to refer to a state's responsibility to maximise its economic and political status and significance within a demanding new international arena. Strict adherence to
Doctors can be involved in health education programmes which focus on teaching about sensible drinking, as this might target the aspect of drinking behaviour that is closely linked to deprivation. Doctors have three main roles in reducing health inequalities: working for social change, profiling and monitoring, and preventing and alleviating. Working for social change is the most important role of the doctor. It involves working for health-enhancing policies developing out of strong profiling and monitoring activities. This can be achieved by being an informed, knowledgeable and powerful practitioner who can advocate for a reduction in health inequalities by working with professional organisations and supporting or working with anti-poverty groups. Other dimensions to this role include creating supportive environments, building healthy public policies, strengthening community action, developing personal skills, patient empowerment and minimising inequity and inequality in access to health care services. Profiling and monitoring involves collecting and analysing information to identify health needs and aid the targeting and development of services. This allows doctors to identify the extent and nature of health and social problems, identify and target unmet health needs, and use the information obtained to develop services that are effective and responsive to health care needs. The preventing and alleviating role involves the provision of services that prevent ill health and help patients to cope with the health effects of social disadvantage.
It requires recognition that poor health is the outcome of exposure to health hazards across the lifecourse, and the adoption of a non-victim-blaming approach. This role has four key components: maximising access to health care, income, and material and social resources for health, and minimising the costs of health promotion. It is important that doctors accept that health inequalities do exist, as this is often forgotten by doctors, who belong to the highest social class. Doctors may experience problems such as time constraints, language barriers, patients' willingness to answer questions and differences in patients' preferences. Health-related behaviour can be difficult to change unless the patient is willing, as behaviour is guided by attitudes, beliefs and values. Having only a limited role to play in formulating government policies and reducing poverty and income inequalities, doctors alone cannot tackle health inequalities, but they can make significant contributions to reducing them by working in multidisciplinary teams, helping to implement national and local policies, and encouraging healthy behaviour.
Thus, this essay will focus on the question of what the TOT experience tells us about the way words may be retrieved in the brain and about speech production. First of all, the TOT experience shows that we seem to have different dimensions of word knowledge in our mental lexicon: whereas the retrieval of some dimensions may be blocked in a particular situation, others may be available. A very good example of this is the TOT experiment by Brown and McNeill (1966), which is described in Jean Aitchison. In that experiment, definitions of relatively uncommon words were read out to students, such as the definition "navigational instrument used in measuring angular distances, esp. the altitude of sun, moon and stars at sea" (Aitchison, 2003: 24). Though some of the students did not quite remember the word "sextant", they could answer many or even all of the other questions described above. Thus, they could name words of similar meaning, such as "astrolabe" and "compass", and of similar sound, such as "secant", "sextet" and "sexton"; they could determine the word class; and their guesses about the rhythm and the number of syllables were also correct most of the time. This shows that although the phonological knowledge about the word could not be retrieved completely, other dimensions such as syntactic, morphological or semantic aspects of the word were not affected. In short, this finding supports the view of multidimensional word storage in the mental lexicon. With respect to these different dimensions, TOT experiences further support the hypothesis that a word in the mental lexicon is like a coin, with its lemma (the word meaning and word class, and thus the semantic-syntactic dimension) on one side and the word form (the phonological dimension) on the other, for two reasons.
First, the fact that in TOT guesses, the word class is mainly maintained, with verbs and nouns retaining their word class 90 per cent of the time and adjectives about 60 per cent according to one researcher, supports the view that the word meaning and word class have a strong relationship and thus form the lemma. (Aitchison, 2003: 104) Second, the somewhat looser relationship between word form and meaning can be seen from the fact that in TOTs, people seem to have the concept or the idea of what they want to say", "label": 0 }, { "main_document": "services online. Indeed, Reichheld and Schefter (2000) believe that in e-commerce environment customers acknowledge trust to be more important than price. Furthermore, nowadays the powerful features of new technologies, such as virtual reality or intelligent agents, can be effectively applied to provide an e-business environment enabling the formation of trust (Papadopoulou et al., 2001). Development of a strong brand is of a great importance in a today's highly competitive e-business environment. Tapscott (2000) argue that the brand is no longer an image created through one-to-many communications. Marketers should start to think of communications as a two-way process and of the brand as a measure of relationship capital. Brands differentiate companies from their competitors and can increase trust between the company and the customer (Chu , 2005). Customer retention, based on trust, can be turned into loyalty. According to Reichheld and Schefter (2000) the unique economics of e-commerce make customer loyalty more significant than ever. Customer loyalty is no longer just one of the many ways to increase profits but is essential for company's survival. The general considerations and concerns about payment, security and privacy have been brought to the forefront with trading on the Internet. 
In order to make the Internet a safe place where customers can exchange accurate information and conduct transactions, self-regulated compliance programmes that award seals to privacy-friendly web sites have been introduced. Nakra (2001) believes that privacy policies are a necessity in an e-commerce environment and should offer the customer a clear explanation of what information is collected, for what purpose it is used, and with whom it is shared. An "opt out" option should also be available to allow visitors to forbid the use of their personal data for marketing purposes. Furthermore, customers should have the option to choose whether or not to receive cookies (Nakra, 2001). Security of transactions is critical for consumer satisfaction. According to Marcussen (1999), actual and potential Internet buyers are very concerned about the potential misuse of their credit card details. However, technology solutions, such as advanced encryption techniques and digital certificates, are gradually making the Internet safer and increasing customer confidence. Turban and King (2003) point out that special protocols, such as Secure Sockets Layer (SSL), are a necessity for companies operating in an e-commerce environment to secure customers' e-payments and thus enhance their satisfaction. The customers selected for the purpose of this assignment are going on a leisure trip to Iceland, where they will spend five days, departing on 14th May 2007. They will fly from Manchester (UK) to Reykjavik with Icelandair and will be accommodated in a three-star hotel, Fosshotel Lind, with breakfast provided. The main purpose of their travel is a special occasion, the 60. Furthermore, they are interested in health travelling, which encompasses gentle walking trails and spas in Iceland. In addition, they are very keen on the country's natural environment and are therefore going to choose their activities accordingly.
They intend to visit some of Iceland's most impressive sights, such as Gullfoss, Geysir and the Blue Lagoon. For more detail on the travel
As a group, we decided to keep a healthy stock of 'accessories' because of their low cost. Despite being cheap, this component was equally important for production. Increased usage of this component in the production of aerials for the XL model in level 3 was another reason to keep a high inventory of it in the later weeks. Keeping a stock of 'accessories' did not put much burden on our finances and ensured availability at all times. We decided to reduce the stock of the 'main body' and 'aerial', as these were expensive items to keep in stock and would also increase our cost of holding inventory. Shorter delivery time was another reason to reduce the stock of these items. The only time we would increase the stock of these items was a week before an expected rise in demand, to enable us to increase our finished goods stock. Looking at last year's trend, it was apparent that every fourth week there is a rise in demand, and the peak period comes gradually, with a modest peak in the third week followed by the highest peak in the fourth week. In order to meet this demand
They both used the Bible richly as a resource for their novels, and I will discuss their use of it and their alternative humanism. George Eliot's (born Mary Ann Evans) parents were members of the Anglican Church and sent her to schools which were strongly influenced by the Evangelical teachings of the day. Eliot became especially involved in the church at the age of 12, when she began teaching in Sunday school, but it was not until the age of 15 that she became convinced of Evangelical Christianity. Nineteenth-century Evangelical teachings emphasised the doctrines of eternal salvation and judgement, original sin, justification through faith alone, and activism. Yet there were many ambiguities, shown by new emphases later in the century, as to how one is assured of salvation. People felt great unrest because they were left unaware of whether or not actions on earth determined one's salvation, because they did not have a balanced biblical understanding. Amidst the confusion, Eliot felt great relief in dropping her faith. Her anxious and ascetic lifestyle did not provide much resistance to the thinking that she became aware of when introduced to the writings of Charles Hennell. Eliot was particularly convinced of the inadequacies of the Bible when she translated Strauss. Strauss believed the gospels were a "literary construction". Feuerbach claimed that the Christian God was simply a product of man seeking to imagine perfection. He believed that the greatest and most important feeling in nature was the love between fellow human beings.
Eliot felt the moral teachings of Jesus were beneficial but concluded that the Bible itself consisted of "mingled truth and fiction". She was also hugely influenced by the 'positive' philosophies of Auguste Comte. Despite Eliot's changing thinking, she presented a slightly ambiguous figure, as she remained much attached to the Bible. Hennell had claimed Christianity to be "the purest form yet existing of natural religion". Her novels in fact became increasingly more religious in content and filled with Biblical allusion and imagery. Charles Hennell (1809-1850) was a Unitarian; David Friedrich Strauss (1808-74) a German theologian; Ludwig Andreas Feuerbach (1804-72) a German philosopher; Auguste Comte (1798-1857) a French philosopher and social theorist.
Although technical competence is an important factor, it should not be forgotten that the ability to build relationships with host country nationals greatly increases the probability of a successful assignment. Moreover, the tendency for companies to rely on the technical competencies approach is seen as a determinant of expatriate failure rather than success (Reuben 1989). That this factor should not be underestimated is shown by statistics on expatriate failure, which indicate that 16-40 per cent of expatriates return prematurely from their assignments (Kaye and Taylor 1997). Furthermore, these failures have a tremendous impact on a company's financial situation. Authors such as Ashamalla (1998) estimate that the costs of expatriate failure can reach a staggering $1 million. It is therefore crucial to select expatriates who are better suited to cross-cultural situations, rather than depending on technical competencies. Women managers in particular are regarded in the literature as being more flexible and inclusive managers than their male counterparts (Guthrie and Ash 2003). Westwood and Leung (1994), for example, reported that in their qualitative results a number of female expatriate respondents perceived that women benefited from being more sensitive, interpersonally aware, empathetic and sociable than men. Additionally, Selmer and Leung (2003) conducted thorough research comparing international Western female and male business expatriates in Hong Kong and found that female expatriates have higher interaction and work adjustment abilities than their male counterparts. Therefore, in a cultural context where business can be promoted through interpersonal interactions, like the hospitality industry, female expatriates may have an edge over their male counterparts. Furthermore, respondents in a survey conducted by Chung-Herrera et al.
(2003) have identified flexibility, adaptability, interpersonal skills, effective communication abilities and commitment to quality as some of the crucial competencies required of future hospitality leaders. It can therefore be argued that female expatriates not only have an edge over their male counterparts, but also seem to have the necessary skills and abilities to perform similarly, if not better, on international assignments compared with male candidates. However, it is only by recognising and appreciating cross-cultural differences across countries that the competencies of female expatriates will be truly valued. Companies that continue to believe
The diagrams shown in Baker (2000) are very good, and I will use diagrams like these in my later lessons because they will be a great help to some students. For me it was really helpful to have something I could look at in which I saw all the new forms and changes. The idea of presenting grammar indirectly through pictures or real objects is also an interesting approach (Baker, 2000). Sometimes it might be good to teach grammar indirectly (presentation) or directly (self-directed discovery), but I, as a student, think that the direct approach or guided discovery (Scrivener, 1997) might be the best for me, because I like to be guided through new grammar structures. As Gollin (1998) stated, for a teacher it may be best to choose the approach according to the available time or the character of the class. When I think about teaching grammar, my greatest worry is that I cannot make it clear enough to my students. Probably this worries me so much because I so often had problems with it myself, and many of my teachers were just not able to explain it in a way I could understand. Another thing which worries me is the fact that some students know the rules by heart but are unable to use them in freer sentence structures or in conversation. I really hope that I will find a way to teach them how to use the rules in "real life". In my last portfolio I wrote that I really enjoyed this course so far. This has not changed at all! It is the most interesting course I have this semester (although
The velocity of the waves on a thin wire was determined. Measurements of the fundamental frequency for increasing lengths of a thin wire showed a proportional relationship between the fundamental frequency and the reciprocal of length, as predicted by theory. Measurements of the fundamental frequency for increasing applied tensions were made for both wires, and the velocity of the standing waves was found to be proportional to the square root of the tension. The masses per unit length were found to be 2.61. Measurement of the harmonic frequencies of a thick wire showed a deviation from the simple relationship predicted by basic theory, indicating that the elastic force was significant. Young's modulus for the thick wire was calculated to be 6.07. A wave is any form of periodic disturbance of a medium that changes in form as time progresses. The medium itself does not travel on the macroscopic scale, but undergoes small-scale vibrations and displacements from the normal position. These waves may be either longitudinal, along the direction of wave propagation, or transverse, at right angles to the direction of wave propagation, and the displacement of any point on the wave from its equilibrium position can be considered to vary simple-harmonically with time. In this experiment we are primarily concerned with the common case of transverse waves on a taut string, although the case of longitudinal waves in an elastic rod will also be considered. Figure 1 shows the basic case of a sinusoidal transverse wave on an infinitely long, taut string, with several common variables indicated. From these variables a general expression for the wave can be derived, which may alternatively be written in complex notation. The longitudinal velocity of the wave is given by the product of frequency and wavelength. In this experiment we are also interested in the transverse, or phase, velocity of the wave. This is the velocity of the transverse displacement of each point on the string from its equilibrium position.
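The standard relations this summary appeals to can be sketched as follows. These are reconstructed textbook results rather than formulas copied from the report, with the usual symbols: amplitude a, wavenumber k, angular frequency ω, frequency f, wavelength λ, tension T, mass per unit length m.

```latex
% Sinusoidal transverse wave on a long taut string,
% in real or complex notation:
y(x,t) = a \sin(kx - \omega t)
\qquad \text{or} \qquad
y(x,t) = a\, e^{i(kx - \omega t)}

% Phase (propagation) velocity of the wave:
v = \frac{\omega}{k} = f\lambda = \sqrt{\frac{T}{m}}

% Fundamental frequency of a string of length L fixed at both ends:
f_1 = \frac{v}{2L} = \frac{1}{2L}\sqrt{\frac{T}{m}}
```

The last expression shows the reported proportionalities directly: at fixed tension the fundamental frequency varies as 1/L, and at fixed mass per unit length the wave velocity varies as the square root of the tension.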
If the string experiences an applied tension T and has a mass per unit length m, then the transverse velocity of the wave (assuming that the amplitude of the wave is very much less than the wavelength) is given by equation [4]. This can be derived from first principles by considering the forces acting on an infinitesimal section of the string, but such a derivation is too long to present here (for details of the derivation, refer to ). The important point is that equation [4] is a consequence of the wave equation. The form of a wave can be altered through interactions of the medium with boundaries that impose certain conditions upon the form of the wave at that point. An example is the case of a fixed end of string, towards which a wave pulse is travelling. The displacement of the string at the fixed end must always be zero, and hence when a wave pulse arrives at the fixed end it exerts a force on the
Many live in shantytowns without running water or sewage systems (Korten 2001, 132). Since investors are exempted from property taxes on their factories, public infrastructure is also inadequate. To complete the pattern of exploitation, the lack of strict environmental regulation has given room to an increase in environmentally damaging behaviour. The ILO has received many complaints about improper handling of hazardous materials by maquiladoras (2003). Firms are taking advantage of the loose policies and are not careful when discarding industrial waste. The case for economic development brought by the maquiladoras is not compelling. Despite the relief given in the past by providing employment and foreign exchange for the country, there are signs that these improvements are not long-lasting. The impact on the balance of payments has not proved as good as predicted, and the country still has a huge debt. Furthermore, Mexico is now dependent on the maquiladoras: in 2000 the sector was responsible for almost 15% of the country's total GDP, and in 2002 it represented 80% of the country's export earnings. Likewise, the number of jobs reached 1.285 million in 2000, but has been decreasing since then, mainly because of competition from China. In 2002, about 200,000 posts had already disappeared (ILO 2003). Taken together, these results indicate that the experience of EPZs in Mexico has, at best, provided short-term foreign exchange relief during times of crisis and, at worst, increased dependency on international capital at the cost of workers' rights, with hardly any spillover effect on the economy. The excuse commonly heard to justify the kind of exploitation seen in the maquiladoras in Mexico and in other EPZs around the world is that people accept substandard wages, working conditions and environmental regulations because they would be worse off without them.
However, we claim that this situation has remained viable only because the local elites in developing countries have been successful in promoting their interests through the subjugation of the impoverished layers of society. As a result, even in countries where the adoption of EPZs is considered", "label": 0 }, { "main_document": "The farm is not organic but integrated. The whole area is divided into two sites. The farm itself is a horticultural unit, growing kohlrabi, cauliflowers, onions, spinach, Sweet Williams and coriander. The cultivation system practised is the bed system. In terms of the soil, the farm is situated on a reclaimed gravel-mining site; the land has been converted from gravel and peat workings since 1978. The farm supplies wholesale markets and restaurants without dealing with supermarkets. The farm also has a machinery shed and a pack house available. At the end of each field there are ditches for the accumulation of drainage water. The ditches remain wet for six months, are kept clear, and function as corridors for promoting wildlife, since they bring beneficials closer to the field. In particular, the ditches provide a wet habitat for amphibians and other organisms that live in water. The sides of the ditches are used as beetle banks. Within the farm there is also a twenty-year-old lake, which is a wildlife habitat for insects and predators. The banks of the lake also provide a wildlife habitat, since they have holes where parasitic wasps nest. Being aware of the importance of the lake as a wildlife habitat, the owners of the farm have put protection all around it to prevent enemies from destroying the beneficials. Apart from the ditches and the lake, there are also hedges on the farm as a source of biodiversity. The hedges consist of native deciduous species (e.g. blackthorn, hawthorn) and act as both wildlife habitat and windbreak. The way they manage the hedges is critical.
The hedges are trimmed every year into an A-shape, allowing light to reach the bottom of the hedge. If the hedges were trimmed into a V-shape, by contrast, light could not penetrate to the bottom, which may result in die-back there. The time of trimming is also crucial: it takes place in February, when there are no nests to destroy. Another policy of the farm that promotes biodiversity is the way crop residues are managed. For example, the kohlrabi crop has been left to rot down in order to attract over-wintering pests, parasites, predators and birds to the farm, since it serves as a feed source for all these organisms. Not only crop residues but also a cut-flower crop (Sweet Williams) is used on the farm as a feed source for increasing biodiversity. Like the kohlrabi residues, the Sweet Williams constitute a feed source attracting both insects and birds. Finally, the nests that exist in several places on the farm are marked so as to avoid damaging them whenever cultivation and spraying take place. The whole farm has been marked out in 25 m squares, and soil analysis is carried out individually for each square. Obviously, this is a much more precise and intelligent system than the one used in the organic farm. This is due to the fact that fertility and especially the basic", "label": 0 }, { "main_document": "males who are capable of working the fields and more dependent females. The larger proportion of men than women in the city means there is a higher demand for prostitution, and thus the spread of sexually transmitted diseases such as AIDS. Housing provision also poses a major problem in developing cities. As people migrate to the city in search of work, it is probable that they will have little money and nowhere to live. In Lima, many of the new arrivals will settle near where they hope to find work or where they arrive, such as Port Callao.
The work that they will do will most probably be informal and could be dangerous. The newer squatter settlements tend to be poorer such as Prima de Ernero which has no waste or sewage disposal. This is a big leap from the area of Mira Flores which is a much more affluent and high class area, protected from the slums by high walls and gates. Such gated communities are not uncommon in a lot of developing countries (especially South Africa) and cities where the richer minority protect themselves from those who they perceive as poorer and dangerous. As more and more squatters arrive at the cities, urban sprawl becomes a real problem. In Bangalore, urban sprawl was combated by setting aside land as a designated green belt. This led to pressure on housing within the green belt causing infill and the building of tower blocks. When the National Games Village was built in 1996 to provide accommodation for athletes, the slum dwellers were forced to vacate the land where the development was taking place and their homes were burned down. It is undoubted that globalisation and the growth of trans-national and multi-national corporations (TNC's and MNC's) has had an effect on cities in LEDC's. The large gap in wealth between developed countries means that it is cheaper to produce products in LEDC's. The competition from other TNC's and MNC's means that companies are always trying to find ways of cutting costs. One of the many ways is to find cheaper labour which leads to the exploitation of workers. The TNC's can pick and choose where they want to invest, and so many governments in LEDC's will offer them ways to cut their costs in order to attract investment. Turning a blind eye to unsafe or poor working conditions is one way. The consequences of this were demonstrated in Bhopal in India when a Union Carbide factory spewed out poisonous gas killing thousands of people. 
Looking at the relationship between globalisation and inequalities in this way would imply that the TNCs and MNCs are completely to blame. However, the UN-HABITAT (2003) report suggests that it is not the companies that are to blame. \"It is not globalisation, per se, that has caused countries and cities to abandon policies aimed at redistributing wealth to the benefit of the majority of their citizens. Rather, it is the perception of governments that their countries must be competitive in the world economy that has led to this policy shift\".", "label": 1 }, { "main_document": "load however small it is. The running light test does not give true no-load results, because there is a slight load from the measuring equipment, for instance the dynamometer. Graph 2 shows the line current/speed curve. The theoretical starting current matches the measured starting current. Generally, the starting current is 5 to 10 times higher than the full-load current; in this case it is just about five times higher. This over-current, even though it lasts only a few seconds, can be a problem. The solution to this is to use a starter or a wound-rotor motor. The measured results are close to the theoretical, showing that speed decreases with current. Graph 3 shows the efficiency/speed curve. The motor is most efficient (68%) at 1410 rpm. Beyond this speed, efficiency drops off rapidly to zero at 1500 rpm. The measured values are quite close to the theoretical; any differences between actual and theoretical results are due to friction within the motor.", "label": 1 }, { "main_document": "and 1930s is thus marked by a fresh interest in Aztec art and architecture, and a celebration of the 'Indian race' can be found in nationalistic essays. Gilbert M. Joseph and Timothy J. Henderson (eds.), Alan Knight, 'Popular Culture and the Revolutionary State in Mexico, 1910-1940', Ibid, p. 395. Alexander S.
Dawson, 'From Models for the Nation to Model Citizens: By the thirties, a new political machine, the National Revolutionary Party, had been put in place and the work of myth-making of Mexican heroes of the revolution started. This period has been marked the post-revolutionary period, as legitimising the state, as well as instilling a paradoxical blend of conservative revolutionary sentiments in Mexicans. Knight, 'Popular Culture and the Revolutionary State', p. 393. Martin, Rowe and Schelling, The depiction of a 'native woman, wearing traditional costume, standing before the Aztec calendar stone' was never circulated, and instead the images of Zaragoza and Madero were preferred. They were, in fact, 'rational political actors with modern sensibilities'. Dawson, ' Ibid, p. 285. Ibid, p. 294. Joseph M. Galloy, 'Symbols of Identity and Nationalism in Mexican and Central-American Currency', Knight, 'Popular Culture and the Revolutionary State', p. 395. Dawson, ' Martin, Another aspect of this They are transformed to be used on mass scale, into three-minute songs to be played on the radio Agust The following is an example of a corrido describing the death of Francisco Villa. Rowe and Schelling, Armando de Maria y Campos, Yet another way in which the revolutionaries were eternalised and epitomised as heroes of the Mexican nation was through the paintings of Diego Rivera, Jose Orozco and David Alfaro Siqueiros. Supported and often commissioned by the government, these painters fulfilled 'the visual component of [the] need to create that Mexican citizen necessary for the survival of the post-Revolutionary state'. Martin, Anthony W. Lee, 'Mural Painting and Social Revolution in Mexico: 1920-1940, Art of the New Order - Review', Ibid. In terms of literature, the 1930s saw a continuation of the style employed in the twenties. Epic works with 'the man on horseback', Where Guzm His novel Martin, Ibid, p. 45. 
There were, of course, also many writers opposed to this mythical image. In fact, the Mexican revolution brought no generation of poets of any significance; most of them were to be sought with the defeated reactionaries. The following extract comes from a Gonzales and Treece, Franco, Anonynous, 'The Ballad of Valent Arguably, illiteracy rates stood in the way of effective nation-building through literature. Some have thus asserted that 'it [was] film, not writing, which create[d] [...] the 'imagined community' of the nation'. Thus when the Soviet filmmaker Eisenstein, visiting Mexico, planned to make a film on the history of Mexico, the government withdrew their subsidies for fear that such a film might cause an outrage amongst the oligarchic King, Ibid, p. 42. Ibid, p. 42. Ibid, p. 44. In the aftermath of the revolutionary decade (1910-1920) in Mexico, the new authorities then energetically set about constructing the archetype of a new \"ideal Mexican\". This was based upon a renewed enthusiasm", "label": 0 }, { "main_document": "Over the last years, noticeable changes in the climate have been observed on the Earth. The term 'climate change' can be defined as 'the long-term fluctuations in temperature, precipitation, wind and all other aspects of the Earth's climate'. The terms 'global warming' or 'greenhouse effect' can also be used instead of climate change (CORIS glossary, 2003). It is estimated that in the future there will be significant changes in terms of the climate. The main cause for all these changes is the emission of greenhouse gases in the atmosphere including carbon dioxide (CO In reality, these gases isolate the Earth from the sun's heat and therefore help in order to have a stable climate. Without the presence of the gases the heat that Earth absorbs from the sun during the day would not be trapped on the Earth. Instead of that it would escape. 
On the other hand, when the atmosphere contains too much of these gases, excess heat is trapped, causing warming (Environment Waikato Regional Council, 2003). There are some indications that the climate in the UK is going to change in the future. The latest scenarios for the future UK climate were developed by the Department for Environment, Food and Rural Affairs (DEFRA) in April 2002 and are known as the UKCIP02 scenarios. According to these scenarios, the UK climate is likely to become warmer. By the 2080s, the average annual temperature in the UK may be 2 The warming will be greater in the south and east than in the north and west, and greater in summer and autumn than in winter and spring. High temperatures during summer will become more frequent, while very low temperatures during winter will be rare. In addition, winters are going to become wetter while summers become drier; these changes will be more noticeable in the south and east of the UK. Soil moisture during summer may be reduced by 40% or more by the 2080s, and snowfall will also be reduced. Apart from these changes, there are several predictions that sea level will probably rise and that extreme sea levels are likely to become more frequent (UKCIP, 2003). The predictions for the coming decades refer to the rise in sea level as well as to the greater frequency of extreme sea levels. The harmful effects of sea-level rise on the horticultural field derive from the increased number of floods that are going to take place. Due to the floods, several areas that were previously appropriate for the cultivation of various crops will be waterlogged and therefore unable to support any crop. Apart from the floods, sea-level rise also causes salinization of the soil, which is extremely harmful for the crops.
When there is too much salt in the soil, it is difficult for the plants to take up water. (USDA Natural Resources Conservation Service, 1998) Regarding the changes in temperature, the weather in the", "label": 0 }, { "main_document": "shift the AD curve to the right, when the LM curve is flat no shift occurs and the economy remains in a slump. As can be seen from the graph below, when AD curve is vertical, shift in the Aggregate Supply curve does not change the equilibrium outcome. Paul Krugman:Ibid. P.12 Therefore it is the demand side policies that can affect and bring the economy out of the slowdown. Money supply as such, however, is not sufficient to move the AD curve, when the economy is in a liquidity trap, as was the case in Japan from 1996 to 2000. During this time Japan's central bank increased money supply by 40%, but this resulted in only 3% increase in the Gross Domestic Product, and it has been argued that after this monetary expansion Japan was still not at her natural level of unemployment or output. One possible way to help the economy out of the slump, when the economy is closed and the people are assumed to display adaptive expectations is to use fiscal policy. A sufficient increase in government spending would be able to get the economy to equilibrium at a point, where the LM curve is steep. The massive deficit-financed boost would be able to restore the full employment output. Consider the graph University of Warwick, Macroeconomics 2, Term 1 Lecture notes AS-AD 1. P.31 However, as noted before, this massive expansion can only be achieved by a massive loan by the government. So if the government's budget is in a healthy state, it might be able to run such deficits for a while. This would require a healthy and growing economy. However, if the economy is already running large budget deficits and has troubles with her economy, this task would be difficult to achieve. 
This is the case in Japan: the government already carries a debt larger than its annual GDP, and therefore this option has risks that would be dangerous to ignore. If we relax the assumption that the economy is closed, another solution to the problem becomes available. If the economy is open, a simple analysis would suggest that monetary policy can be made effective again. If the central bank increases the money supply, this should in principle decrease the demand for the domestic currency, and as a result it would depreciate. This depreciation would make domestic goods cheaper abroad, thus increasing the demand for them. This effect would take place through the J-curve as shown below: the initial effect of the currency depreciation would make people poorer, since their purchasing power has decreased, but as time passes, the price of domestic goods abroad falls and net exports increase. As a result, the economy should improve; hence monetary policy would be effective again. However, as noted in Blanchard, the depreciation of the currency happens through the interest parity relation: an increase in the money supply shifts the LM curve down, and as a result the domestic interest rate falls, resulting in currency depreciation. In the case where interest", "label": 0 }, { "main_document": "law is not an individualistic principle of self-determination, but a structural principle of self-determination' where the 'self' is part of the 'majority' enabled and constrained by the rules and procedures of democratic life. The third form is that of individual autonomy, resting on the idea that individuals should control their own lives. A number of writers have advanced the principle of autonomy in its individualistic sense as being essential to the justification of democracy.
It is also argued by some democratic theorists that, as a democracy encourages people to take responsibility for their political lives, it is instrumental to human development. Others are of the opinion that 'democracy represents fair terms of a social contract among people who share a territory but do not agree upon a single conception of the good' (Gutmann, 1993, pp. 412). Democracy is thought to consist of co-operation, negotiation and thus a fair moral compromise from this contractarian viewpoint. However, this perspective leads to the two main paradoxes associated with democracy. Often, a voter may vote a particular way depending on his or her personal beliefs on the issue. If the majority votes the other way the voter is caught in a clear paradox as he or she being a reasonable person and democrat must now believe contradictory things: that the outcome is not justified (by the best reasons) and that it is justified (because the majority voted the other way.) Though this paradox disappears on a more defensible understanding of the nature of the democrat's beliefs (Honderich, 1973; Pennock, 1974), a second paradox explains that voting is irrational from the point of view of the cost-benefit calculation of an individual in a large electorate. If most people are cost benefit calculators, then democracies are 'doomed to collapse under the weight of all the rational free riders on the system' (Gutmann, pp. 419) If most people are not, democracies depend on the irrationality of citizens. Both these paradoxes can be developed into arguments against democracy which in turn can be grouped under those that are ideological and those that are practical problems. 
Tocqueville identified two of the former in particular, the first being tyranny of the majority, when he wrote 'I can imagine no permanent protection against the most galling tyranny; and a greater people may be oppressed by a small faction, or by a single individual with impunity' (Tocqueville 1835) where the view of the majority effectively silences those with other beliefs. In a democracy there is also always the problem of a permanent and entrenched minority whose views and opinions are never heard or acted upon simply because the majority is always against them. The other problem Tocqueville identified was that of individualism: 'a mature and calm feeling, which disposes each member of the community to sever himself from the mass of his fellows' and while drawing apart with his family and his friends, forming a circle of their own, and finally move away willingly from society. Tocqueville saw individualism as a problem as, at first while it would only 'sap' the virtues of", "label": 1 }, { "main_document": "no extra or additional heart sounds. Evidence of a grade 3 pansystolic murmur heard in all 4 areas, loudest at the apex. No carotid bruits Palpation: Trachea non-deviated, expansion normal Percussion: Dullness to percussion at both lung bases, posteriorly Auscultation: Bilateral expiratory crackles, heard most clearly in the left lower base. Generalised wheeze. Both the spleen and liver were easily visible when the patient was supine Palpation: Liver was palpable five fingers breadth below the subcostal marginSpleen was palpable to the umbilicusNo other masses felt. Kidneys not palpable. Percussion: Liver was percussed two ICS superiorly. Shifting dullness elicited. Auscultation: No liver or splenic bruits heard. Bowel sounds present. no abnormalities found on gross examination. 
The evidence from the history and examination would indicate that the most likely diagnosis for Mr is heart failure (RHF or CHF). This diagnosis is supported by the dyspnoea, orthopnoea and peripheral oedema, as well as by the evidence of a systolic murmur on auscultation, which would add additional strain on the heart, and the bilateral basal lung crackles, indicating pulmonary oedema. Anaemia could be an additional complication, exacerbating the heart failure and subsequent dyspnoea, as there was evidence of pallor in the conjunctiva. Liver failure cannot be ruled out, due to the marked hepatomegaly and ascites. Renal failure is less likely; although there was ascites, there was no loin tenderness and the kidneys were not palpable. Again, this diagnosis cannot be ruled out, and blood tests will help to establish current renal function. In the history, Mr Apart from a resting tremor, there was no evidence of thyroid disease on examination; blood results will give the definitive diagnosis. Mr He appears to understand his CLL very well, as well as the treatment he has for it and why he has it. His main complaints for this admission are his insomnia and the rapid deterioration of his breathing. Psychologically, he states that he is low in mood; this will need to be addressed, as any feelings of depression will exacerbate his physical condition, which is already a delicate one. Hb = 12.2 (L); had been given 3 units on Global moderate pericardial effusion of about 1.8-2 cm. No evidence of diastolic RV collapse. IVC non-dilated. Severe tricuspid regurgitation and mild aortic stenosis. Pericardial effusion can be caused by CCF and by hypothyroidism; both of these need to be treated to help reduce the effusion. TSH = 14.18; thyroxine increased to 100 micrograms. Spiking temperature; septic screen and chest X-ray organised (NAD). Transferred to the care of Dr Frusemide increased to 40 mg BD. Evidence of his CLL, with anaemia, raised WBC count and reduced platelets.
There is also evidence of dehydration, with some possible renal failure, as although the number is within the normal range, Mr Repeat echo: Pericardial effusion 2.0 cm posteriorly and 1.7 cm laterally. A pericardiocentesis would be difficult due to the size and position of the effusion. Would need to transfuse platelets before the procedure to reduce bleeding risk. Haematology review: Consider blood transfusion. Need to make sure the patient is not fluid overloaded (which would exacerbate the current heart failure), however,", "label": 1 }, { "main_document": "diversity very well; 'It means relating to, and working with, people who hold different views, bring different qualities to the workplace, have different aspirations, and have different customs and traditions'. Tensions between employees can be disastrous, particularly if there is a language barrier leading to a breakdown in communications. In order for managers to manage diversity successfully, it is important they understand potential differences in culture and societies between employees and address these issues to avoid workplace clashes. 'Hospitality managers, charged with the tasks of motivating, coaching, and mentoring service employees, must focus on one of the most central, yet neglected, aspects of communication- effective listening\" (Brownell, 1996:201) Brownell also states, 'Understanding effective leadership requires perspectives that take into account the dynamic nature of hospitality organisations and that recognize the unique characteristics of each employee'. This is a very important point because managers must recognise that every employee is different. 
This approach helps make staff feel valued and encourages them to offer their best service to the guests; 'The hotel sector is likely to exhibit a wider and more diverse range of organisational and staffing characteristics than might necessarily be found in other sectors of the hospitality industry' (Mullins, 2001:305). Other characteristics of the workforce include full-time or part-time employment and gender. Whether an employee is full-time or part-time can be seen to affect whether a hospitality service is genuine. For example, is a part-time student, trying to pay off a student loan, offering a less genuine service than a full-time employee, and to what extent do the student's motives for working really matter? Just over half of the employees within the industry (54%) work part-time (Mod 3101, lecture 7: Angela Maher), which demonstrates the diversity of the workforce and highlights the need to ensure good communication exists. For instance, a part-time receptionist in a hotel must ensure that everything is in order for the full-time receptionist whose shift starts after she has left. Similarly, gender within the industry can cause problems. For example, stereotyping is very common; we all do it, but it should be discouraged. Women are often stereotyped as the 'caring, mother figure', and many see them in the industry as working in housekeeping or at reception. This could cause problems within the industry when a woman manager is in charge of male employees. If employees have stereotypical views of women, they may feel reluctant to take a woman manager seriously, leading to problems. Numerically, women dominate the workforce, with 67% of employees being female (Mod 3101, lecture 7, Angela Maher).
After all 'It is the interaction of Some form of It is through the All these issues have to be addressed to be able to run any successful commercial hospitality establishment; 'Diversity requires an understanding that there are differences among employees and that these differences, if properly managed, are an asset to work being done more effectively' (Bortz et al 1990: 321). The hospitality industry is a very competitive and large-scale industry and as a result repeat business is very important. As Carper estimated 'it", "label": 1 }, { "main_document": "beyond the scope of this essay. The implicit implication of this is that a country exports the services of its abundant factor and imports the services of its scarce factor, hence trade in commodities also exchanges surplus factor services between countries. The assumptions of the model are restrictive with regard to its applicability to more advanced theories of international trade. The assumption of factor mobility between industries is only relevant to the long run position of an economy. That is, the transition of factors of production from one industry to another is subject to potentially large time periods. Capital may require time to depreciate in some uses and for new investment to take place in others. Similarly, labour may need considerable re-training in order to acquire the skills necessary to work in a different industry. The assumption of identical and homothetic preferences across countries is also clearly unrealistic in the real world. Cultural and socio-economic factors are likely to be significant in the formation of preferences, and hence community indifference curves, across countries. The model does not allow for the possibility of Factor Intensity Reversals: these occur when a good can be capital-intensive in the capital abundant country, but labour-intensive in the labour abundant country. 
Within the model, this would imply that both countries would export the same good - clearly an impossibility. The extent to which the Heckscher-Ohlin theorem is robust can be considered in terms of the extent to which the assumptions can be relaxed while the theorem still holds. If production did not exhibit constant returns to scale Similarly, if production technologies were not identical in both countries, this would disable the use of the theorem. Some propositions of the model would be unaffected however, for example the impact of a tariff on real wages in H only depends on technologies in H, not on those in F. Recall that the degree of homogeneity would therefore be Caves, R.E., Frankel, J.A. & Jones, R.W. (2002), If the factors of production were heterogeneous, this would mean that the two-factor version of the theorem would not be of any relevance, since there would be a large variety of capital and labour factors. Despite this, a short-run version of the theorem may still be applicable, if factors are locked into contracts that prevents their mobility between industries. The model assumes that factors are immobile internationally, due to complications involving trade in factors between countries However, it is possible to relax this assumption, since the theorem is expected to hold even under simultaneous commodity and factor trading Factors may \"migrate\" across countries in order to take advantage of differences in relative factor returns, w and r. Markusen, J.R., Melvin J.R., Kaempfer, W.H., & Maskus, K.E. (1995), It may be possible for market distortions to overturn the theorem if the effects were severe enough to offset the fundamental influence of differences in relative endowments within the model. Distortions such as tariffs imposed on imports can impose restrictions by acting as barriers to trade. 
However, even if some of the trade in goods were eliminated,", "label": 1 }, { "main_document": "The Sony Ericsson S710a mobile telephone, part of the S700 family, belongs to Sony Ericsson's new generation of mobile phones, equipped to the latest standards and to be launched in the United States and in European markets, including the United Kingdom. It features a 1.3-megapixel digital camera, a Bluetooth wireless interface, and a memory expansion slot (with which a Sony Memory Stick Duo can be used to increase the system memory available for storage). The telephone also has a large, high-resolution colour screen and 32 MB of main memory. A full specification of the unit can be found on the Sony Ericsson web page referenced at the end of this report [1]. Part of the launch of the new S710a handset is the design and development of software for the product. One such piece of software to be produced is an 'Appointment Module' designed to work in connection with the SIM card, address book, and other functionality already present in the phone's bundled software. Sony Ericsson desires that this appointment module should allow users to enter appointments into a calendar interface, stored in the handset, which will hold the contact details of the parties involved in each appointment, so that when an appointment is due, each person concerned may automatically be issued a text message reminder by the software. This report is concerned with how the requirements of Sony Ericsson were met by the design of such an appointment module, codenamed \"CellCal\" for the duration of this project. Several stages in its design were undertaken sequentially in order to produce a well-designed and appropriate solution: The design of an appropriate solution to the problem presented by the client, Sony Ericsson, requires the use of an object-oriented approach to coding a suitable application.
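The object-oriented structure that such an appointment module implies can be sketched in outline. The sketch below is illustrative only: the class names (Contact, Appointment), the field names, and the buildReminders method are assumptions made for this report, not part of Sony Ericsson's actual bundled software, and a real handset module would hand the messages to the phone's SMS subsystem rather than return them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the CellCal core classes; all names are illustrative.
class Contact {
    final String name;
    final String phoneNumber;

    Contact(String name, String phoneNumber) {
        this.name = name;
        this.phoneNumber = phoneNumber;
    }
}

class Appointment {
    final String venue;
    final String when; // a real module would use the handset's date/time type
    private final List<Contact> attendees = new ArrayList<>();

    Appointment(String venue, String when) {
        this.venue = venue;
        this.when = when;
    }

    void addAttendee(Contact c) {
        attendees.add(c);
    }

    // Build one reminder message per involved party; in the handset these
    // would be passed to the SMS service when the appointment falls due.
    List<String> buildReminders() {
        List<String> messages = new ArrayList<>();
        for (Contact c : attendees) {
            messages.add("To " + c.phoneNumber + ": reminder for " + c.name
                    + " - appointment at " + venue + ", " + when);
        }
        return messages;
    }
}

public class CellCalSketch {
    public static void main(String[] args) {
        Appointment meeting = new Appointment("Room 101", "2004-11-05 14:30");
        meeting.addAttendee(new Contact("Alice", "+441234567001"));
        meeting.addAttendee(new Contact("Bob", "+441234567002"));
        for (String msg : meeting.buildReminders()) {
            System.out.println(msg);
        }
    }
}
```

Each Contact and Appointment instance carries the same attributes (name, venue, and so on) with different values, which is exactly the class/instance distinction that a UML class diagram of the system would capture.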
Use of an OO language allows for better interaction between the separate modules of the phone, greater flexibility and easier debugging, and is ideally suited to the problem presented. The need to store multiple appointments in a calendar, with associated contacts, already lends itself to an object-oriented design, with individual Appointments and Contacts forming base classes themselves. Each contact or appointment has a certain number of properties, or 'attributes', such as Name, Date or Venue, which are standard for every instance of a contact or appointment, though multiple instances may be required with different values in those attributes - for example, different contacts. These classes can be accurately modelled in an object-oriented language, but before this can be done, it is important to lay the foundations of this modelling by designing the system in the Unified Modelling Language (UML). The following pages contain three diagrams that attempt to model the proposed CellCal system in UML: (i) a Use Case diagram that illustrates the interaction of the user and the components of the phone handset to perform different actions or 'use cases'; (ii) a Class Diagram that expresses the design of the system in terms of the individual classes to be implemented and their interactions with each other; and (iii) Interaction Diagrams, consisting of a Sequence and a Collaboration diagram, to represent the", "label": 1 }, { "main_document": "A good example of a portrayal of femininity comes from the short story \"The Joy Luck Club,\" written by Amy Tan. Tan's character June is a member of the second generation of a Chinese family who immigrated to America from China during the Second World War. Her mother is a particularly prominent character in the story, despite the fact that she passed away two months before the time in which the story is set.
The strength of her character is contrasted with several aspects of the short story; firstly it is contrasted with the terrible conditions of the inhabitants and refugees of Kweilin, which Tan describes as \"a city of leftovers.\" Secondly, June's mother's strength of character can be contrasted with the lack of a convincingly powerful male presence. June's mother's authority is dominant throughout the story and there are few references to any patriarchal values; only when the text is examined between the lines can one notice a couple of stereotypes of male and female roles within society and within the household. Thirdly, her strength of character is contrasted with irresolute aspects of June's own character. The time this story was written, in the year 1989, also has some bearing on the views that are expressed in the book: one can draw comparisons between Tan's writing and that of authors such as Sara Paretsky, whose detective fiction novels place great emphasis on strong female characters. There are several other examples of female authors rising to success through their depiction of strong female characters during this time period, and one may even claim that Amy Tan is an author who falls into this category. The Granta Book of the American Short Story: Page 601. An important focal point for the development of June's mother's femininity and indeed, June's identity, is the origin of the Joy Luck Club. The story begins in the first person with June telling us that her mother has been dead for two months and that her father has asked her to \"be the fourth corner at the Joy Luck Club.\" This creates an instant curiosity in the reader's mind as to what the Joy Luck Club actually is - as it is also the title of the story. Compared with \"The Monkey House,\" the words are used to end the story instead of to begin it. 
We soon learn that the Joy Luck Club was the creation of June's mother during her awful experience of Kweilin at the time of the invasion of China by the Japanese in the Second World War. In China, people normally regard Kweilin as a place of great natural beauty. June's mother describes this in Chinese: The Granta Book of the American Short Story: Page 599. The Granta Book of the American Short Story: Page 600. June's mother describes Kweilin very poetically, as if it is a fairy-tale land, describing the hills as \"like giant fried fish heads trying to jump out of a vat of oil.\" At first this may strike the English or American reader as a rather strange image", "label": 1 }, { "main_document": "prove the \" Fast Mapping \" and \" Quick Incidental Learning \" (QUIL) theories? Let's introduce and analyse them. First, we need to know some generalities about the learning of new words by children: once the child recognizes an object, he/she attributes some properties that are representative of it and then labels it (e.g. 'dog' is characterized by four legs, hairs, barking...); but how fast can he/she achieve such a complex process? Before the theories we are going to present, it was said that \" children needed several exposures to a new word to start acquiring the meaning \" of it. But the experiments led by Carey and Bartlett in 1978 somewhat prove the contrary. Indeed, the theory of \" Fast Mapping \" shows that an 'initial quick and incomplete comprehension of a new word's meaning (a form of understanding) can take place on basis of only a few exposures'. Its role is to help children to acquire some vocabulary more rapidly and to prove that they have the abilities to do so. Hence the first study carried out by Carey and Bartlett exposed a few children of pre-school age to objects like trays or cups of different colours each: two were known colours (blue and red) and the other one introduced a new colour, \" chromium \", that the child heard for the first time there. 
In this context, the linguists asked the children to 'bring [them] the chromium tray or cup' (that is to say the new concept) in contrast with the blue and red ones. By logical and still unconscious reasoning the children deduced that this new colour was neither blue nor red and so brought the dark green or 'chromium' object: an explicit explanation was not necessary for them to understand what the linguists meant. After several weeks and exposures to the new word, the children knew something about it without all the same totally mastering and distinguishing its full meaning. If we analyse the results of this experiment, the general idea we get is that going from comprehension to real production of the word is still a slow and hard process and that several exposures to a new word are still required before it is spontaneously produced. Moreover, there are limits to this fast mapping: the child recognizes the word only in a contrasted context, which is helpful, because he/she can draw a parallel between known and new words. Yet, though we note a slow and partial mastery of a word's learning, we also note a partial sense of it and the possibility for the child to use it casually in his/her vocabulary. Let us now introduce another theory of child mapping: the \" Quick Incidental Learning \" or QUIL, which is a process showing that children possess even more robust abilities to map new words, or in other words, a sort of speedy process of Fast Mapping. This new theory developed by Rice and Woodsmall in 1988 is opposed to a 'forced acquisition' (e.g. at school, children have to learn from adults and books", "label": 0 }, { "main_document": "the ORAC Other research, comparing the FRAP and ORAC methodologies, has shown that different trends may be detected between the two methods. Research has concluded that the differences between the results obtained are likely to be a result of the basic chemistry upon which the two principles work. 
A research paper published in 2002 suggested that the ORAC method was chemically more relevant to chain breaking whilst the FRAP methodology had limitations including interference (Ou The Trolox Equivalence Antioxidant Assay (TEAC) has been far less extensively researched. It is however another relatively simple assay to perform and as such has been used extensively by labs reporting various antioxidant assays. Much of the work into the TEAC methodology has been conducted by Catherine Rice-Evans in London and has particularly looked at the method for assessing the antioxidant potential of various body fluids - including plasma (Rice-Evans, 1994). Because the TEAC assay is an end point assay, the reaction rate differences between oxidants and antioxidants are not detected, which means there are limitations to this methodology (Huang Any antioxidant assay is only able to act as a measure of the antioxidant activity of the chemicals measured in that particular assay (Huang In principle, the various different assays that may be used to measure some form of antioxidant potential all work on the same basic idea. In all assays, the accurate measurement of antioxidant capacity requires both the inhibition time and inhibition degree to be taken into account (Huang All the assays, like the FRAP assay, work by comparing absorbance changes of a test sample against those of a known standard (Benzie and Strain, 1996). The absorbance changes (or loss of fluorescence) are an indication of the extent of damage caused by the reactive oxygen species and can therefore be used as an indication of the effectiveness of the antioxidant power of the sample (Ou First, the three antioxidant assay procedures (FRAP, TEAC, ORAC) will be used to set up three respective calibration curves. 
Once this has been done, and the three procedures have been understood and verified for repeatability, they will be used to assess the antioxidant potential of a range of samples. Each sample will be assessed for antioxidant potential with each of the procedures and the measurements will be made in triplicate (or five times depending upon time availability). The samples will be prepared and then frozen so that the same sample can be tested across each of the three assays - this will be done to try to eliminate any variation that may occur in sample preparation. For each of the repeats a fresh sample will be prepared and again split into three so that it can be tested across each of the assays. With each of the repeats, the assays will be conducted in a different sequence so that the same assay isn't used for the fresh (as opposed to the frozen) sample on each occasion.", "label": 1 }, { "main_document": "is a flagship hotel (Hilton Hospitality Inc, 2004). This is an area Hilton could expand on, as they already operate several establishments in the upscale market. If this trend continues, Hilton will be able to take advantage of it. Environmental concerns are becoming more widespread in the hotel industry. As the industry is a fairly clean one, with many voluntary environmental policies, there is very little legislation governing the protection of the environment within the hotel industry (Pryce, 2001). The protection of the environment comes under the heading of sustainable tourism. The World Tourism Organisation (in Pryce, 2001, pg 97) defines sustainable tourism to: Many hotel companies have developed environmental policies (Page The Travel and Tourism Analyst (2001) cites a survey that shows how many of the top 10 European hotel chains have implemented an environmental policy. 
Hilton Group PLC in the UK has the Hilton UK & Ireland Environmental Sustainability programme, which aims to train all of their employees in environmental sustainability by the end of July 2004 (Mintel, 2004). This will make Hilton the \"industry leader in terms of caring for the environment\" (Mintel, 2004, pg 38). The implementation of policies such as these is advantageous to both the environment around the hotels, and the hotels and their owners themselves. For the environment, it means that water use may be reduced, more recycling may take place, and energy consumption may be reduced (Cooper For the hotel companies, implementing an environmental policy can improve company image and attract environmentally aware tourists, as well as reducing their costs, such as energy and water bills (Pryce, 2001). For the hotel industry, as long as world events remain stable, the trends should continue to develop. The hotel industry is still recovering from the world events of the last few years, but this recovery is set to continue. Hilton has announced 19% profit increases in the four months up to October 2004, and this is set to continue (Paton, 2004). The development of more budget hotels cannot continue forever, as eventually the market will become saturated. However, trends such as increased concern for the environment can always continue, and are likely to, whatever happens in the world. For Hilton, the main risk is further investment in the budget hotel sector, and if they wish to compete with these operators, they may wish to invest themselves, and reap the benefits. Other, less significant trends that are affecting the hotel industry include the consolidation of larger operators, which puts small, independent operators at risk, and more standardisation of the service and product within the industry (Sigma and Baum, 2003). For the hotel industry, times ahead are uncertain, with the continuation of the war in Iraq, and more threats of terrorism. 
However, there is also hope for the industry with increased domestic tourism, and for Hilton, the increased popularity of the upmarket hotel sector, in which they are already well integrated.", "label": 1 }, { "main_document": "The white-throated capuchin monkey, The word capuchin is a French word taken from Italian 'cappuccino,' meaning hooded one, a reference to the markings on the monkeys' heads which resemble the cloaks worn by the order of Capuchin monks (Flannery/2004). The species is perhaps one of the most well known of South American primates due to its unfortunate status until recently as the preferred choice of monkey for organ grinders and street entertainers (Moynihan/1978). They are the most typically monkey-like of all the ceboids and according to Moynihan (1978), 'certainly are a climax of ceboids evolution'. Their high level of cognitive ability has earned them the title 'the chimpanzees of the new world.' As Garber and Paciulli (1996) state, enlargement of the neocortex in Only the great apes, dolphins, and humans have a greater degree of encephalization and complex patterning of cerebral fissures than capuchins (Marino et al/1994). Studies of both wild and captive individuals indicate that capuchins show the highest degree of manual prehension among New World primates and exhibit advanced tool use more often than any other species of monkey (Garber/1996). Indeed, only white-fronted capuchins, chimpanzees and orang-utans have been observed to use tools in their natural habitats. Additionally, capuchins have frequently been observed using rocks to break open palm nuts and oyster shells, and at at least one study site, La Suerte Biological Field Station, Costa Rica, white-throated capuchins have been observed to use twigs to probe holes in trees whilst foraging (Garber/1996; own observations). There is also substantial evidence indicating that capuchins use several species of plants in behaviour indicative of self-medication (Weldon et al/2003; Baker/1996). 
The superficial similarities between capuchin and great ape behaviours are considered sufficiently important that the International Journal of Primatology has devoted the entire contents of one supplement to a comparison of the Genus and chimpanzees, both common, All New World primates including capuchins are grouped together under the infraorder Platyrrhini and the superfamily Ceboidea, although the two are essentially synonymous since Ceboidea is the only living platyrrhine superfamily (fig. 1). Recent genetic studies have shown platyrrhines to be a monophyletic group and to share a common basal ancestor with both humans and African primates. The two groups are estimated to have last shared a common ancestor approximately 38 million years ago, at the end of the Eocene (Horovitz et al/1998; Strier/2003). Their presence in the New World is interpreted as the result of a single dispersal event, and since by this time South America and Africa had already drifted about half as far apart from each other as they are today, it is proposed that they reached South America by rafting or island hopping in some way (Fleagle/1999; Strier/2003). Upon arrival the ancestor of all extant platyrrhines underwent a process of adaptive radiation that has resulted in the present primate diversity. The similarities in morphology between Old and New World primates occurred via a process of parallel evolution as a result of exposure to similar environmental and ecological conditions. However, in contrast to Old World monkeys, capuchins, like most New World", "label": 1 }, { "main_document": "projects such as school visits and education purposes (EN 2006 There are many national grants provided mainly to voluntary organisations. It is expected that those grants will be reformed by Natural England in the near future. 
Some examples of existing or past grants are: Reserve Enhancement Scheme, Countdown 2010 Biodiversity Action Fund, Land Purchase Grants, Local Biodiversity Grants, Volunteer Action Grants (EN 2006 English Nature acknowledges the importance of volunteers in carrying out effective conservation activities. There are 1,865 volunteers under English Nature's National Volunteer Project, and the contribution of volunteers is estimated to be equivalent to 7,500 days of support (EN 2006 There has been an increase in the number of volunteers involved, particularly in recent years (EN 2006). The main roles undertaken by volunteers are habitat management of National Nature Reserves, administrative assistance, bird ringing, invertebrate and botanical surveying, livestock monitoring, and organising educational visits and walks as leaders. English Nature also leads a sustainable lifestyle as an organisation through, for instance, the promotion of the use of alternatively fuelled vehicles, recycling initiatives at headquarters, and reducing water consumption (EN 2006 Various research projects have been undertaken and led by English Nature in order to improve the understanding of natural habitats and to utilise the acquired knowledge in the practice of conservation activities. English Nature leads the Biodiversity Action Plan (BAP) and provides information on its achievements and monitoring results. Some success stories reported in the annual report 2005/06 are: the re-introduction of pool frogs, which became extinct in the U.K. in the 1990s; the steady recovery of the Cornish path moss population, reaching its highest level since the start of monitoring; and progress on the accumulation of information for maritime BAP habitats (EN 2006 Red Kite, Hence, the birds were persecuted to extinction, in most cases by being poisoned (RSPB 2006). By the end of the 19 In 1989, English Nature and RSPB set up a re-introduction project at two sites, one in the Chilterns and the other in Inverness in Northern Scotland (EN 2002). 
The habitat requirement for this bird is a combination of deciduous woodland, where it nests, and grassland or farmland, where it scavenges food such as carrion, invertebrates, and small mammals including rats and chicks. Those two sites were selected because the areas were designated as conservation sites and supported large, rich habitats, providing ideal conditions for the first reintroduction attempt. A total of 93 nestlings from the Spanish population were released in the initial five years. In early spring each year, twenty nestlings were brought to the sites at four to six weeks of age. Initially, chicks were kept in wooden cages for six to eight weeks and were released into the wild in the early summer (EN 2002, the Chilterns Conservation Board 2006). By 2001, the population had grown to be well established at both sites, with over 120 pairs in the Chilterns and 32 pairs in Northern Scotland. Following the first success, further reintroductions have begun at the following four sites since 1999: Central and", "label": 0 }, { "main_document": "particle size, that results in reduced algae formation. It was anticipated that the clay would also give both pH buffering and Cation Exchange Capacity (CEC) benefits (ibid). The honeycomb structure of the material also gave a large surface area to be colonised by nitrifying bacteria. It was initially planned to randomise the treatments throughout the glasshouse in order to reduce the effects of environmental differences within. However, due to the heavy reliance on glasshouse staff for feeding, the replicates of the treatments were arranged so that all tanks containing fish were grouped together. Three different treatments, each with two replicates, were used to produce lettuce plants hydroponically. Six plastic tanks contained the following: FeEDTA is intended for use in hydroponic systems when the pH of the nutrient solution is around 6.5 - 7.0. 
However, even at a pH of around 8.0, limited iron is still accessible (Figure 2.1), and since neither the control treatment nor the Aquaponic + Fe treatment was pH-altered, the iron availability was considered constant across these two treatments. The preferable pH range in aquaria to ensure goldfish health and maintain an efficient action of nitrifying bacteria is around 7.5 - 7.8 (Bio-Con Labs inc. no date; O'Neill 2004). The decision was therefore made not to alter the pH of the water in any tanks; however, if iron deficiency symptoms, namely chlorosis of younger leaf tissue (Jones 1997), were observed in either of the two treatments containing iron, this would be reviewed. The water from each tank was pumped into a 1.5m length of rain gutter filled with expanded clay aggregate. The gutters were inclined at the end furthest from the tanks so that water returned to the tank (Figure 2.2). The pumps in each tank continuously recirculated the water. Each tank was covered in plastic sheeting. This was black on the inside to prevent algal growth on the tank walls and white on the outer side to reflect sunlight and prevent the water becoming too warm. Initial construction took place on 6th July 2006. Water was supplied from each tank using a Blagdon MiniPond 700 pump to the substrate-filled gutters via standard 13mm irrigation tubing. The piping was attached along one interior side of the guttering using zip-ties threaded through holes drilled in the gutter. Eight holes approximately 3mm in diameter were punched at regular intervals of around 150-200mm along the section of pipe that ran inside the gutter. The water flow was adjusted using the pumps' control valves to ensure minimum water was lost through splashing whilst still maintaining a rapid flow rate to ensure maximum aeration of the water. 
During a run-in period, when no fish or plants were present, significant water loss had been observed due to evaporation from the surface of the tanks and from splashing as the water returned to the tanks. In order to minimise these problems, black horticultural shade cloth was used to cover the tanks in an attempt to reduce evaporation. The same material was also attached to the lower end of the gutters so the returning water ran", "label": 1 }, { "main_document": "gets the shape described on page 1. Even though in the case of a liquidity trap the price-reduction might not actually be unexpected, it still does not increase consumption. Ibid. P.56 As discussed earlier, the problem facing a country in a liquidity trap is a \"flat\" LM curve, low interest rates, usually accompanied by deflation or expected deflation, and shrinking Aggregate Demand. The government seems to be in a very difficult situation: monetary policy is ineffective and fiscal expansion very expensive concerning the budget deficit. However, there is a solution to the problem that has not yet been discussed. When expectations in the IS-LM model are assumed to be rational, even a closed economy can be improved using monetary policy. Even though monetary policy was said to be impotent in this case, it can actually be effective. The central bank can make the monetary policy function again, if it \" If the central bank manages to create inflation expectations, by announcing a long-term inflation target, say 4-5% annually for the next 10-15 years by increasing money supply and does not change its behaviour to fight the actual inflation when it occurs, it would be able to shift the LM curve downwards, as shown below Paul Krugman: \"Synopsis: Modelling Japan, assuming a Liquidity Trap\". P.8 University of Warwick, Macroeconomics 2, Term 1 Lecture notes AS-AD 1. 
P.39 When the real interest rate manages to fall below zero, and people expect the inflation to continue, they will no longer be indifferent between holding bonds and high-powered money. As cash becomes the preferred asset, people will consume more. Also, as borrowing becomes \"cheap\", investors will be encouraged to take loans and invest more. This increase in investment increases income, and as the prices still keep increasing people will be consuming a larger part of the additional income rather than saving it. As a result of this the economy will be able to get out of the liquidity trap to equilibrium, where the LM curve is no longer flat and nominal interest rates are above zero. As the nominal interest rates increase above zero, the central bank will be able to use that as a tool to increase economic activity, if the economy starts to slow down later on. Even this policy is not without dangers: the central bank has a great responsibility when managing the inflation, so that it will actually stay under control and not reach hyperinflation, which is highly undesirable. As shown above, the problem facing an economy in a liquidity trap is not to be taken lightly: a flat LM curve and a stagnating economy are difficult issues to deal with. The central bank can tackle this issue by credibly committing to a long-term inflation target, encouraging investment and consumption. Fiscal policy and an increase in government spending can be used as a tool to get out of such a situation as well, but can be very risky if the country is already facing large budget deficits.", "label": 0 }, { "main_document": "problem with infinity, but he is more acutely concerned with the very nature of causality. Kant's problem is that a 'sufficient cause' cannot be found in conditions which are caused, so it follows that this cannot be the only causality. A form of 'absolute spontaneity' must exist; \"there must exist an Bennett................... 
In the solution to the thesis, Kant makes it clear that the freedom he is considering is specifically transcendental, and very different from the common psychological use of the word (which constitutes a merely empirical investigation). Transcendental freedom may have nothing to do with human freedom or morality, but we need to support a notion of transcendental freedom to hold practical reason. The 'big bang' may have been an example of an initial uncaused cause. If it was, this would be an example of (and also prove) transcendental freedom. This is a purely metaphysical concept that concerns the first cause. He is only concerned with the idea of a spontaneous causal origin. He does not feel the need to account for how we come to have this faculty, since we cannot account for natural causality either, but simply accept it a priori; \"How such a faculty is possible, is not a necessary enquiry;\" ( Despite the fact that Kant feels no real need to prove the reality of 'transcendental' freedom, he insists that practical freedom presupposes and relies upon transcendental freedom. Kant also holds that we have to accept practical reason because it is essential for morality, and even if we hold practical reason true, this only provides us with a See later?? Kant is generally impressed by the philosophy of David Hume who awoke him from a 'dogmatic slumber', after which he re-evaluated his grounds for believing it was possible to create an unchanging metaphysics. The antithesis clearly presents an argument against Hume. Kant is looking for a stronger sense of freedom than Hume. The freedom that Kant thinks is essential excludes causal determinism. Kant's meaning of freedom allows humans to originate a new causal series, a sort of spontaneous originality. Freedom is not just the power to choose what one wants or desires (as Hume held), but the power to pursue our moral oughts (where 'ought' implies 'can') and to act as a spontaneous first cause of a chain of events. 
Passmore. Hume is a defender of the 'all nature' account that Kant obviously opposes. Hume, in Kant's words, would argue that if you do not accept that the world has a mathematical first then there is no need to look for a dynamical first. Nature is unlimited and we should not search for a first cause. The substances of the world have always existed and the changes of the conditions of these substances have always existed. Kant supposes the consequences of a naturalist viewpoint; if we suppose there is a transcendental freedom then nothing can precede a free action to determine it. But, Kant says, \"every beginning of action presupposes in the acting cause a state of inaction.\" ( Transcendental freedom then violates the natural law of cause", "label": 1 }, { "main_document": "reduction, an objective of most organizations today, is also possible with a department that has the authority to oversee the activities of each operating unit. From its unique perspective at the top of the organization, a centralized group can focus on controlling the growth of total purchased part numbers. A central group can develop a single coordinated approach to supplier development and support it by providing corporate resources or developing uniform guidelines. A central group can also help to integrate first-, second-, and third-tier suppliers into a supply chain management program. Because significant resources and central coordination are needed for such an effort, centralized purchasing is a must to ensure the success of this strategy. Finally, a central purchasing group can effectively coordinate a supply base reduction program. This makes sure that the reduction process supports the goals of the entire organization and not just a particular purchasing department. When conducting competitive and make/buy analysis, Cummins Engine performed several critical tasks. The first was to determine the appropriate level of abstraction (that is, the proper unit of analysis). 
For example, due to the sheer number of individual components in most final products, it would be nearly impossible to make a sound make/buy decision on each and every individual component. Therefore, it was important to aggregate, or combine, the level of analysis moving from the individual component or subassembly level all the way up to the assembly or complete system level. In the Cummins example, managers considered a backhoe loader from the systems perspective, treating it as a system made up of a variety of assemblies and subassemblies, (the drivetrain, chassis, cab, engine, and so forth). Using the backhoe loader's engine as an example of a subsystem, the engine can be further broken down into a series of complex assemblies and subassemblies like the fuel delivery system and the power cylinder. This level of abstraction can then be further redefined in terms of individual parts and components such as pistons, rings, pins, and sleeves. A firm should analyze each level of abstraction and decide the appropriate level that the firm will use in determining its critical manufacturing requirements. Firms facing the strategic make/buy decision must painstakingly evaluate the entire hierarchy of components, subassemblies, assemblies, subsystems, and systems for each of its major product lines to determine which subsystems and assemblies are essential to the firm's competitive position. This analysis should include future anticipated product generations as well. The character of the make/buy decision will vary widely among firms and among industries because of the variation in perceptions of core competencies, goals, and objectives. In addition to the identification and consideration of core competencies that will provide direction to the make/buy decision, the firm must also evaluate its competitive priorities in terms of overall business strategies. 
The nature of the overall business strategy should dictate guidelines affecting how to conduct the competitive and make/buy analyses. Companies should consider outsourcing their procurement functions for the same reasons they might outsource other operations, such as payroll or IT. Historically, the top rationale is that the work", "label": 0 }, { "main_document": "the frameworks and models are developed on the basis of large companies. However, the problems encountered by SMEs are different, requiring different approaches. The aspects of specificity in SMEs might have effects on the development and implementation of ISS and therefore should be considered. Blili and Raymond (1993) state five characteristics of the specificity of SMEs with respect to strategic ISS: environment specificity, organizational specificity, decisional specificity, psycho-sociological specificity and information systems specificity (Appendix). In the Rembitt case, the challenges in figure 3 encourage a radical transformation of its information systems. A system integration approach is introduced as a solution by the new Managing Director (MD), Alan Thompson. Systems integration (SI) is becoming one of the major drivers for ISS, as demonstrated by the massive investments in Material Requirement Planning Systems and later Enterprise Resource Planning Systems, extended forward to integrated Customer Relationship Management Systems and Vendor Management Systems with the channels (Finnegan, 2006). However, the complex nature and functions of SI have made it difficult to define and bound (Wainwright and Waring, 2004). Boykin, Corbitt and Sandoe (2001, p.7) define SI as \"(bringing) isolated information systems together with the goal of providing a whole or complete information resource for organizations\", consisting of external integration with suppliers and customers and internal integration within the organization. 
Most theories and literature are developed on the basis of research in large corporations (Platts, 1995). As for SMEs, whose main goal is to survive in the market and seek growth (Levy and Powell, 2005), it is arguable whether they are equally well served by allocating scarce resources to SI. Finnegan (2006) provides a usable SI framework for an SME setting (Figure 4). This model provides a means for SMEs to achieve SI, which is essential to developing an ISS (Figure 5). The Rembitt case is discussed as follows. Top sponsorship is particularly important in SMEs because of their highly centralized structure and rigidly controlled funding (Blili and Raymond, 1993). Due to limited resources and investment, the support gained from top management is crucial in the implementation of SI projects. Bottom sponsorship gathers feedback and wins support from the users down the hierarchy. This provides a channel to address the need to understand and evaluate the current information systems investment. Both top and bottom sponsorship create the pathway for the communication of existing information (Figure 6) in order to achieve a better understanding of current information systems (Business Process) when formulating the strategic opportunities from information systems investment (Strategic Content). The new MD, Alan Thompson, has in-depth knowledge of and experience with SI and strongly supports the development of a new unified information system - Fourth Shift. However, his enthusiasm has not been passed on: bottom sponsorship is weak, as demonstrated by the hesitation and reluctance to change amongst staff. One of the major parts of SI is matching the needs of suppliers and customers. An integrated systems approach is critical for companies competing worldwide. 
Successful integration is determined", "label": 0 }, { "main_document": "Reasonable, 2006 There also appears to be a difference between the combined (objective and subjective) test Practitioners must be sure that where there is reasonable suspicion sufficient to lead the practitioner to submit a report to the relevant authorities, they can nevertheless assist in the transaction and avoid the risk of constructive trusteeship. see: Twinsectra Ltd v Yardley and Others [2002] UKHL 12, per Hutton, LJ., para 27 Moffat, G., (op.cit.), pp.738 ibid Where a practitioner is unsure whether or not there is a breach of trust he is only \" The practitioner would only be a constructive trustee and liable for the breach of trust if \" The mere suspicion required when reporting under POCA falls significantly short of dishonesty Royal Brunei Airlines Sdn. Bhd. Appellant v. Philip Tan Kok Ming Respondent [1995] 2 A.C. 378 (Privy Council), at 390 Tayeb v HSBC Bank Plc. [2005] 1 C.L.C. 866, per Colman J., at 897 Moffat, G., (op.cit.), pp.738 Tayeb v HSBC Bank Plc., (op.cit.), per Colman J., at 897 \" This is a result of the POCA's definition of criminal conduct, as an act which \" This has criminalised conduct which might not be unlawful in the country in which it occurred. McCluskey, D., (op.cit.), pp.202 Proceeds of Crime Act 2002, section 340(2) A commonly cited problem caused by section 340 is that of the Spanish matador, who is a revered celebrity in his country; however, his proceeds would be regarded as criminal in England and Wales. McCluskey, D., (op.cit.), pp.202 The government sought to rectify this difficulty for legitimate businesses by introducing changes to the POCA through the Serious Organised Crime and Police Act 2005 (SOCPA). 
A person now no longer commits an offence \" Serious Organised Crime and Police Act 2005, section 102, amending: Proceeds of Crime Act 2002, section 327, 328, 329 Certainty that the act was not unlawful in the country and time in which it occurred would, therefore, constitute a defence. This in itself creates difficulties: in a practical context, it may be necessary to take legal advice from local lawyers to be This will necessarily incur increased costs to businesses handling overseas transactions. Furthermore, this defence is subject to the further proviso that it \" Included in this list are any crimes which would be punishable in the UK by imprisonment with a maximum term of more than 12 months. ibid see: Statutory Instrument no.2006/1070 the Proceeds of Crime Act 2002 (Money Laundering: Exceptions to Overseas Conduct Defence) Order 2006, section 2(2) McCluskey, D., (op.cit.), pp.202 UK practitioners therefore have to comply with two legal regimes. This increases costs to businesses and increases the number of compliance checks to be undertaken. Practitioners might, justifiably, decide to avoid the expenditure of time and resources by making a disclosure to the SOCA regardless of the availability of this defence. It has already been argued that, \" Home Office, Organised and Financial Crime Unit, 2005, Money Laundering Provisions in the Proceeds of Crime Act 2002 as Amended by the Serious Organised Crime and Police Act 2005, Annex A,", "label": 1 }, { "main_document": "it comes to multi-carrier modulation (OFDM), the system gets more powerful and complicated. So we use the IEEE 802.11a (Wireless LAN) model shipped with Matlab. When running this model, we can not only recognize the transmitter, channel and receiver parts clearly but also acquire a better understanding of each component's function. By examining the contents of the modulator banks, we will find that the NASA coding is used. 
By noticing how the OFDM data, the pilots and the training sequences are assembled together, and how each frame is padded with 11 zeros at the end and extended with a cyclic prefix, we will find that almost all the solutions we have tested and studied before are combined in this advanced communication system, where each element works efficiently. In the no-fading channel, by changing the channel SNR, it is easy to find the SNR range for each modulation scheme. Likewise, we will also plot the PER against SNR in three propagation channels and indicate which portion of the SNR range uses which modulation scheme and their separate data rates. In a word, this implementation is a general application of all the methods discussed in previous parts. After the whole process, further issues and problems which arose with the practical application are discussed and analyzed in the conclusion part. For instance, the preference for more reliable slow data transmission or faster but less noise-immune transmission; the choice between PSK and QAM; the trade-off between simpler coding schemes with CRC and retransmission of failed packets and more complicated coding schemes where no retransmission is possible; the selection of modulation schemes given a required SNR and a notionally acceptable PER; the other features needed when modeling the wireless communication; and the other modulation and coding schemes that are not discussed here but are also popular in digital data communication systems, etc. Because of the multiplicity of noise in a communication link, it is hard to define the frequency range, amplitude and instantaneous phase of noise, and hence equally difficult to reduce its effect on the performance. Electronic systems generate their own noise due to the random contributions from individual electrons. 
A good model of the noise generated in electronic systems is provided by the zero-mean Gaussian probability density function. This means that its amplitude at a particular time has a probability density function given by the equation. The statement that the noise is zero-mean says that, on average, the noise signal takes the value zero. The mean power in the noise signal is equal to the variance of this function. The ratio of signal strength to the noise level is called the signal-to-noise ratio ( If the SNR is high, few errors will occur. However, as the SNR reduces, the noise may cause symbols to be demodulated incorrectly, and errors will occur. For convenience, most practicing engineers assume noise to fall predominantly into the class of Additive White Gaussian Noise ( In our implementation, we use the AWGN Channel block to simulate transmission over a noisy channel. In communications, the AWGN channel model is", "label": 0 }, { "main_document": "some symptoms of mental disease tended to occur together; these groups of symptoms could then be used to develop classifications of mental disorders, and it therefore became necessary to have inclusion and exclusion criteria for each syndrome (Joseph, 2001). This led Freud's concept of 'neurosis' (which he had diagnosed Wolf-Man with having) to eventually be dissolved, as the only commonality this group had was \"an unsubstantiated etiological theory\" (Marshall & Klein, 2003, p.12). The concept of neurosis was replaced by new diagnoses of panic disorder, generalised anxiety disorder, social phobia and post-traumatic stress disorder. New groups of disorders were also created out of symptom clusters previously included in neurosis; these became the somatoform, dissociative, psychosexual, and impulse control disorders (Marshall & Klein, 2003). 
Therefore if Wolf-Man were to receive treatment today, it is probable that his symptoms would be carefully assessed and he could be diagnosed with one or more of these more specific disorders, using the criteria given by the DSM-IV. These diagnostic criteria, which distinguish anxiety disorder sub-types, are now considered necessary because, in opposition to Freud's theory of 'universal mechanisms', recent research looking at biochemical imbalances and genetics has found that different sub-types can have \"different natural histories, epidemiologies, and, especially, pharmacological and psychotherapeutic treatments\" (Stahl, 1997, p.31). It can be seen that Freud's general diagnosis of neurosis and his belief in the panacea of psychoanalysis may not have provided the best treatment for Wolf-Man, as it is now widely accepted that biological factors are important when looking at the causes and treatments of at least some psychological disorders. Recent research using PET and fMRI technologies has found what appear to be metabolic abnormalities in the brains of people with anxiety disorders. These studies suggest that temporal lobe structures play an important part in the elicitation and maintenance of anxiety states. Drug treatments of anxiety disorders can also give clues as to the mechanisms of the disorders; for example, the drug buspirone interacts with serotonin 5HT1A receptors, providing relief from anxiety, which suggests that these receptors are abnormal in people with anxiety disorders (Rosenzweig Therefore if Wolf-Man was indeed suffering with a type of anxiety disorder as Freud's diagnosis suggests, it may be due to these metabolic abnormalities rather than Freud's idea of internal processes. It can also be seen from evidence taken from Freud's case study that Wolf-Man's condition could now be seen to include at least some obsessive-compulsive components. 
As a small boy he carried out compulsive religious rituals, and as a man he suffered from obsessions concerning illness, his appearance and sexual urges (Gardiner, 1973a). Freud believed that these obsessions and compulsions were due to a \"sadistic-anal organisation\" (Freud, 1979, p.360). However, research now suggests that there may be neural mechanisms underlying this type of disorder. Studies such as the one carried out by Saxena and Rauch in 2000 suggest that a circuit among the frontal, striatal and thalamic structures could be the cause of these compulsions and obsessions (as cited in Rosenzweig There is also some evidence that they could be due to an abnormality", "label": 1 }, { "main_document": "establish the time taken for the project, based on activity duration lengths. However, despite its strengths, CPM is by no means a perfect solution, and does not provide a good visual representation of the project. \"Why should there be need for other methods for Project Management to replace or maybe enhance CPM? Self-evidently, CPM frequently does not work\" Sainsbury's recent cancellation of a multi-billion-pound contract with Accenture shows that even companies who devote their profession to project management can get it wrong. This suggests \"either the methods being used for project management or their application or both must be at fault\" Rand, (2000), page 175 Maylor, Harvey, (2003), There are several issues that demonstrate the problems with CPM. All goals are based on estimates, which by definition will contain uncertainties. Activities in network diagrams often display latest start times, and if, for example, cash flow pressures demand that projects start at the latest time, then there are more activities on the critical path. This decreases the likelihood of completing the project on time and may cause the project manager to lose focus. 
A different approach which provides a possible solution to the problems of CPM is the Theory of Constraints (TOC). The crux of this, highlighted by Maylor, is \"A fundamental of this is to manage systems by focusing on the constraint\". The TOC approach identifies a constraint (critical path, critical resources), exploits the constraint, subordinates everything else to the constraint, elevates the constraint and then repeats the process. Alongside this is a shift in estimating techniques which calculates activity times only and emphasises that 50 per cent of activities will finish early and 50 per cent will finish late. Overall, TOC constructs a critical chain (as opposed to a critical path) analogous to \"a relay race - where the runners are lined up ready to receive the baton before continuing with their leg of the race\". The benefits are that early finishes for activities are encouraged and the stigma of a late finish is removed. This also creates a more active role for the project manager. Maylor, (2003), page 142 Maylor, (2003), page 148 Although an esteemed discipline in its own right, Project Management impacts on all organisations, from hospitals to blue-chip companies. One area that incorporates Project Management deeply in its thinking is Information Systems. Many principles of Project Management are transferable to techniques such as the Systems Development Lifecycle (SDLC). Piloting, risk management and the parallel implementation approach are fundamental to the SDLC. With greater demands placed on managers, whether in business, not-for-profit organisations or government institutions, project management is a skill that is quickly becoming a prerequisite for success.", "label": 1 }, { "main_document": "My work involves the synthesis of sugar functionalised polymers. These are made from aldose monomers such as 5-benzyl-5- Reaction monitoring and indeed the first stage in product characterisation is done by Thin Layer Chromatography (TLC). 
Any new material formed in a reaction will have a specific R TLC will show any major by-products of the reaction and their relative polarity, in essence, how 'pure' the intended reaction product is. The method is inexpensive and provides results quickly. Many of the reactions conducted require extractions with several solvents to isolate the product. When the reaction is novel, TLC and If TLC shows a clear partition, and the simple proton spectra allow us to identify the species expected in each solvent, whether it be starting material, product, or by-product, this is normally sufficient and efforts are concentrated on purification of the product. The desired product is usually purified by column chromatography. TLC is used initially to determine the species' progress through the column, and then Proton spectra with automatic moisture suppressant treatment are useful at this stage as removal of solvent from these monomers is difficult due to their amphiphilic nature [1]. Exact assignment of spectral peaks in the In order to characterise species such as these, the tetra-acetate derivative This spreads the pyranoid ring proton resonances over a greater range (3.0 - 5.8 ppm). The Their size provides evidence for the trans-diaxial conformation of protons found only in the pyranose ring form. The 10 Hz is indicative of the For example, C-5 of the barbiturate holds no protons but it resonates at ca. 58 ppm, characteristic of the 5,5-disubstituted barbiturate [2]. A great amount of previous work has established NMR spectroscopy as the primary tool for product characterisation in this field [1-5]. The lactosyl derivative of The apparently high yielding reaction showed only the intended product by NMR. This example highlights the inability of NMR to detect non-protic impurities and the care that must be taken when interpreting spectra. Next, low resolution EI Mass Spectra (MS) are taken of the products. 
After confirming the mass of the molecular ion, we can assess whether the fragmentation pattern is as predicted. For example, Mass spectrometry as a standalone method has limitations too: a mixture of unreacted barbituric acid NMR would, however, be able to differentiate between the samples as the two methyl groups in the unreacted acid are equivalent whereas in So only in conjunction with NMR data can this technique be used to elucidate structure. Elemental analysis (EA), in this work specifically CHN analysis, is a good indicator of purity of a sample with a known structure. Unfortunately, compounds such as Measurement of melting point can be another measure of purity. Melting points for species similar to The additional problem of the ability of For publication in the leading journals in this field, In the near future, and after thorough drying, CHN analysis will be carried out on Also, given that the tetra-acetate derivative In summary, NMR is the primary tool used for product characterisation, with MS and EA providing supplementary information. Infra-red Spectroscopy", "label": 1 }, { "main_document": "and their ecological interactions. With the accumulation of reliable knowledge on the role of naturally occurring Biological Control agents, governments and researchers should develop and promote strategies for pest control and take the lead in educating farmers. Owing to the rich species diversity in plant communities in tropical South East Asia, it is expected that there would be high potential for the use of allelopathic plants for weed control. There has been an increasing research focus on the investigation of allelopathic properties of weeds in rice fields in recent years. Allelopathy is defined as the phenomenon of one plant species suppressing the growth of another through the production and release of toxic substances called 'allelochemicals' (IRRI, 2006d). 
One such example is a study by Hong Ten species were selected as candidate weeds for Biological Control agents based on a past field survey of 49 higher plant species found in the country. All ten weed species significantly reduced the number of weeds and the total dry weight of weeds. Although the extent of the effects varied between weed species, the rice yield was increased in all weed species treatments. Two potential positive effects of adding allelopathic weeds are the reduction of weeds by allelochemicals and nutrient addition by those plants, both of which indirectly promoted the growth of rice. However, there are some disadvantages revealed by this experiment. Firstly, the weed-suppression effects tended to be short-term, probably due to the decomposition of the allelochemicals over time, which allowed the re-emergence of weeds. Another concern is the application dose of dried plant material: two tonnes per hectare were applied in this research, which would be very labour-intensive. Some candidate plants have been traditionally used for pharmaceutical purposes and are edible; hence the incorporation of those plants into rice fields may be beneficial to farmers. However, information on the adverse effects of candidate plants on the soil or other agronomic plants is limited, and other aspects which affect rice production, such as the occurrence of disease or pests caused by these plants, need careful examination prior to application in practice. Efficacy across application times and doses also requires further investigation (Hong Research on weed management in Southeast Asia has not been given much attention compared to pest control, for which IPM has been developed and practiced. This is partly due to the great diversity of weed species in tropical rice fields, which makes significant suppression of the growth of weeds by an effective and practical method difficult. Therefore, the quick solution has been the use of agrochemicals (IRRI, 2006d). 
Allelopathy is a relatively innovative concept for weed control in rice fields; though early in its development stage for practical use, it holds some potential with further research. In addition, the outcomes of many field surveys support the importance of appropriate land and seed management, such as preparation of the land by ploughing, puddling and levelling to prevent weed germination, collection of clean seeds to avoid contamination with weed seeds, and water management (Baki & Azmi, 1992, Moody,", "label": 0 }, { "main_document": "A CT scan was performed and a left-sided CVA was diagnosed. After discharge, 2 weeks later, Initially this only occurred when she went on long walks; it has become progressively worse and now occurs more frequently. There have been no problems with her reading. One of There are no known allergies. No significant family history. From the history it appears that Dysphasia is an acquired loss of production or comprehension of spoken and/or written language secondary to brain damage. There are two main types of dysphasia - Expressive or Receptive. Expressive dysphasia is characterised by non-fluent speech produced with effort and frustration. Signs to look for on examination include malformed words and impaired reading and writing. Comprehension is intact; patients understand questions and attempt to convey meaningful answers. There may be coexisting right arm and facial weakness. It is caused by a lesion in the posterior part of the dominant third frontal gyrus (Broca's area). Receptive dysphasia is characterised by empty, fluent speech. The patient is oblivious of any errors; reading, writing and comprehension are impaired. It is caused by a lesion in the posterior part of the first temporal gyrus (Wernicke's area). The examination will therefore need to differentiate between the two types of dysphasia. The motor system will also need to be examined for any evidence of coexisting weakness. 
Dysphasia may develop as a result of vascular, neoplastic, traumatic, infective or degenerative disease of the cerebrum. The sudden onset of Vascular lesions also tend to improve with time, which does not fit with Cardiovascular examination will therefore be important, especially blood pressure, arrhythmias and carotid bruits, which may all indicate risk factors for stroke. The recurrent episodes of dysphasia could be due to a vascular cause such as transient ischaemic attacks, which temporarily reduce the blood supply to the region of the brain controlling language. There has been no history of head trauma to indicate this as a relevant cause of the dysphasia. The episodes have increased in frequency since A space-occupying lesion will lead to raised intracranial pressure, depending on the size of the lesion. Signs of raised intracranial pressure include nausea and vomiting, headaches (often worse in the morning) and papilloedema, none of which If the lesion is a neoplasm, the speed of onset of symptoms suggests a malignant rather than a benign neoplasm. Intracranial tumours are classified according to their tissue of origin, with the most common group of primary brain tumours being gliomas. This group contains astrocytomas, glioblastoma multiforme, oligodendrogliomas and ependymomas. They commonly occur in the frontal, parietal or temporal lobe. Likely diagnosis at this stage would point to either a neoplasm in Another possible diagnosis could be recurrent TIAs due to e.g. carotid artery stenosis or valvular dysfunction as However from the history, Dysphasia was apparent from taking the history. On examination She was alert and comfortable at rest. There was no evidence of walking aids. No resting tremor or involuntary movements. 
GCS 15/15 PEARL Focused examination (relevant to patients symptoms) Naming - Comprehension - Repetition - Writing - Cranial Nerve", "label": 1 }, { "main_document": "which ' An epistemically possible world-state is a way that the universe can be conceived as being, it is weaker than the metaphysically possible world-state and as such has a wider scope. Soames We can still conceive of world-states that have properties that contradict ones found in this world, Soames does not think that 'heat=molecular motion', or indeed any identity statement, is an instance of the necessary a posteriori as Kripke does, he believes that it is knowable a priori I.e. if 'heat = molecular motion' is true in this world then it is true across all possible worlds; it is a member of the set of maximally complete properties that are instantiated in the universe. To relate this to the question, if we assume that 'pain = C-fibre stimulation' is true then is there a metaphysically possible world-state in which its negation is true? If there is then its truth is not necessary. As we are also assuming that 'pain' and 'C-fibre stimulation' are both rigid designators and as all such identity statements are necessary if true Consequently we can assert that it is metaphysically possible that there is a physically identical world in which there is no consciousness. Assuming, that is, in accordance with Kripke's argument, as he asserts: '[B]eing an identity between two rigid designators, would be necessary' Where x is an identity statements involving two rigid designators; '(x is true) Kripke asserts that 'heat = molecular motion' is a true identity statement because there is no possible world in which its negation is true. Kripke claims that there is an 'illusion of contingency' that makes it look as though the negation of identity statements is metaphysically possible. This occurs when the determiner of the designators coincides for both sides of the identity statement. 
In the case of heat and molecular motion, the referent is, originally, fixed by the sensation of heat When we feel It is, of course, contingent that when we feel This is, however, to say that '( It is not to say that 'heat = molecular motion' is metaphysically possibly false, which is what is needed in order to prove its alleged contingency. Kripke then goes on to argue that there is not a similar explanation for the illusion of contingency for 'pain = C-fibre stimulation'. Fundamentally, this is because we originally determined 'heat' by the sensation it creates in us; there is no analogous argument for 'pain', for pain simply is the sensation of pain. There is no situation in which one could be in pain without feeling pain. The identity theorist must account for the illusion of contingency of 'pain = C-fibre stimulation', else we must dismiss it as false, as it would not be 'scientific identification of the usual sort' Maxwell (1978) argues, in response to Kripke, that the illusion of contingency can be explained through his distinction of causal networks and causal structures. There could be a metaphysically possible world in which the causal structure between mental and physical events is different yet the network is the same The mind-brain identity", "label": 1 }, { "main_document": "terms of social benefits to Wales the Rugby World Cup didn't entirely fail, although the outcome of this effect was lower than originally expected. Ouillon (1999, as cited in Jones (2001)) believed that over 90% of the residents in Cardiff were in favour of hosting the Rugby World Cup and that the residents saw the opportunity as a benefit to international acknowledgment, increasing tourism in the future and enhancing Cardiff's reputation. However, Jones (2001) goes on to list the problems the residents failed to take into consideration when they approved proceeding with the bid for such an event. 
Those who lived near and adjacent to where the Millennium Stadium was being built faced road closures, construction noise and dust, as well as work being carried out 24 hours a day. Other residents faced the proposed creation of a provisional red-light district in their area in order to reduce prostitution in a large number of residential areas (namely those situated in close proximity to the Millennium Stadium). The local residents of Cardiff, and the residents of Wales in general, found it increasingly difficult to obtain tickets for the rugby matches, as 50% of the tickets were distributed outside the U.K. to international tour operators and to the corporate sponsors. This therefore created a lack of enthusiasm amongst the Welsh residents and reduced their social benefits. However, not everything in Cardiff was negative for the residents. The development of the Millennium Stadium led to further expansion within the city centre: the train and bus stations were redeveloped, along with pedestrian improvements to the city centre and redevelopment of the river walk areas situated next to the Millennium Stadium. Although there were notable social disadvantages for the residents of Cardiff, the majority of these were short-term effects, and the advantages that Cardiff has received from its redevelopment are more beneficial to the city than being able to obtain tickets for a rugby match, leaving the city with long-term beneficial social results. The Millennium Stadium that Cardiff has gained has a larger capacity than the original, and the facilities now situated within the stadium are of a higher standard than the previous ones. Higham (1999) raised the question of how beneficial new developments will be in the future, and whether in the long term the buildings will receive enough utilisation to make the construction of the facilities worthwhile and economically beneficial.
When the Millennium Stadium was constructed, the designers developed and created as many useful outlets for the stadium as possible. It can now be used not only for sporting events but also for concerts and corporate events (Jones, 2001), resulting in a fully utilisable development, which greatly enhances the previous facilities available. In 1984 the Olympics held in Los Angeles were seen not so much as a sporting event but more as an opportunity to optimise profits and maximise recognition for the area (Crockett, 1994). It was a chance for the local governments to draw more", "label": 1 }, { "main_document": "justice\" and having participated \"in anti-nuclear demonstrations\". They were then asked to rank eight statements about Linda according to probability. 85% of the time, the statement \"Linda is a bank teller and is active in the feminist movement\" (T&F) was ranked more probable than \"Linda is a bank teller\" (T). This shows that judgement was made based on the degree of similarity between the descriptive paragraph and the statements about Linda. However, according to rational reasoning, the probability of T&F cannot be higher than that of T because a conjunction can never be more probable than one of its constituents. When this was pointed out to the participants, most of them seemed to accept this logic. Therefore, there must be two mechanisms of equal psychological force which lead to opposing answers. An associative heuristic picks out T&F as being more likely while a chain of reasoning reveals T as more probable (Sloman, 1996). This is just as apparent in examples from daily life. People frequently come to one conclusion using one method of reasoning only to recognise that another rationale yields a more valid answer. For example, judges are required to ignore what they personally believe to be fair and instead abide by legislation when sentencing offenders. 
Nevertheless, this is only applicable as evidence for two reasoning systems if the first conclusion remains convincing even if the second response is recognised as more rational. Consider the Muller-Lyer illusion. Despite realising and understanding that the lines are of the same length, one line is still perceived as longer than the other. Perception remains unchanged by knowledge thus making it likely that these are two different systems simultaneously at work. Although perception and knowledge generally correspond rather than contradict, it does not mean there is only one system. Likewise, the derivation of a single response in reasoning does not refute dual process theories (Sloman, 1996). More substantiation for two systems of reasoning comes from an experiment showing that the extent to which people transfer unfamiliar characteristics between two categories depends on how similar the categories are. Sloman (1993) asked participants to rate how convincing an argument was given that \"All birds have an ulnar artery\". The results were that participants found the argument \"Therefore all robins have an ulnar artery\" more convincing than \"Therefore all penguins have an ulnar artery\". This violates theoretic knowledge because penguins are just as much birds as robins are. Sloman (1993) explains that the participants' judgements were influenced by the degree of feature overlap. Penguins do not share enough features with the normative view of birds to be thought to possess a characteristic that all birds have. As with the \"Linda Problem\" that was mentioned earlier, participants later accepted that logic dictates that the strength of both arguments should be equal. However, a number of them still believed their first response to be plausible. This suggests that participants had two answers in mind (Sloman, 1996). Sloman's experiment supports the basis for a reasoning system based on similarity and verifies that a theoretic system functions alongside it. 
In other words, there must be", "label": 1 }, { "main_document": "Kelly Hurley, Bram Stoker's Mina Harker embodies all the good womanly virtues, while Lucy Westenra is a voluptuous blood-lusting animal. However, upon closer inspection, Lucy, though vivacious, is well respected and virginal when we are first introduced to her. She transforms from angel to beast in just a few paragraphs. Mina, although an ideal wife for Jonathan Harker, is portrayed as engaging with modern technologies, making her resourceful yet unconventional. Many critics, therefore, have formed the view that Stoker's representations of female sexuality is inconclusive. The question is, why? According to Carol Senf, Stoker's treatment stems from his \"ambivalent reaction to a topical phenomenon - the New Woman.\" Carol A. Senf, \"Dracula: Stoker's Response to the New Woman\", The New Woman emerged as a topic of controversy around 1894. She challenged the Victorian norms, encouraging women to search for freedom from domesticity. Some women advocated ambitions beyond motherhood and the right to sexual freedom and information about contraception and venereal diseases. Many represented the New Woman negatively, believing that they were threatening society and challenging gender roles. Sally Ledger and Roger Luckhurst eds., Stoker seems to agree. The first women who appear in the novel are the female vampires who attract and repulse Jonathan Harker. The women appear to Harker at first as beautiful and \"voluptuous,\" but he is soon shocked as they reveal their animalistic tendencies. Bram Stoker, Harker becomes passive, allowing the female vampires to articulate the gender reversals which seemed so threatening at the time. Further on in the chapter, Harker hears a \"low wail, as of a half smothered child,\" They are the perverse versions of the New Woman, portraying the anxiety that maternity was at stake. Ibid., p.39 Indeed, the female vampires seem to embody fears about the New Woman. 
But was this Stoker's main aim? Gender reversal is a common theme throughout the book, one we can also see in Dracula's quest to claim not only the women in the novel, but the men also. Some critics see only Dracula's goal to possess and repress the female characters. Bram Dijkstra explains why: Bram Dijkstra, Is Dijkstra underestimating Stoker's abilities to veil homosexual intentions? Dracula scorns the female vampires when he finds them seducing Harker and cries out, \"(t)his man belongs to me!\" Dracula utters a similar phrase towards the end of the novel, when he is taunting the Crew of Light. \"Your girls that you all love are mine already; and through them you and others shall yet be mine.\" This suggests that Dracula's ultimate desire is to possess the men, who in turn make it easier for Dracula to access them, by pumping Lucy full of their blood. She receives four blood transfusions, where the blood of the males are in effect mixed together. Lucy is already more receptive to Dracula's bite as she is sexual and vivacious so the men's blood will eventually filter through Dracula's veins. This questions the function of female sexuality in the novel. Do females act as mere vessels to hold the blood of men which will eventually be sucked", "label": 1 }, { "main_document": "ability to replicate in the human macrophage, despite this not serving as its natural host. It is convenient for the organism that the pathway used, and virulence factors expressed in amoebae, work incredibly well upon infection in alveolar monocytes. Firstly This allows the phagosome to express the ideal conditions for converting to its replicative form. Upon completion of conversion, loss of virulence traits blocking membrane fusion and lysosome fusion occurrence enables it to take advantage of a second niche for replication. 
Expression of an autophagy-like process facilitates host cells lyses by the high production of pores, and The surveillance of the intracellular environment to react to pH, iron and amino acid levels etc, enable", "label": 1 }, { "main_document": "weak front or that it didn't rain at Durlston Head. By the morning of Sunday 29 This can be seen in the charts for 1200UTC Sunday (figure 4, A1) and for 0000UTC Monday 30 Note, figure 4 is a forecast MSL pressure and thickness chart while the other figures shown are analysis charts. A large anticyclone can also be seen building over the entire region. This is associated with a stable air mass undergoing large scale sinking, which generally produces fine settled weather. The system is also causing the front to pivot clockwise off a point over Cornwall rather than following a conventional path. The situation is validated by the walk data, as seen in table 2 (A2). The pressure had risen noticeably since Saturday (seen in figure 9 (A2)) as would be expected in an anticyclone. Note, this pressure was measured with a digital watch, so the readings may not be as accurate as with specialist equipment. There are also errors in the altitude estimates used to calculate mean sea level pressure. The wind had also changed direction from South West to North East and its maximum speed during the morning was 3kt. This may be explained by the wind being drawn around the centre of the high pressure area over the UK, which would produce slack northerly winds (figure 4, A1). Note, local effects could have affected wind measurements as some readings were taken in more sheltered areas than others. The temperature was comparable to Saturday's measurements even though the region was in the cooler air mass behind the cold front. This is because there was much less cloud so more solar radiation could reach the surface. 
There was some stratus cloud reported, but this was at a higher level; cloud temperatures were approximately 8 A layer of cirrostratus was also reported, as a halo was seen around the sun. The visibility had increased to over 20km as the Isle of Wight could be seen, an indication that the air mass originated over the ocean. Two radiosondes were launched on Sunday 29 This shows a layer cirrus cloud above approximately 400hPa (7200ft) but little lower cloud, corresponding well with the discussed observations. The profile is also exceptionally stable which would be expected in an anticyclone. As can be seen in the analysis charts for Sunday (figures 4 and 5, A1), the cold front which passed through the region is almost stationary over the English channel and by 0000UTC Monday 30 As cold fronts slope backwards with height, the front can be seen on figure 17 as the sonde passed through the 2 air masses: A closer view of the tephigram is shown below, compared to one showing a typical cold front: More quantitatively, the front can be seen in figure 17 by a jump in wet bulb potential temperature. This occurred between 600hPa and 700hPa, suggesting the sonde rose into a different air mass at this point (figure 19). The profile for an anticyclone is one of stable large scale sinking. As the atmosphere would sink approximately adiabatically, the mixing", "label": 1 }, { "main_document": "delight in nature, consolation in religion [or] the sense of purpose in social charity.\" Emma writes to L However, she gives into temptation and, as a married woman, betrays Charles with L Fairlie, A., Ibid., p.42 Ibid., p.41 Emma is so besotted with her \"ideal\" life that when Charles takes her to the Op It could be argued that this is somewhat selfish of Emma - to expect her life to be fairytale and merely because it is not how she had dreamed, betray Charles. 
This neglect, however, extends much further than her husband and the very little presence of Berthe given by Flaubert mirrors how little she means to Emma. Just as with her husband Emma rejects her child just because she spoils the stereotyped picture of ideal maternity Emma has read about - Berthe looks ugly and is sick over her collar and she had wished for a son - someone who could be independent and leave a place if they wanted to or were unhappy as L Emma substitutes her lack of fulfilment with buying things from Monsieur L'heureux and falls into heavy debt emphasising that one of the dangers of living such a life can leave a vacuum in a person's existence and the desperation of the individual to fill this abyss can lead to yet more problems. Ibid., p.45 Emma belittles Charles and has a certain arrogance when at the ball of the Marquis d'Andervilliers. She pretends to belong to a higher social class and does not want Charles to dance: \" On se moquerait de toi, reste \" Rodolphe seduces Emma \" il se tenait les bras crois \" This is what Emma has always wanted and does not notice that Rodolphe has no intention of leaving with her. Flaubert, G., op. cit., p.51 Ibid., pp. 150-151 One could argue, however, that Emma's difficulties should be taken into account and that these are partly why she takes the path she does. Emma is oppressed by Charles and her mother-in-law and is forced to have lessons from her as Madame Bovary senior \" \" The world in which Emma lives is a completely male dominated one and Emma finds her only options to be adultery or suicide; L Ibid., p.44 Fairlie, A., op. cit., p.41 It could be contested that Emma finds escape in the only means put before her and should not settle for a mundane existence and that if she had not dreamed she would have suffered eternal Although Emma commits adultery she does maintain some honour, namely refusing to sleep with Guillauman even though she is desperate. 
As a reader we sympathise greatly and with Emma and the happy events which do not involve betrayal such as dancing with the Viscount and her first talk with L However, Flaubert shows that through pursuing these dreams happiness will never be ultimately achieved and \"the more intensely she [Emma] strives, the more her different experiences follow the same inevitable sequence: longing, apparent achievement, then the sudden or slow sense of emptiness, monotony, disintegration, to be followed", "label": 1 }, { "main_document": "ownership\" (Lockyer & Scholarios, 2004, pp 126). \"Best practice\" overlooks the differences between customized and standardized service and disregards the different roles recruitment staff have in branded properties and independent ones. Further, the ease with which employees are made redundant discourages the enhancement of recruitment & selection methods (McGunnigle & Jameson, 2000) Lockyer & Scholarios (2004) concluded from their study that while managers recognized the increased validity of formal methods of assessment (e.g. work samples, ability tests) and regarded them as positive selection methods, these were not used much. Despite the importance attached to these efficient methods, managers still preferred methods with low validity, inexistent predictive ability and objectivity, such as application forms, interviews, references, recommendations and personal knowledge (Lockyer & Scholarios, 2004). This preference may relate to the high costs of formal methods and Torrington et al (2002) question the validity and predictive ability of some tests, where the correlation coefficient is not so accurate. Also, the criteria used to define performance standards, in tests, is subjective. Tests are very job-related, so if the job description changes, the tests can no longer be used. 
Consequently, Lucas (2004) claims tests are most effective when used for jobs requiring expertise, hence inapplicable to hospitality & tourism where secondary and weak labour markets prevail. Lucas' (2004) perspective is supported by Gonzalez & Tacorante (2004) who conclude from a study they performed that \"best practice\" is not applied equally to all employees. It varies according to the skills needed for each position. Where high skills are required, companies use psychometric tests, references, recruitment consultants, more interviews and recruiters and the process generally takes longer. However, Gonzalez & Tacorante (2004) add that for positions where \"good practice\" is not applied, other HR practices are used to recruit & select staff. This approach suggests a RBV where \"bundles\" of practices are used to determine \"employee fit\". Jameson (2000) and Price (1994) emphasize \"best practice\" challenges by highlighting the link between business size and the degree of \"good practice\". Because of hospitality & tourism's nature, small firms rely on informal and inconsistent recruitment methods (e.g. word-of-mouth, local press), whilst larger firms are starting to take a more holistic approach towards \"best practice\", including formal processes, explicit standards and training and development opportunities (Jameson, 2000). Therefore, Newell & Shackleton (2001) suggest using improved interviews alongside tests. Measuring all candidates on the same criteria, and asking similar questions (Newell & Shackleton, 2001) as well as, defining the appropriate setting, establishing a reciprocal conversation, structuring time and using attentive body language, add validity and consistency to interviews (Torrington et al, 2001). In conclusion, it is unlikely for \"best practice\" to be effectively implemented in every hospitality & tourism organization (Lockyer & Scholarios, 2004). 
The characteristics of the industry (Goldsmith et al) as well as external forces (McGunnigle & Jameson, 2000) impede the implementation of one best strategy. However, there is growing evidence of signs of \"good practice\" in larger organizations (Jameson, 2000; Price, 1994). Nevertheless, it appears that introducing systematic HRM practices in an industry characterized by low skills and informal", "label": 0 }, { "main_document": "the enemy and they show great delight at successfully shooting them. These attitudes continue all the way up arguably until Karl dies. B Another example is found in the form of Greck who yearns for a go on the \"Schiffschaukel\" Both B One representation specific to B Alan Bance agrees. Bressen is one character to use as an example for this. When he sees the picture of the shepherd he remembers \"da Heinrich B (Munich, 1972), p.20 Bernhard Wicki, Heinrich B (Munich, 1972), p.63 Heinrich B (Munich, 1972), p.64 Heinrich B (Munich, 1972), p.96 Heinrich B (Munich, 1972), p.46 Heinrich B (Munich, 1972), p.46 Alan Bance, Heinrich B (Munich, 1972), p.17 Given the fact that both pieces of work portray a horrific situation (the Second World War) one would expect to find reference to the horror and the inhumanity of war. Indeed the examples of this are many. Both B these deaths are numerous and the human loss, particularly as affects Frau Susan, is devastating. Perhaps the most horrific death is that of Ilona who is shot by Filkskeit, a man who has spared other Jews and was thought not to be able to kill. C.W Churye puts this down to envy as Ilona possess both what he desires most (\"beauty, stature, religious faith, and a perfect voice\" Alan Bance says that \"it goes without saying that war is evil\" The inhumane aspect in the book comes arguably in the form of Filkskeit's selection basis for who should be killed and who should not, Charlotte W. Ghurye, Charlotte W. 
Ghurye, Alan Bance, This inhumanity of war is represented somewhat differently in Owing to their ridiculous enthusiasm the viewer is provoked into wondering why the boys hold such attitudes. Although this contemplation would perhaps come after the film, Wicki makes us think of the extent of the indoctrinated evil (probably, in this case, of the Hitler Youth or, ironically, the same English teacher who tried to persuade the Captain to spare their lives) for the boys to have such enthusiasm for war. When they hear that they will be called up Even when two of them have already died and they are given the chance to surrender by the Americans they do not give up. This, however, is not a direct link towards Nazism and as Peter Zander explains: This is a different representation to Wo warst du, Adam? as people address and leave each other with the exclamation \"Heil Hitler!\" and there is a felling in the book of people upholding and staying loyal to the Nazi criteria for how one should behave. Greck is one such example, as well as worrying whether he has given the correct Nazi salute or not, he is also concerned after having sold his trousers to a Jew. Peter Zander, Both oeuvres portray the role of women in wartime in much the same way, that is to say perfectly fitting with the In Wo warst du, Adam? this is done primarily through the representation of Frau Susan who cooks and looks after the soldiers staying with", "label": 1 }, { "main_document": "hand thinking of students and their education the reference groups and role of families as informal communication channels must not be ignored either. Visits to SP organised by the schools and teachers as incentive tours will have a great significance in visitor numbers arriving to the Parliament. 
Marketing managers should build strong and long-haul relationships with schools of all range (primary, secondary, further and higher institutions) and create a visitor database and a mailing list and regularly update members with the latest programs and events happening at the place in order to attract them all year round and thus create a regular clientele. This relationship would work most successfully if there was a meeting organized between the two parties on a regular basis in order to meet each other's needs. This is how the Holyrood Project Adviser (HPA) and Visitor and Outreach Services Manager (VOSM) could work together with professors from various schools; both on a national and international basis. They do work together on general matters like 'MSPs [Member of the Scottish Parliament] in Schools Project' (SP, 2005) also issues about needs and rights of children and young people on a general social basis. These are so called Cross-Party Groups taking care of general issues rather than matters connected with young people and students who are visiting the SP in order to learn about its running itself and maybe they are calling with the idea or goal to work there in the future. The SP is in touch with colleges, universities, students' associations, adult and community education. Education Outreach Office organises the sessions at an appropriate level where students can get an idea how the SP works and also how can they engage and take part in its work. According to the model of Middleton one must take into account all the consumer characteristics and reasons for motivation, which form the core component of the model, in order to understand the needs and behaviour of the targeted segment. There are four main components (Middleton and Clarke, 2001, p. 77) which establish consumer's action by stimulating motivation or control buying decision: 'needs, wants and goals, socioeconomic and demographic characteristics, psychographic attributes and attitudes'. 
Motivation arises when one's , 2002, p. 94) and the tension reduced. The higher the tension the greater the motivation for fulfilling the need. Whether they are 'utilitarian' or 'hedonic' needs the desired result is the Needs, wants and goals might be quite similar within the sector examined as a group of students in a sense that they are willing to learn and thus achieve a good career. Demographic attributes influence motivation and consumer behaviour. In case of students it is a perfect stage of the family life cycle for travelling or going for short breaks as they do not have any family commitment yet, they have relatively large amount of free time, summer holidays and weekends; all giving them a great opportunity for travelling. Moreover visiting the SP might be incorporated in their studies which is considerable by VOSM when promoting offers. Psychographic characteristics are dependent on the", "label": 0 }, { "main_document": "Commercializing a new invention or a new technology, never invented before, requires a deep understanding of drivers which lead to a successful commercialization. Such factors can be complex (e.g. the transfer of technology, the team building processes, the project evaluation). Several theoretical frameworks of commercialization potential facilitate the evaluation of an innovation. In this paper, we have chosen to critically discuss one of those theoretical frameworks called WIN2 (Udell 2002). That discussion will be based on a new technology or invention, which is not yet in the market. However, we deliberately exclude arts, paintings and fashion. WIN2 develops an evaluation criterion which follows 5 generic steps that we will review individually: The issue that a new invention faces is...its novelty. For example, along the path to commercialization, nanotechnology's biggest liability is its novelty (Mazzola 2003). Thus, Win2 requires that the impact on society is studied thoroughly as a first step. 
This impact considers people's welfare, laws and regulations, the safety consequences of the innovation's misuse and the environment aspect. This is assuming we can assess with exactitude the consequences on the health of people before the product is launched. Often, new drugs show side effects not spotted during the preliminary tests, e.g. Indeed, the effects vary from person to person and it is difficult to know the entire spectrum of these effects. Should the product development of a very useful innovation be stopped because of current laws and regulations? Most of the areas where biotech is finding application are heavily regulated. Unfortunately, the regulatory agencies are serving as bottlenecks for the commercialization of technologies (Fildes 1990). We argue that the regulations and laws are made by people for people; Obviously, it might take time to change laws but that is not a reason to inhibit new inventions. We must however separate the possibility of abusing the innovation from the faculty of using it, assuming we know all forms of normal usage. An electronic toy which can easily be dislocated in tiny pieces by a baby is dangerous to be commercialized. Yet, it is difficult to assess to which extent we can define the pass/fail test for abusing the invention. Quite all drugs, though excellent to cure diseases when used properly, can be dangerous when abused. Is that a reason to stop the commercialization? The answer is open to debate and we argue that Win2 do not offer clear criteria to find the line of separation, in order to fail the test. The analysis of this issue is close to the first point where we examine people's welfare. By definition, we do not have enough experience on the new technology or invention itself. It is therefore completely Since the nuclear energy was invented (discovered) in 1951 (French Nuclear Energy Society), no researcher can announce with certainty that s/he has complete knowledge of the impact on the environment. 
Chemical research shows that knowledge can be acquired much later than the commercialization. Evaluating the impact on society as suggested by Win2 is a critical way to assess the risks of commercializing a new invention or", "label": 0 }, { "main_document": "individuals will enable complementary morphometric measurements and description as well as the setting of a holotype specimen. Further investigations will look for interspecific differences in pelage (often their face masks), penile morphology, hand pad size and placement, vocalisation and chromosomal set-up (i.e. karyotype). Further, the study aims to create a database for the family Galagidae. The database will contain information on the most recent nomenclature, morphological characters (external and internal), distribution, behaviour, ecology and karyotypes available for the taxa. The database will be publicly accessible through the Internet and its intent purpose is to serve as a comprehensive information source for any future comparative studies of Galagidae taxonomy. The issue of this project is the lack of scientific names for populations of primates identified as new species. One taxa has a suggested name based on its current lower taxonomic categorisation, 2, G. sp. nov. 3 and . The most eminent threat that any of these suggested new species is under, resides to It has been recognized as one of the \"World's 25 Most Endangered Primates\" by IUCN Primate Specialist Group. G. sp. nov. Reason - no name no face. The protection endorsed to these primates comes from recommendations from IUCN by recognizing their habitat in western and central Africa as one of the world richest in biodiversity. Special attention is given to The Eastern Arc Mountain-region hosts the most recent discovery of a new monkey, the highland mangabey 2005). With further recognition of, at least two, new species of galagos within the same area, the focus on that particular 'biodiversity hotspot' will be considerable. 
These species will be a living evidence, flagship species, raising public awareness for a region of the world that need all the attention it can get. More attention is likely to revenue in more income for the 'host countries' which will act in favour for the conservation concept and encourage the creation of more 'biodiversity hotspots'. The past few decades has drastically changed the earlier perception of nocturnal primates as quite simple, solitary, forest dwelling creatures of the dark to very complex creatures with very elaborate behavioural and social attributes. Nocturnal primates rely more on vocal and olfactory cues than vision for social communication. In other words, nocturnal primates differ from their diurnal relatives in that voice cues are used for conspecific recognition rather than visual cues. Evolution has favoured variation in voice rather than face, so to speak, which makes the nocturnal world harder for us diurnal primates to understand. This might seem quite intuitively self evident but the resistance amongst diurnal primate researchers has been surprisingly resilient. Once specific vocalisation was established as a fact, the use of calls as species identifier has revealed a diversity of almost unprecedented magnitude. So many species are discovered that the knowledge of their existence can prevail for years without being given a scientific name. A situation that hardly exists amongst primates that is diurnal, as us. This study aims to set the naming of nocturnal primates in the first room in an attempt to let taxonomy catch up with", "label": 0 }, { "main_document": "less influential. The restricted model is: The unrestricted model is: F-test value Therefore, we fail to reject Refer to the Appendix to find out the detail for the F-test. 
The steps to formulate the model are: Using correlation matrix Refer to Appendix to find out Correlation Matrix under Q1 Using two-tailed t-test Due to Therefore, I decided to remain only one of them instead of two. With a t-value of 0.35 and 0.70 respectively, TOPB and ATTL are dropped. Refer to the Appendix to find out the statistic table for the regression. The reason for us to use one-tailed test is that in Q1, we have found that they are both negatively correlated, so basically, we could exclude the possibility that they are positively correlated with QTMARK. Next, considering all the answers to previous questions, I decided to include the variable ATTR even though it did not show a significant correlation with QTMARK in our correlation matrix. Its significance on QTMARK is already tested in Q2b). Moreover as seen also in Q2b), variable HRSQT have significant coefficient and outstanding t-statistics. Hence both variables are included. From the regression, I could see that ATTR and HRSQT both have an outstanding t-value of 3.12, 1.70 respectively. Furthermore, the R-squared value has increase from 0.255 to 0.273 which means the new model could explain more variation in QTMARK. This further confirmed the inclusion of these two variables as explanatory variables in the model. Refer to the Appendix to find out the statistic table for the regression. Based on the conclusion I have got from Q4 and Q5, I Therefore, I need to add dummy variables D02 and D03 to the final model. Based on my comparison in Q1 between different groups Refer to the Appendix to find out the Comparison between different sub-groups. Refer to the Appendix to find out the statistic table for the regression. From the regression result DMPASS will still be included as I believe This is consistent with my finding in Q1, that s Moreover, the coefficient of DMPASS is 1.57, among the highest coefficients. 
In addition, Due to collinearity between DMAPSS and DMATHA, I also expect the coefficient of DMAPSS to increase as I drop DMATHA. Refer to the Appendix to find out the statistic table for the regression. Regress QTMARK on the selected variables now I am now satisfied with the variables I have considered for the model: ABILITY, ALEVELSA, ATTC, EXPALC, ATTR, HRSQT, D02, D03, DCOURSE and DMPASS. Refer to the Appendix to find out the statistic table for the regression. The The coefficient of DMAPSS also increases, as expected, from 1.57 to 2.12. I think we may better specify the model by considering these variables but in different functional forms. I begin with taking the logs of all the quantitative variables (not dummy variables) R-square has decreased from 0.3462 to 0.2739, which means log function of quantitative variables could not explain more variations in QTMARK. Refer to the Appendix to find out the statistic table for the regression. Now I try two different functional forms, one is", "label": 0 }, { "main_document": "Both Malinowski and Radcliffe-Brown used the basic premise of 'functionalism', but each utilised it in a very different way (Leach, 1966, p6). Both of these scholars begin their theories on the basis that a culture or a society \" An excellent comparison of the two is made by Leach (1966), relating the two to time pieces \" The focus of Malinowski's theory of Functionalism was focused on the individual, which opposes that of Radcliffe-Brown's, \" This was a factor for which Radcliffe-Brown criticised Malinowski. On comment of his work the Mailu peoples in New Guinea, concerning kinship Radcliffe-Brown quotes \" This precise focus on the individual is essentially brought to light during the latter stage of Malinowski's life. He set out to summarise his perspective, and subsequently present a theory that encapsulated all his work. Firth (1968) presents the primary starting points of Malinowski's theorising. 
First, he wanted to make any mode of human behaviour understandable in relation to the motivation of individuals. Secondly, he wanted to be able to include both rational and irrational behaviour on the part of the same person. Thirdly, there was the appreciation of the interconnectedness of the different factors which constituted a culture; and finally, the reference of a particular item to some kind of function in the current situation of a culture, in order to form a basis of understanding. Malinowski presented in " He proposed a theory of 'vital cultures', which he describes as "the biological foundations incorporated into all cultures" (Barnard, 2003, p68). Eleven of these sequences exist, each containing an impulse, an act, and a satisfaction. An example quoted by Barnard (2003) is that of sleep, with the impulse of somnolence being associated with the act of sleep, resulting in the satisfaction of waking feeling restored. Malinowski's second claim was that the basis of his approach "was a set of seven biological needs and their respective cultural responses" (Barnard, 2003, p68). One example comes from his monograph Argonauts of the Western Pacific (1978), where he discusses gardening. The Trobriand islanders have a garden magician, who consecrates any gardening site and, among other rites, magically assists the plants in sprouting, budding and producing food. From this Malinowski concludes that magic is based on a psychological need of the individual, which is culturally evolved. Consequently, Malinowski believed that cultural institutions are responses to a variety of specific biological needs. It could be argued that Malinowski makes considerable assumptions in propounding this theory. As stated previously, one of the main differences in Radcliffe-Brown's work is his overriding focus upon society and structure, as opposed to Malinowski, who at no point in his career attempted structural analysis.
Radcliffe-Brown lacked " The main aim of Radcliffe-Brown was to create a social anthropology that was generalising and thus a science. He was a strong advocate of the organic analogy, viewing society as a biological organism, whereby each organ has a function in keeping that organism alive. The same, he argued, was true of society, whereby each institution's function was there in order to
Any impact the text makes today is therefore likely to come from another aspect of Barker's writing. Nixon, Rob 'An Interview with Pat Barker' in 1 (2004) pg 2 Bochenski, M This could be the fact that Union Street revolves around working-class women as central characters. It is not unusual to find texts which centre on 'angry young men' from the working class, for example Osborne's Haywood comments 'it is this separate sphere of essentially feminine ordeal and trial which mediates elements of class consciousness." Barker enters the heads of these women, and finds that their stories are moving tales of strength, as they persevere through hardships often unrecognised by traditional histories. Hanson's suggestion that the reader is not meant to consider the situations themselves may not be so far-fetched if we understand it to mean that we are not to despair at these women's lives, but marvel at how they make it through them at all. Haywood, Ian In this sense we can recognise some optimism in a text where there initially appears to be little hope. In each story, bleak as it may be, there is often some light. The final line of Kelly Brown's story, preceded by a horrific rape and a painful description of her attempts to deal with it by rejecting her femininity, is 'she was going home' This suggests that perhaps she can return to society, a reappropriation of the family.
Although Lisa Godard initially rejects her third child, 'There was nothing about this baby she recognised as hers' (113), the
"Communities Reject Coca-Cola in India". India Resource Center. Accessed at: Goodwin, Barbara. Pg. 308. This explanation may be over-simplistic, however: it unrealistically assumes that actors possess perfect knowledge, knowing exactly where their interests lie in the power game. Furthermore, in measuring A's power over B in terms of the decisions that A makes at B's expense, Dahl's theory does not weigh these decisions' importance. Peter Bachrach and Morton Baratz additionally criticize the pluralist approach for insufficiently differentiating power over They also argue that the possession of instruments of power (such as Coca-Cola's significant funds in the above example) is merely Most importantly, these scholars move beyond Dahl's definition of power as decision-making, claiming instead that individuals exert power primarily by In this view, A has power over B if it can "limit the scope of the political process to public consideration of only those issues which are comparatively innocuous to A" This theory, dubbed the 'second face' of political power, finds support in the Camp David Peace Accords, for instance, where the United States obtained Israeli-Egyptian consent to demilitarization of the Sinai Peninsula by formulating 'yesable propositions', in which the discussion of highly disputed issues was deliberately minimized Bachrach and Baratz's theory also points to Agenda-setting and non-decisions, both overlooked by Dahl's 'first face of power', are therefore clearly important indications of political power. Bachrach, Peter and Baratz, Morton. Pg. 4. Goodwin,
Accordingly, several committees have called for reforms in line with the International Standards, the Right to Strike and Protection of the Individual, regarding the problems stated above: Participation in a strike should not be a breach of contract by the workers concerned and should not be grounds for dismissal. The protection against dismissal should apply for the duration of the dispute (not be confined to 12 weeks). There should be protection against dismissal for workers taking unofficial industrial action. Anyone dismissed for or during strike action should be entitled to return to work at the end of the strike. (IER, 2004) The right to strike is one of the essential means available to workers and their organisations for the promotion and protection of their economic and social interests. Therefore, the restrictions relating to the objectives of a strike and to the methods used should be sufficiently reasonable so as not to place excessive limitations on the exercise of the right to strike. Secondary action, also called 'sympathy' or 'solidarity' action, is 'industrial action by workers whose employer is not a party to the trade dispute to which the action relates' (DTI, 2005: 31). Wherever there is more than one employer in dispute with his workers, the dispute between each employer and his workers has to be treated as a separate dispute. As in the Gate Gourmet case, the conflict between GG and its employees is 'primary' action, while the walkout taken by the employees of BA is secondary action. In the 1990 Employment Act, immunity for all secondary action was removed. Hence it is now illegal to call for, organise or take secondary action. This major change appears to make it virtually impossible for workers and unions lawfully to engage in any form of boycott activity against parties not directly involved in a given dispute. The law effectively prevents workers and unions from taking action against the real employer hidden behind them.
Taking advantage of this restriction, employers are able to avoid the adverse effects of disputes by tactically transferring work to associated employers or restructuring their businesses 'in order to make primary action secondary' (IER, 2004: 20). British Airways, in this respect, is a good illustration. BA used to make all its in-flight meals itself until it sold the business to Gate Gourmet in 1997, so historically the GG workers were part of BA's workforce. This irresponsible contracting-out, aimed solely at cutting labour costs, was bound to stir up strife. Moreover, BA had been attempting to impose massive cuts on the contractor, GG, which pushed GG further to the edge of a survival crisis. As a result, GG had to introduce staff reductions as one of its strategies to remain competitive, which finally led to the drastic dispute. Thus it can be seen that BA is actually the real employer who has
This was illustrated by the Indiana Democratic county chairman VG Coplen, who told Roosevelt's 1932 and 1936 campaign manager James Farley, "use these Democratic projects to make votes for the Democratic Party." Roosevelt did make some effort to offer blacks equal opportunities; he desegregated federal buildings in Washington and "appointed some blacks to respectable offices" It is possible that this was only due to his wife's persistence, however. Eleanor was a strong campaigner for minority rights, despite the views of her husband, who once described Italian Americans as "a bunch of opera singers" Regardless of this, however, Roosevelt still only passed one significant concession to black protestors: the 1941 Executive Order against discrimination in military industry. Even this he was forced into against his will following A. Philip Randolph's March on Washington Movement. Furthermore, the desegregation of the armed forces was not granted until three years after Roosevelt's death, and it has been argued that "wartime shortages alone might have forced employers to drop all the racial barriers" Despite often being portrayed as the "champion of the underdog," therefore, Roosevelt's personal contribution to the civil rights movement during the 1930s was poor. Blum, John Morton, Chappell, David, review of Sullivan, Patricia, "Days of Hope: Race and Democracy in the New Deal Era" in 1) Powell, Jim, (Cato Institute online) This situation was worsened by the system of implementing New Deal policies, which resulted in a clear North/South divide.
Many of the policies were administered by the states, giving Southern states' governments enough power to ensure that more Federal aid "went to rural areas than to cities, and to whites rather than blacks." Furthermore, Southern states, where the nation's poorest citizens lived, received "less New Deal spending than comparatively richer states." It has been suggested that this was because the Southern states had dependable Democratic voters, whom Roosevelt did not need to focus on to win the next election. The bulk of New Deal spending went to Western and Eastern states where previous election returns had been relatively close. This suggests that Roosevelt was more interested in winning elections than helping the blacks, despite the negative result it may have had on minorities. "Assess the long term consequences of the New Deal." Root, Damon, The Executive is not the only branch of Government able to influence civil rights, however. The Judiciary had
Perhaps this is ironic, as the very fact that such a group is able to see quite radical reforms through Parliament suggests that there is a high level of democracy and political freedom in the British governmental system, and thus undermines Charter88's initial objection. Nevertheless, the group can claim to be an influential part of an interesting time in British political history and can rest assured that, despite the occasional bill, their aims are being slowly embraced by both the public and the government. In attempting to research the workings of Charter88, I found it difficult to acquire any information regarding funding. Though it is not their legal obligation to disclose their accounts, these figures would have been of use when judging the effectiveness of the group in relation to campaign costs, administration and other expenses. As well as this, there was a distinct lack of up-to-date information regarding the group's achievements, giving the impression that they have withdrawn their attempts to influence government since the 2005 General Election. The methodological problem of comparing the effectiveness of a pressure group such as Charter88 (with broad and to some extent unreachable aims) against that of a group with narrower, more immediately attainable aims, such as the aforementioned Snowdrop Campaign, proved difficult, did not produce significantly interesting results, and was therefore omitted from this report.
The hypersurface must be built from regular three-dimensional solids, of which there are only five types, so most are easy to eliminate, making the work a lot simpler. The six vertices form a typical octahedron the bracketed pairs here being opposite, non-connected vertices of the octahedron. To complete the task we find the centre of the octahedron as the mean of any of these opposite pairs. We then normalise it to unit length so that it lies on Joining pairs of points by line segments according to the adjacency information contained in the network This three-dimensional projection may be displayed in a Maple worksheet, as shown. This beautiful object is called the 24-Cell [S2001]. We regard SO(4) as the group of rotations of four-dimensional Euclidean space. This includes rotations such as those of the 24-Cell. We may consider some quaternion pairs as rotations in four-dimensional Euclidean space. Take two unit quaternions There exists a relationship between SO(4) and SO(3) x SO(3). More accurately, there is an isomorphism between the projective groups Take a rotation We know that Theorem 5. Eigenvalues of orthogonal matrices have unit modulus. Proof: Let Ax = λx for an eigenvalue λ of the orthogonal matrix A, with eigenvector x. Take the complex conjugate transpose, so that x*A* = λ̄x*. We know that A is real, so A* = A^T. Then x*A^T A x = λ̄λ x*x = |λ|² x*x; since A is orthogonal, A^T A = I, so x*x = |λ|² x*x. Since x is an eigenvector it is nonzero, and as such x*x ≠ 0, hence |λ|² = 1. Since the eigenvalues of We will take these vectors to be an orthonormal set. We see that We also see With respect to the orthonormal basis {e1, e2, e3, e4} the matrix form is now We may interpret this as a product of commuting rotations, Take any unit quaternions r, Then we may regard the mapping Let us consider the case Then we get the rotation Similarly, for Finally, the rotation A is given by We will discover an expression for the quaternions r, s without considering a change of basis.
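The action of a pair of unit quaternions on four-dimensional space described here can be checked numerically. The sketch below (the helper names qmul and qconj are my own) implements the Hamilton product and verifies that the map x ↦ r x s̄ preserves the Euclidean norm of R⁴, i.e. is orthogonal, for unit quaternions r, s.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(a):
    """Quaternion conjugate: negate the vector part."""
    return np.array([a[0], -a[1], -a[2], -a[3]])

rng = np.random.default_rng(2)
r = rng.normal(size=4); r /= np.linalg.norm(r)   # two random unit quaternions
s = rng.normal(size=4); s /= np.linalg.norm(s)

x = rng.normal(size=4)                           # an arbitrary point of R^4
y = qmul(qmul(r, x), qconj(s))                   # the map x -> r x s-bar
print(np.linalg.norm(x), np.linalg.norm(y))      # the two norms agree
```

Since |r x s̄| = |r||x||s| = |x|, the map is an isometry fixing the origin, consistent with regarding quaternion pairs as elements of SO(4).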
Let us consider the quaternions corresponding to a rotation through We suppose that a non-zero quaternion q is fixed under the mapping We split the quaternions into scalar and vector parts by writing Then So we see The scalar and vector parts of this equation may be considered as: with We then (scalar) multiply equation (2) by x and (1) gives Hence So equations (1), (2) become Let the orthonormal quaternions Then: So from (3a), (3b) for some To both sides of (4a) we take the scalar product by Substitute back into (4a), (4b) and use (5) to find Then Thus The quaternions we need are and The rotation clearly occurs in the orthogonal two-space with q1,q2 held fixed, The orthogonality conditions or are satisfied by all We are thus able to move", "label": 0 }, { "main_document": "from them) - full employment (reinforcing the overall EU employment rate targets set at the 2000 Lisbon Summit - 70% for the labour force as a whole by 2010, 60% for women and 50% for older workers (aged 55-65)), quality and productivity at work and social cohesion and inclusion. As Watt (2004:125) underpins 'the importance of involving all actors, of taking account of gender issues, and of a focus on unemployment and inactivity, not just employment, is emphasized at the outset'. It is clear that new agendas, prompted by demographic (ageing European population), gender (more and more women entering the workforce and demanding equal treatment) and social inclusion matters have been brought about. In this sense, the European employment and social policy and, more specifically the European Employment Strategy acknowledges the especially problematic employment rates for three categories of workers in Europe - women workers, elderly workers and young workers. Employment for these groups remains an issue as the European Commission 'Employment in Europe 2005' report confirms. 
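The special case derived here, a rotation fixing a two-space and rotating its orthogonal complement, can also be verified numerically. A minimal sketch, assuming r = cos(θ/2) + u sin(θ/2) with u a unit vector quaternion (here u = k): conjugation q ↦ r q r̄ fixes 1 and u, and rotates the orthogonal plane through θ.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

theta = 0.7                                       # rotation angle
u = np.array([0.0, 0.0, 0.0, 1.0])                # fixed unit vector quaternion (here k)
r = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])  # r = cos(t/2) + u sin(t/2)

def rot(q):
    # conjugation by the unit quaternion r: q -> r q r-bar
    return qmul(qmul(r, q), qconj(r))

one = np.array([1.0, 0.0, 0.0, 0.0])
i = np.array([0.0, 1.0, 0.0, 0.0])
print(rot(one))   # the scalar 1 is fixed
print(rot(u))     # the axis u is fixed
print(rot(i))     # i is rotated by theta towards j
```

The plane spanned by {1, u} is held pointwise fixed, while i is carried to cos(θ) i + sin(θ) j, exactly the behaviour of the rotation constructed in the text.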
It points out that in 2004 employment rates for these groups of employees remained below their Lisbon and Stockholm targets, with levels for female and older workers 4 and 9 percentage points below their 2010 targets respectively (an employment rate of 55.7% for women in EU member states in 2004, and of 41.0% for older workers). As far as young workers are concerned, the report notes that, at 18.7%, youth unemployment in the EU is approximately twice the overall unemployment rate for the region. These ongoing trends have prompted the special focus of the EES on these three categories of workers, and purposeful measures have been introduced to address their employability problems. By examining the contents of the EES in more detail, this essay will attempt to assess the measures provided for each of the three mentioned groups of workers and will try to make a tentative evaluation and prognosis as to which of these types of workers the Strategy will have the greatest impact on. In doing so, it will try to suggest that measures aimed at encouraging female participation in the labour market in Europe have so far had relatively greater success than those supporting elderly or young workers, due to several factors discussed further in the text. It will draw on data from the EES itself, as well as various evaluation papers (official, as well as representing individual positions), created after the first years of its implementation. It is important to note that this essay acknowledges the notion that the different EU member countries have had different starting points in the implementation of the policies and guidelines of the EES.
It will, nevertheless, look for the existence of a common understanding of how things should ultimately look under the joint policy, and will assess the common approach and its potential to succeed, rather than dwell on difficulties or inefficiencies brought about by individual countries' setbacks in the process. Despite the suggestion that Gon Long-term measures, embodied in the EES, have supported and emulated
Figure 4 represented a graph of the dynamic behaviour of the cantilever. From the graph, the time needed to stabilise the output of the system after it was disturbed was about 0.5 seconds, and the resonant frequency of the cantilever system is 24 Hz. Eq. 1 can be used to derive the strain produced at the strain gauges in terms of the force F, as the parameters had been measured. According to Eq. 3, therefore Combining Eq. 1 and Eq. 3, the maximum loading force allowed for this system can be obtained. From Figure 1, the linear range of the force sensor system is 0 to 2.95 N, and the gradient of the linear line is 11.33, which means the sensitivity of the force sensor system is 11.33. Combining Eqs. 1, 2 and 4, an overall equation for output voltage against loading force can be derived, which is According to Figure 1, the sensitivity of the measurement system depends on the output voltage and the loading force, which is Eq. 5 shows that If the ambient temperature increases, the resistance of the two strain gauges increases, but the change in resistance over the strain gauge resistance does not change. From Eq. 4, as The measured values of output voltage are within about This is reasonable agreement. Furthermore, the errors show a tendency to increase with increasing loading force. The reason is that the electromagnetic interference becomes more and more serious as the loading force rises. This experiment yields the following conclusions: The overall equation of output voltage against the loading force for the force sensor
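The sensitivity determination described above amounts to fitting a straight line to the calibration points and reading off the gradient. A minimal sketch with synthetic data: the actual readings are in Table 1, and these values merely assume the stated gradient of 11.33 over the reported linear range.

```python
import numpy as np

# Hypothetical calibration points spanning the linear range reported (0 to 2.95 N);
# the voltages are synthetic, generated around the stated gradient of 11.33
force = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 2.95])           # loading force, N
rng = np.random.default_rng(3)
v_out = 11.33 * force + rng.normal(scale=0.05, size=force.size)  # amplifier output, V

# Least-squares straight line: the gradient is the sensitivity of the system
slope, intercept = np.polyfit(force, v_out, 1)
print(f"sensitivity = {slope:.2f} V/N, zero offset = {intercept:.3f} V")
```

A non-zero intercept after zeroing would indicate drift or offset error in the amplifier, which is why the system is zeroed before loading.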
As Ignatieff pointed out, "Modern universalism is built upon the experience of a new kind of crime: the crime against humanity". Global warming, ozone depletion and transnational terrorism are real problems that cannot be dealt with by states in isolation and require a rethinking of the means and goals of co-operation. Not only are there duties to intervene beyond borders, but such collective actions are actually required by globalised world politics, since something that happens in one place affects other places too. Michael Ignatieff, Concepts of Community and Humanity after the Cold War", p. 175. Nevertheless, promoting duties to humanity is sometimes accused of Westernisation. Whose morality is promoted globally? The humanitarian discourse of our day has established some rules and norms that are taken for granted. Is this morally correct? But who decides on universal truths? In assessing the moral complexity of international relations, Fiona Robertsone-Snape states that "certain values, although absolutely contingent, ought to be upheld, not because they are "true" or "natural" or even "morally superior", but because they are the best so far developed." More bluntly, Held declared that democracy is "the only grand or meta-narrative" Fiona Robertsone-Snape, "Moral Complexity and the International Society", 4, 2000, p. 521. David Held, "Democracy and the International Order", in Daniele Archibugi and David Held. (eds.) David Held, "Globalisation, Cosmopolitanism and Democracy: an Interview", by Montserrat Guibernau, Global Transformation website, Democracy seems to be the best form of government so far developed, since it establishes checks and balances that protect the people from any kind of tyranny, even their own tyranny. However, Linklater emphasised that humanity is not a given community and that people should avoid establishing moral hierarchies.
Moral progress should be a result of dialogue and consent in a wider framework with the participation of all stakeholders. Linklater, Archibugi expressed his sorrow that the interventions after 1989 would actually be "remembered in the black book of military history rather than in the pink book of humanitarian altruism". See Daniele Archibugi, "Terrorism and Cosmopolitanism", Social Science Research Council website, Archibugi, "Cosmopolitan Democracy", and Archibugi, "Principles of Cosmopolitan Democracy", in Daniele Archibugi, David Held, Martin Kohler, (eds.) Finally, the cosmopolitan democracy project may seem wishful thinking, since there is no global 'demos'. "Cosmopolitan democracy is an ambitious project, whose aim is to achieve a world order based on the rule of law and democracy". Democracy should be envisaged for all these levels as distinct yet complementary. Moreover, citizens must be politically represented in global affairs, as there are global issues that might affect them. The idea of cosmopolitan democracy represents "a normatively desirable political future rather than simply an intriguing empirical possibility". For more see Anthony McGrew, "Transnational Democracy: Theories and Prospects", Daniele Archibugi, "Principles of Cosmopolitan
Some became peasant and tenant farmers, and those who were lucky were finally able to rise through the social ranks. For example, they moved into the retail sector, became artisans and merchants; some even became white-collar workers such as lawyers and doctors, and a very few were able to gain considerable wealth and aspired to become as esteemed as the traditional white colonial elites. However, despite this fresh economic activity, Guyana remained profoundly reliant on sugar plantations to fuel the majority of her economy. This meant that even before emancipation, plantation owners began to search for other sources of labour. The solution presented itself in the form of immigration from India, China and Portugal. The reasons these people had for immigrating varied from economic hardship to religious or political persecution. A majority of these immigrants (particularly the darker-skinned Chinese and the Indians) filled the roles of the ex-slaves as indentured servants on the plantations. Largely, this meant that they experienced similar living and working conditions to the ex-slaves. Yet, generally, the Portuguese remained outside the plantation economy, preferring to focus their efforts on retail trade. Crucially, the colonial elites allowed the Portuguese to work outside the plantations due to their European background and their white skin. Elites placed them nearer the top of the 'racial and cultural stratification' Essentially the cultural elites in Guyana sought to compel other cultures to conform to their cultural traditions and accept their view of racial superiority according to skin colour. Kean Gibson, 'The Cycle of Racial Oppression in Guyana,' p. 19 When in Guyana, immigrants realized that their traditional religious practices, living arrangements, their language and nearly every part of their normal daily lives would be challenged by a variety of influences and pressures from other cultures present within Guyana.
Importantly, the majority of immigrants envisaged their stay in Guyana as a temporary affair which would last only as long as it took to amass a fortune with which to return to their home country. Immigrant groups, therefore, generally sought to preserve their traditional cultures, unlike the creoles, who had already accepted their place in Guyana as permanent. Consequently, creoles often sought to mimic the cultural elites in order to increase their social mobility, whereas the immigrants generally attempted to resist the pressures of the elites in order to maintain their cultural heritage. The degree to which different racial groups would be affected would vary from person to person and
Although they raise some public awareness and alleviate specific problems, these efforts are isolated. They should rather be coordinated as part of a broader conservation strategy (The Nature Conservancy, 2005; Mundo Azul, 2005; ProNaturaleza, 2005). Creation of an Autonomous Commission for the Sustainable Development of Paracas Bay: As a strategy for improving their environmental performance, the Camisea Natural Gas project is supporting this Commission, which will \"rehabilitate the risk zones existing in the bay and launch some technological advances to help to preserve the reserve\" (WWF, 2003). Pro Paracas project: Launched by the Regional Government, this initiative is also aimed at preserving Paracas Bay. Its first activities will be to assess the pollution levels in the area, help establish maximum allowable pollution levels and identify those responsible for the problems, before engaging in a process of strategic planning (CNR, 2005). Once again, this body will repeat what other institutions have already done or are doing: water quality assessments are done regularly by the Fisheries Vice-Ministry and by APROPISCO itself; who is responsible for water pollution is already known; maximum allowable pollution levels were already created and then revoked because of the fishing industry's pressure; strategic planning has already been done by the PNR Master Plan. The panorama in Paracas Bay is highly complex. Conservation, human livelihood, commercial and industrial objectives converge, causing conflicts of use that seem irreconcilable. Too many governmental institutions have a say in Paracas' management, and coordination between them is lacking. However, Paracas is a place worth preserving and efforts need to be made, but in an integrated, coordinated, structured way.
The next sections present and describe the objectives and strategies for solving the problems set out above, together with a specific proposal for the rearrangement of the institutional framework and the establishment of an integrated coastal zone management (ICZM) programme. In order to achieve the adequate and sustainable conservation of the Paracas Bay ecosystem, water quality, biodiversity preservation and habitat conservation are of vital importance. Additionally, to preserve and enhance
Promotions such as the Christmas dinners (mentioned in the marketing communications plan) need to be pushed strongly, especially as this is a popular time for students to return home, so the pub needs all the business it can get to avoid a major decrease in profit when students go home. In order to see whether promoting the Christmas menu has been a success, the number of bookings taken could be compared to the number in previous years and then evaluated by the business. People: At present there are no reward schemes or targets set for members of staff, which could obviously lead to a lack of motivation amongst the workforce and could potentially be a reason for the high turnover levels. Therefore the business needs to reward staff when they do a good job or handle awkward customers well and in a correct manner. The business should consider giving pay rises to staff members who have worked there a long time and are loyal and reliable, and introduce a free meal policy for staff when working a double shift. This will encourage staff to stay, which will save the time spent training up new staff members. Also, if staff are rewarded for good work, this will motivate them to do a good job and provide good customer service, which will be reflected in the customers' experience. If customers are pleased with the service they receive they will be far more inclined to return again, so this would provide repeat business, again increasing revenue. If customers were given feedback questionnaires on their service received
we recognise that a word exists that has the meaning we are aiming for). We then go on to determine (or sometimes fail to determine) the form of the word.\" (Field, 2003: 63) But TOT experiences not only support the idea of a dual-stage retrieval process, they also provide researchers with data which reveal something about how this retrieval process actually works and how words seem to be stored in the mental lexicon with respect to meaning and form. When examining the answers \"astrolabe\", \"compass\", \"secant\", \"sextet\" and \"sexton\" for \"sextant\" in the TOT experiment described above more closely, it becomes clear that whereas the first two are words with a similar meaning, as they also describe navigational instruments, the latter three are similar-sounding words. They are all two-syllable words with a similar beginning and ending and, except for \"sextet\", the stress pattern (a stressed first syllable and an unstressed second one) is the same as well. These answers imply that we do not store words independently but that there seem to be many links between them and that \"the organization is closer to a web of interconnecting nodes\" (chapter 5 \"The internal lexicon\" from presentation reading). TOT experiments show that links between words of similar meaning, and especially between coordinates, words on the same level of detail like the different terms for navigational instruments above, are particularly strong. They also support the bath-tub effect hypothesis, which says that beginnings and endings of words as well as the stress pattern are more prominent in storage and that words are identified as similar-sounding if these features are equal or similar (see examples above). According to one study, initial consonants were remembered 51 per cent of the time, final consonants 35 per cent, and guesses about the number of syllables were also correct more than half of the time.
(Aitchison 2003: 140) Perhaps most importantly, the TOT experience displays the parallel activation of words in the mental lexicon during word retrieval. When searching for a word in a TOT state, many other words which are linked to the target word because they share the same outline features, either with respect to meaning or with respect to form, are activated in the speaker's mind. This seems to happen automatically and unconsciously, and often the speaker then either rejects them or uses them to describe the word he or she is searching for, by explaining for example that the required word sounds similar to \"sexton\". Thus, the TOT experience provides evidence for the spreading activation theory. In that theory, massive parallel activation takes place in both the semantic-syntactic component containing the lemma and the phonological component containing word forms,
545. The rhyme scheme of the middle stanza differs from that of the first and final stanzas as the pattern changes. As the final syllable in each line is stressed it creates a greater impact and a more resounding effect, encouraging awareness of the seriousness of the poem. Behn uses mainly imperfect rhyme, 'god' and 'blood', and even eye rhyme, 'prove' and 'love'. The poet could be using these fairly unsatisfying techniques to signify the complexity and irregularity of emotions and deep feelings, the chaos of love. The lack of perfect rhyme also perhaps represents her indecisiveness about which lover to choose and the unorganised thoughts in her mind. Throughout this piece of literature can be seen a specific lexical cluster containing such words as 'heart', 'love', 'passion', 'flow', 'blood' and 'Cupid', working to conjure up a strong image in the reader's mind. This collection of words appears to link to the natural world: 'flow' evokes an image of a river, and 'blood', a vital part of the human body, reminds the reader of the naturalness of love and emotions. The particular image of 'Cupid' acts as a metaphor for the speaker's own decision on which lover to choose and relates to the traditional views on love as this Roman god is brought into the picture. Language is a key feature in the 'song' and many types are used for dramatic effect. The archaic language used, 'wilt thou', ''twixt', creates a sense of tradition and importance, enhancing the significance of the poem's meaning and the gravity of 'love'. Behn also occasionally uses onomatopoeia in the poem, such as the repeated word 'sigh', to enable the reader to obtain a clearer insight into the speaker's mind and create a sharper picture through the detail of sound. The 'fever in' her 'blood' is personified and becomes 'restless'. This emphasises the torturous quality of her feelings and enhances the physical passion she feels in the situation.
The idea of possessive love is conveyed powerfully, it seems, by the use of the first person and the consistent use of 'my', especially whilst referring to 'Damon' and 'Alexis'. She loves both and cannot live with one
In contrast to this, non-cyclic photophosphorylation (H Much controversy over the real value of n exists. n is often referred to as the P/2e- ratio or P/O ratio (i.e. the number of ATP molecules synthesised per pair of electrons transferred, or per oxygen atom equivalent evolved). The value of n, though, must be greater than 1.5 if non-cyclic photophosphorylation alone is to provide enough ATP per molecule of NADPH to satisfy the demands of the Calvin cycle, which consumes three ATP and two NADPH per molecule of CO2 fixed (an ATP/NADPH ratio of 1.5). Otherwise cyclic phosphorylation would be required to make up the deficit that a situation of n less than 1.5 would create. The aim of the experiment was firstly to demonstrate the Hill reaction, and secondly to calculate a value for the P/2e- ratio and to investigate the effect that uncoupling agents such as ammonium chloride and a decreased chlorophyll concentration have on photosynthetic control and the P/2e- ratio. The method was adhered to as stated in the laboratory manual 'Experiment: Photosynthetic electron transport in isolated chloroplast using the oxygen electrode', with the exception that in part 3 (ii) the amount of chlorophyll in the reaction vessel was halved to that of 75 When the external light from the two LEDs was turned off, the rate of oxygen evolution decreased from 23.41 to 4.81. Upon turning the two external lights back on, the rate of oxygen evolution returned to approximately what it was before the lights were turned off (though 26% higher). Upon addition of DCMU (rate 4) the rate of O2 evolution fell to 8.93. Upon addition of ADP + Pi it was 1.45 c.u./min (rate
Generally, apprenticeships were a stabilising environment for the youth. They received moral, practical and religious teaching from their masters, which also kept them busy and away from crime. Similarly, control of the youth could be attributed to alehouses, theatres, bowling alleys and tennis courts providing social meeting places for the youth, keeping them occupied and away from crime. Although some historians have argued that these social places could have been a double-edged sword and also provided a gathering place to organise riots, there does not seem to be much evidence to support this. So London dealt with the youth successfully, but what about the poor? They were another group who figured prominently in London's population and could have caused serious unrest. London is said to have offered the best and the worst of urban worlds in the sixteenth century: a fabulously wealthy elite living cheek by jowl with a thoroughly destitute minority. London's poor relief system was the most advanced in England. It 'provided for some poor housekeepers and their children not only the statutory weekly doles...also pensions from the guilds and charitable handouts in money and kind from parochial and ward fines and parish fees'. Finlay argues that 'the poor never engineered social uprisings in London', and that 'major cities, such as London, developed institutions precisely to ensure that poverty did not lead to unrest'. London's poverty was contained through numerous strategies. For example, the government aimed to maintain food supplies, so that during times of poor harvests the poor still had food to survive. In 1570 they set up permanent grain reserves. The companies and wards took much responsibility for government initiatives such as grain reserves. They bought and stored the grain and then during scarcity decided how much to distribute to different families.
There were no major grain riots in London during this period; the permanent grain reserves should therefore be seen as a successful venture in maintaining control in London. Rappaport, 'Social Structure and Mobility in sixteenth-century London: Part I', p. 107 Pearl, 'Change and stability in Seventeenth-Century London', p. 4 Finlay, R. and Beier, A. L. (eds) Finlay and Beier (eds) There were numerous other government actions to try to control poverty, such as sending vagrants abroad as soldiers, or as indentured servants to the colonies. A key act was the 1601 Elizabethan Poor Law. The sick, old, infirm and mentally ill, known as the impotent poor, were looked after in poor houses, and the able-bodied poor were sent to workhouses. Those vagrants who refused to work were sent to houses of correction. Also, poor children were given apprenticeships, and thus the opportunity to turn to crime was removed from them. Some hospitals were founded to
In addition, the warm front can be seen in the tephigram, as seen in the figure below: After the front had passed, the region was in an exceptionally warm and moist air mass, which can be seen from the high relative humidity measurements in figure 11 AC and figure 15 AD. This can be observed as a striking feature in the satellite images for Saturday (figure 20, AF). Cirrus clouds are carried along by the jet stream, so a defined edge is seen in the cirrus at the edges of the stream. The strength of the jet stream can be seen in the 500hPa geopotential and pressure charts. Earlier in the week, the jet stream was weak due to the large depression over the UK (figure 21, AF). However, by Saturday the jet had significantly increased in strength and caused the edge to the cirrus (figure 22, AF). During the early hours of Sunday morning, a cold front passed over the region. This is the more northerly front lying across the South coast in figure 3 (A1). It is a trailing cold front from a developed cyclone, which moved in from the West during the previous days. The passage of the front was observed as a sharp dip followed by a rise in pressure, recorded at 0230UTC at Leeson House observatory (figure 15, AD). The temperature also started to fall at approximately this time, suggesting a cooler air mass may have moved into the region (figure 15, AD). 2.5mm of rain also fell at Leeson House, although the time is unknown. This is unlikely to be a local effect as rain was recorded at Bournemouth between 0000 and 0600UTC. A sharp change in wind direction was recorded at Durlston Head at approximately 0230UTC (figure 13, AC); this is consistent with the passage of a cold front. There is also a sharp dip in the potential gradient at this time (figure 10, AC). However, as raindrops are negatively charged, the potential gradient should have dropped below zero if it had rained at Durlston Head.
Therefore this is an indication that it might have been a", "label": 1 }, { "main_document": "define in a If an element is used several times (like the windows), it's define in a However, two side of the postman is needed: profile and face side. So, the both are described in two different groups. Two different postman is made: one which is static, and a second which is walking (the legs are moving). Each one will be used in different case (when the postman walks or not). For some elements (like the arm) which are the same for every character, the definition (described in the postman definition) is used and only the colour is changed. Each element is drawn. Only one mountain is define and three are instanced using several transformation in order to create three \"different\" but similar mountains. The same method is used to create the forest and the flowers (one tree/flower is drawn and several are instanced). The sun is made with a A path defines the path road. Some flowers are placed around the path road. Then, each element can be used with the The SVG file can be separated in two parts. The first part is the definitions parts (which we explain above) and the second part is the one where we instance each element. Each animation concerning the place of an element in the scene (e.g.: x, y, width,...attributes) are made in the second part (so the For example, when the postman is walking, the legs are animated in the first part whereas the postman position (x and y attribute) are animate in the second part. Each animation is defined by duration. However, the beginning of each animation is not defined by a time (for example: So, only the first animation is defined by This method is useful because we can change the duration of one animation (or more) without change all the begin attribute. Effectively, if we change the duration of an animation, every animation which comes after will be moved in the time. 
However, with this method, we don't care about the beginning of the other animations, because they may be launched after the end of the previous one (or another). The different animations: The car motion: a simple The wheels turn at the same time by using an The postman/girl/husband motion: the leg motions (and arm motions) are defined in the first part of the SVG file by an The postman/husband translation is made with an These characters can be hidden if they are in the house. This effect is done with the When the postman (and the husband) walks on the path road, three animations run at the same time: one to move the character, another to move the legs and the last one to scale the character (because the house is far from the road). The heart and eyes animations: these animations are made in the first part of the SVG file, by using The beginning and end gradient: A big rectangle has been created, which becomes transparent (or opaque) at the beginning/end of this animation. I think that some elements are not very detailed. For example, every
He also has no symptoms of significance affecting the cardiovascular, abdominal or respiratory systems. No recent disturbance of either bowel or bladder function. Prior to He reports being unable to relax and is becoming increasingly aggressive, moody and short-tempered with his friends and family, but doesn't describe low mood. He has felt more tired than usual in the month prior to admission and on a number of occasions has felt light-headed, but has never fainted. No asthma, diabetes, cardiovascular disease, TB, rheumatic heart disease, rheumatoid arthritis, stroke, MI or jaundice. No recent surgical history. Fluoxetine 20mg OD - selective serotonin re-uptake inhibitor, for management of depressive illness. He takes no other over-the-counter or alternative medication. No known drug allergies. Both his mother and father have a longstanding history of anxiety and depression. Father has received ECT in the past. Both are alive and otherwise well. Prior to admission he was fully independent and managed all cleaning, cooking and personal care. He works as a garden shop supervisor and experiences a significant degree of associated stress. He has never smoked. Given the presenting complaint of episodes of blankness in an otherwise fit young man, the most likely single cause of In such a case, the history from a witness is crucial for diagnosis as neurological examination is most often normal in epilepsy. Examination may however indicate underlying pathology and a clinical diagnosis, e.g. papilloedema and hemiparesis in a tumour affecting one of the hemispheres. A seizure is a convulsion or transient abnormal event resulting from spontaneous, uncoordinated discharge of cerebral neurones. In a partial complex seizure as proposed here, the Some stereotyping of episode, e.g. arms and legs go stiff each time. Responded to anticonvulsant medication, both sodium valproate and lamotrigine. Some indication of automatism, e.g. lip puckering, staring.
No aura; episodes are sudden in onset. Episodes are brief, often only a few seconds long. Episodes not fully stereotyped - variable length, and he reports varying experiences, i.e. doesn't always phase out of conversations. Unusual to have daily seizures if he has never had them before the age of 30 years. If he is experiencing daily seizures it is unusual not to have experienced a tonic-clonic seizure. Given the number of factors going against a diagnosis of complex partial seizures
315 Not only does this highlight the main problem with MacPherson's thesis, but it also illustrates one of two fundamental factors that this essay has stressed as vital in understanding the Levellers and the franchise. The Levellers were political realists, willing to compromise on specific issues if it meant that there was more chance of the acceptance of their programme as a whole. Throughout their short political life the Levellers had changed their demands in accordance with political developments; the final goal was more important than single issues. The Levellers 'adopted a democratic stance when they believed it was indispensable to their purposes, and equally they abandoned it when it was a hindrance to their achievement'. They came together with shared grievances, but often they did not share the same solutions. The Levellers were thus 'a more heterogeneous party advocating a programme that has not been fully worked out in all its details, but prepared to make a series of compromises to achieve its ends'. Thus any attempt to discover a single principle that guided the Levellers on the issue of the franchise would seem fruitless, for there were a myriad of opinions. Yet certain generalisations can be made. The Levellers were not necessarily all democrats, although it was manhood suffrage that was being discussed at Putney, but they strove to create an egalitarian order. There would be greater opportunity to own property in a Leveller society, and thus people would not be reliant on others for wage-labour or alms. The voting body would be widely extended, significantly more than stated by MacPherson under the non-servant franchise, for there would be fewer servants and beggars, and thus there would be greater political participation, heightened by a decentralised state. Thus it is questionable whether so much emphasis should be placed simply on their proposals for franchise reform.
They proposed an incredibly radical and 'modern' set of ideals that were far more important than one issue, proposing a written constitution that England has not adopted to this day. Davis, 'The Levellers and Democracy', p. 180 Thompson, 'Maximillian Petty and the Putney Debate on the Franchise', p. 63", "label": 1 }, { "main_document": "to the context and situation of the interaction. Ethnography is also concerned with other features such as the genre of the speech event and rules of interaction, which are discussed further on in other types of analysis. Another approach is conversation analysis, which ignores where the data was collected and is therefore seen as a micro-analytic approach. Most advances in conversation analysis were made by Sacks, Schegloff and Jefferson who saw this approach as a first step towards achieving a Turn-taking in the group interaction was fairly disjointed in that all of the members of the group had an opinion on most of the topics raised, leading to a large quantity of overlaps and interruptions. In some situations, these features can be seen as disturbing the flow of conversation but in this particular exchange, they often helped to create good conversational cohesion as people were continually taking over the turn of speech. Consequently, there were few pauses, indicating a continuous flow of dialogue throughout the exercise. Many of the interruptions in this interaction were in fact as a result of large amounts of laughter from the whole group, which at various points caused breaks in speech of up to three seconds. This laughter showed the relaxed atmosphere of the interaction and was often influential in encouraging the flow of speech as it motivated speakers to continue as people were obviously interested in what was being said. 
In some instances, the interruptions to speech actually changed the whole direction of the conversational topic, for example when J responds to a previous utterance by stating 'I know, when I was 18...', N interrupts by stating 'that mum...' and the conversation then flows on from this point. The linguists Zimmerman and West hold the view that interruptions are a violation of turn-taking, as well as being The political interview, however, was entirely different as a result of a completely different conversational goal. Politicians are constantly aware that interviewers are most likely trying to obtain damaging admissions from them which will cause media interest and create friction between the different governmental parties and the public as a whole. By doing this, the interviewer is working to antagonise the interviewee to such an extent that they give away some element of detail that should not be exposed. This interview therefore closely relates to the opinion of Zimmerman and West in that these interruptions are rude and discourteous, as can be seen from the example below: Overlaps are not seen to be present in the political interview, whereas they are found consistently throughout the group interaction. These occur when the new speaker comes in where they think the other speaker is about to finish. This is therefore an unintended error but in most cases is unavoidable, mainly due to a miscalculation of timing, for example: In this instance, it is obvious that N felt that M had finished her turn at the point of saying 'cat' and therefore took over the turn, unaware that M was still to say 'though'. This can instantly be seen as an
testing and maintenance will be located 'physically' above the earlier levels. The aim is to progress through the earlier levels, completing tasks, improving your development team and making important choices about the development process, until eventually, you reach the release - the maintenance level. He who has amassed the greatest number of points then wins the game. The points come from a range of things which are calculated when you finish - the amount of cash you have, the quality of your development team and the speed with which you completed the game. Competition comes in from other development houses, seeking to gather the best programmers for their own projects, rushing to get out competing software and fighting tooth and nail for stakeholders' money. In total there will be eleven different types of tile upon the board, three types of card - programmer, chance and backup, up to six player pieces, six sets of coloured pins, one die and many wads of cash. Following this is a brief description of each of these items that needs explaining: their purpose in the game, alongside any necessary details and available images. As stated above, the board will be split into five different levels, each tiered above the one before. The board will be constructed from the hardboard that Rachael has stated is available. People will navigate the board in a clockwise manner, moving up and down between levels using the up and down tiles. Each level shall be split into a number of tiles. Here is shown the manner in which they shall be laid out: At the beginning of the game, all players should lay their pieces inside the start square. The highest roller will then initiate a clockwise rotation around the board based on their second roll. Following this it has no further significance and can be considered a 'safe' square. When you land on this square you have the choice of moving up a level. Each one of these tiles will have six holes drilled into the top of it.
The first time you move up each level having completed all compulsory tasks for that level, you must collect an amount of cash (to be decided after prototyping) from the bank and place one of your coloured pegs inside one of the holes. When you move on this square you can move down to the same square on the level below. When you land on a bug square, you must wait in that square for three turns, or roll a double to escape. When you land on a crash square you must go down to the When you land on a chance square you can either choose not to do anything, or pick up a chance card. See Chance Cards. Unless you wish to obtain a new", "label": 1 }, { "main_document": "blisters\". Mrs Fosamax 5mg (a bisphosphonate to reduce the rate of bone turnover, used prophylactically with the use of azathioprine) was also prescribed daily. Following the eruption of more blisters, azathioprine was prescribed. Mrs The eruptions of new blisters were monitored at the clinic appointments. Mrs Mrs The disease is self-limiting in the majority of cases and the steroids can usually be stopped after two years. The medication and treatment plan will be shared with Mrs Mrs Steroid-induced side-effects should also be monitored for and the dermatologists contacted if any side-effects present. Having started azathioprine, Mrs Mrs Mrs Mrs Mrs Mrs On the There have been no complications with her bullous pemphigoid treatment; she was discharged home 4 days later. Bullous Pemphigoid (BP) is a chronic, acquired autoimmune subepidermal bullous disease in which autoantibodies are directed against components of the basement membrane zone of the skin, causing blistering of the skin. BP is characterised by the presence of IgG autoantibodies specific to hemidesmosomal BP antigens. BP is the most common blistering disease in the West, with an estimated incidence of 6-7 cases per million population per year (1). BP occurs equally in both sexes and it is usually a disease of the elderly (>70 years). 
BP is a non-scarring blistering disease, with a typical presentation in the flexural regions; however, the disease can be generalised. Mucous membranes are involved in about 50% of patients (2). Tense blisters occur on either erythematous or normal-appearing skin and the blister formation may be preceded by an urticarial or eczematous rash. The degree of itch in each individual varies from none to intense and it can precede the presentation of blisters by weeks, months and occasionally years. Diagnosis of BP is made clinically, histologically and immunohistochemically (using either direct and/or indirect immunofluorescence). Biopsies of a fresh blister show a subepidermal cleft with a mixed dermal inflammatory infiltrate containing numerous eosinophils. IgG and/or C3 deposits are found at the basement membrane zone. The class of immunoglobulin bound to the basement membrane zone distinguishes BP from linear IgA disease. The treatment aim in BP is to suppress the clinical signs sufficiently to make the disease tolerable to an individual patient, i.e. the reduction of blister formation, urticarial lesions and pruritus. The disease is self-limiting and usually remits within 5 years; mortality rates today vary between 6% and 41% (3). The treatment primarily aims to reduce the inflammatory process, i.e. with the use of corticosteroids and antibiotics. Immunosuppressive agents aim to suppress the production of the pathogenic antibodies. BP is a long-term disease and ideally all patients should be followed until they are in complete remission and off all treatment. They should be regularly reviewed to ensure that they are not being continued on higher doses of topical or systemic treatment than are necessary to provide sufficient control of their disease. There is no established optimum treatment for BP and therefore there is no gold standard against which to audit clinical practice.
In an age when our population is living longer, it is inevitable that BP will become", "label": 1 }, { "main_document": "group plc had poor cash flow in 2005 with -15.8m compared to Operating cash flow was One notable reason for the enormous decline in cash flow from operating activities was that the movement on provisions includes Meanwhile, the increase in stock and debtors and the reduction in creditors also led to the decline in cash. (Annual report, 2005, p58) The capital expenditure and financial investment increased by the huge decline in purchases of tangible fixed assets of about Another reason for the decline in cash is that the group moved about In any case, the group now emphasises improving its processes for better cash generation, such as streamlining the procurement process through the FAST track initiative to make it easier for customers and suppliers, providing working capital management courses to staff, and the like. (Annual report, 2005, p18) The general comparison and analysis of the above three financial statements cannot give a real indication of the company's financial performance without ratio analysis for deeper understanding. Ratio analysis is used to interpret performance trends year by year, to compare against competitors' performance, or to benchmark against the industry average. [1] The accounting ratios can be classified into four categories: profitability, liquidity, activity/efficiency and investment/shareholder return. This classification makes for simple calculation and easy presentation and understanding. However, the classifications of ratios vary and depend on different industries and circumstances. The above ratios are the basis of any investigation regardless of the company's type. The calculations in the following sections refer to the VT group plc report from the FAME database. The profitability ratios are used to check how much profit the company makes in a specific period compared with other periods or with competitors.
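The profitability ratios discussed here reduce to simple arithmetic on figures taken from the profit and loss account. The following is a minimal sketch of those calculations; the function names and all numbers are hypothetical illustrations, not figures from the VT Group accounts.

```python
# Illustrative profitability-ratio calculations.
# All figures below are hypothetical, chosen only to show the arithmetic.

def gross_profit_ratio(gross_profit, turnover):
    """Gross profit as a percentage of sales turnover."""
    return 100.0 * gross_profit / turnover

def net_profit_margin(net_profit_before_tax, turnover):
    """Net profit before taxation as a percentage of sales turnover."""
    return 100.0 * net_profit_before_tax / turnover

def mark_up_ratio(gross_profit, cost_of_goods_sold):
    """Gross profit as a percentage of cost of goods sold."""
    return 100.0 * gross_profit / cost_of_goods_sold

def roce(profit_before_interest_and_tax, capital_employed):
    """Return on capital employed: profit before interest
    related to the total finance employed by the company."""
    return 100.0 * profit_before_interest_and_tax / capital_employed

if __name__ == "__main__":
    turnover = 800.0            # hypothetical sales turnover (millions)
    cogs = 600.0                # hypothetical cost of goods sold
    gross = turnover - cogs     # gross profit
    pbit = 60.0                 # hypothetical profit before interest and tax
    capital = 400.0             # hypothetical capital employed
    print(gross_profit_ratio(gross, turnover))  # 25.0
    print(roce(pbit, capital))                  # 15.0
```

Using profit before interest in the ROCE calculation, as the text notes, keeps the cost of borrowing out of the numerator so that companies with different financing structures can be compared on the same basis.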
The data in the profit and loss account cannot be interpreted correctly on their own, because the profit should be related to the size of the business and the amount of capital invested in the company. [5] Here are four ratios to be calculated: These two ratios measure how successful the company has been at trading. The gross profit ratio indicates how much profit the company earned relative to sales turnover, while the profit margin ratio compares the net profit before taxation with sales turnover. It is a good trend if these increase year by year. [5] Another trading ratio, a supplement to the gross profit ratio, is the mark-up ratio, which expresses gross profit as a percentage of cost of goods sold. An increase is favourable, because the cost of sales per unit has been reduced. [1] The ROCE ratio is the best method for evaluating the profitability of a company. It relates the profit earned to the total finance the company employs. Because interest is a cost of borrowing money, it is better to use \"profit before interest\" to calculate the ratio. (FACS, WMG notes, 2005) The ROI and return on shareholders' funds ratios are used to identify shareholders' benefit. They compare the net profit before and after tax with shareholders' funds to determine whether the business is worth investing in and what financial risk it carries. For long term business, they need to keep stable increasing for", "label": 0 }, { "main_document": "providing consulting work appreciate the wider entity. However this increased efficiency and effectiveness comes at a high cost of independence. In 2000, Andersen received US$5.8m in consulting fees compared to US$1.1m for audit services. Clearly, there is a huge incentive for Andersen to remain on good terms with Freddie and, ultimately, audit objectivity is impaired by self-interest. Finally, because so much consulting was being provided, Andersen could be perceived as being too involved with the audit client's management to conduct the audit in an unbiased manner.
Strict guidelines exist regarding ex-employees of the audit client becoming an auditor of the company, as well as former principals/employees of the audit firm joining the audit client. This did not appear to have been upheld as people joked that the Freddie Mac/Andersen relationship was one of a Interestingly, during the scandal's investigation, Andersen claimed that they did not have the expertise or competence to understand the complicated derivatives transactions that were scrutinised, which facilitated the smoothing of profits. The question is, if they were indeed incompetent, how did Andersen staff move to take positions in Freddie's Investment Divisions, especially as vice president? It appears that Andersen was not in a position to objectively evaluate the financial statements and to give an independent opinion on the state of Freddie's financial performance and position. They lacked credibility, and independence was threatened on multiple levels. This scandal is one of a long list of similar scandals, such as Enron and WorldCom, which collectively have damaged the reputation of the auditing profession. The outcome contributed to the further downfall of Andersen, adding to the more than 100 audit-related civil actions pending against the firm. The surge in corporate scandals raised public concerns over issues such as independence and corporate governance in many of the high-profile cases. These issues prevail in the Freddie case and their existence highlighted the need for the SOX and more stringent regulation. SOX formally removed the self-regulatory monopoly in the auditing profession by requiring the rotation of external auditors after a 5-year period. Moreover, auditors are no longer permitted to provide non-audit services to audit clients, and the increasing influence of audit committees and the mandatory inclusion of non-executive directors all further contribute to enhancing independence between the auditor and client.
The SOX attempts to improve the balance of power within corporations by requiring the separation of CEO and chairman positions. Such a distinction aids communication and increases transparency within firms, effectively reducing the chance of fraud occurring. Additionally, by setting clear management roles and functions, directors can be held accountable for their actions. Moreover, stricter rules now exist regarding the disclosure of director remuneration, which is a preventative measure against 'slush-fund-accounting': understating quarterly profits as a precautionary measure to protect manager remuneration. Clearly, the SOX has tackled many of the underlying issues regarding the scandals and appears to have restored some confidence in the auditing profession. Subsequently, it is argued that the effect of this scandal (and others) prompted auditors to be more cautious in taking up risky", "label": 0 }, { "main_document": "tendencies, is the fear of a violent death. The ruler's powers are directly connected to this; his lack of limits, leaving him all the licence of the State of Nature, is precisely because the sovereign's lion must inspire more fear than the collected polecats and foxes of the rest of humankind. Locke also believes, however, that fear is an important element in the running of a society, for it is, to some extent, fear of punishment by the sovereign that acts as a preventative for criminal action, \"Punishing the crime for restraint, and preventing the like offence\" Locke places a limit here too, saying that each transgression should be punished proportionately to the extent that it becomes \"an ill bargain to the Offender\" In the animal kingdom, aside from the examples of ants and bees that Hobbes describes, animals without a political system avoid an early extinction by having aggressive rituals that avoid actual violence.
Man's relative equality does not stop later sovereigns claiming authority by acquisition, as distinct to the institution discussed here. This is because they already have a system of control that allows them to amass a physical majority of man. Hobbes says that any man not within the Social Contract is logically in the State Of Nature and deserves all that he gets. Locke ibid. II, The difference between Hobbes and Locke's attitude to punishment is explained in their differing Laws of Nature. For Hobbes the Laws of Nature are guidelines dictated by reason This is similar to Locke's tenet that one should \"preserve the rest of Mankind\" Both of these Laws come with additional clauses and it is because Hobbes' is that much more aggressive than Locke's that their ideas diverge. In Hobbes' case if peace is unobtainable a man \"may seek, and use, all helps, and advantages of Warre\" whereas Locke merely releases you of your obligation only when your own preservation comes in competition. This difference can be further explained by the two thinker's respective origins for the Laws, Hobbes' being derived from reason and not ever calling upon God as a sole ground for a premise even talking of the State of War resulting from the absence of a \"visible Power\", intimating somewhat blasphemous lack of control for an omnipotent deity. On the other hand, Locke's Laws are more Divine, saying that God appoints a government to be the adjudicator of all things Locke also quotes \"the judicious Hooker\" extensively, who uses men's equality as the \"Foundation of that Obligation to mutual Love\" Hobbes Locke ibid. II, ibid. II, ibid. II, These differing views then explain their final positions, one advocating sovereignty with limits and by consent, the other advocating absolute sovereignty, and total control without the possibility of injustice. 
Hobbes desires stability above all else and, as a result of his more pessimistic view of mankind, feels that the less control is in the hands of the masses, the further from the State of War and the safer we will be. Locke, coming from slightly more settled times, sees unlimited powers as", "label": 1 }, { "main_document": "taking socioeconomic reasons into consideration may blur the moral status of the foetus. The vague guidance provided by the Royal College of Obstetricians and Gynaecologists on this point As indicated in the letter to the police from the Vice President of the Royal College of Obstetricians and Gynaecologists in the case of See also R. Scott, 'The Uncertain Scope of Reproductive Autonomy in Preimplantation Genetic Diagnosis and Selective Abortion' (2005b) 13 291. The uncertainties highlighted in the previous section were raised in the case of [2003] EWHC 3318. The case was a successful application for permission to proceed with judicial review of the police's decision not to investigate doctors who authorized an abortion for bilateral cleft lip and palate at 28 weeks under the disability ground of the Act. Although the issue was left unresolved, the case raises various issues on the legality of an abortion for foetal disability. The Claimant raised a challenge to abortion on grounds of foetal disability first by claiming 'that the foetus at 24 weeks gestation and greater has a right to life pursuant to Article 2 of the European Convention on Human Rights, which is only subject to the competing Article 2 right of the mother' and 'that section 1(1)(d) of the Abortion Act 1967 is incompatible with Articles 2, 3, 8 and 14 of the European Convention on Human Rights' There is no case law directly on this point but it has been suggested that this claim is unlikely to succeed. R.
Scott (2005a), See also the judgment from the European Court of Human Rights, The main thrust of the claim was whether cleft lip and palate constitutes 'serious handicap' within the meaning of section 1(1)(d) of the Abortion Act 1967. Scott is sceptical of this idea, although 'certain solidity' is acknowledged. See R. Scott (2005a), Thirdly, although this was not explicitly contended in the case, an issue raised was whether taking the parents' views into account in considering the definition of 'serious handicap' is consistent with the disability ground of the Abortion Act. See also R. Scott (2005b), The Claimant, Rev. Jepson, issued a press statement (available at Prime responsibility for these abortions rests with the Secretary of State. I will therefore pursue this against the Secretary of State but will not be seeking the prosecution of the doctors concerned. Tragic as it is that this baby's life was ended, I hope this case will prevent further late term abortions.', as cited in R. Scott (2005b), In summary, it has been seen that abortion on grounds of foetal disability is a problematic area which raises difficult moral questions. Aborting a severely disabled foetus as a circumstance on its own is already arguable. What is more difficult is distinguishing which conditions of disability merit termination. Determining the factors that should be taken into consideration in allowing an abortion regarding foetal disability further complicates the matter. The In other words, a closer examination of English abortion law in the context of foetal disability has revealed that the current law fails to address the more", "label": 0 }, { "main_document": "available to her. I felt that this could be in the form of a midwife, breastfeeding counsellor, or breastfeeding manuals that had been tried and tested by women.
I advised Jane that the number for her local midwives could be used for this purpose as an experienced member of staff would be available around the clock to answer any questions, not just breastfeeding difficulties. My own experience in practice revealed that many women believe that the number is for use only in an emergency situation. Other members of the primary care trust would also be available for advice and support in the form of health visitors, who are attached to each local GP's surgery and can be contacted through the normal number for the GP. A number of drop-in centres and Breastfeeding Cafes are available throughout Oxfordshire where mothers are provided with hands-on help from health care professionals experienced in the art of breastfeeding. This is also an excellent opportunity to meet other women who have experienced or are experiencing problems, to develop a peer support network (Dykes, 2003) and for fathers to learn how they can help, as I have found they are always welcome. These centres have largely been introduced and driven by new policies (NICE, 2004). A number of Lay Breastfeeding Counsellors for various support groups are available for contact; these include the local La Leche League, National Childbirth Trust, Association of Breastfeeding Mothers, Breast Intentions and Breastfeeding Network. Other forms of support can include books; however, these can often be expensive and may not be an option for those on a budget. A cheaper alternative is an internet access site run by La Leche League which sells specific breastfeeding advice for A popular option in Kidlington is borrowing breastfeeding videos that show optimum attachment, various methods of attachment for breastfeeding and how to hand-express and use the pumps (we do reassure the women that this will be demonstrated once they have had the baby).
Having discussed previous problems experienced with breastfeeding and provided Jane with all of the above information, I felt it was important to discover whether Jane felt her needs had been met and to hear her opinions on health promotion. At this particular stage of her pregnancy Jane felt she was given relevant information on how to breastfeed that she could understand; this left her better informed about the help and support that is available, what to expect when breastfeeding, and, maybe most importantly, the chance to talk about worries and concerns she may have. Therefore I feel Jane's plan of care was successful at this point, as she felt empowered and increasingly confident about her ability to breastfeed. The success of health promotion in this case could not be properly monitored without follow-up visits being carried out, ideally up to six months after the birth of her baby, as this is the minimum recommended time to breastfeed children (UNICEF, 1999). Health promotion can be very easily influenced if those providing care and those receiving it are both willing and able to carry out the advice that is offered. I found Jane
He was also an innovator, and so does to a certain extent deserve the reputation as ' Ranke's religious, academic, and political background profoundly influenced his 'scientific' philosophy, approach, and methodology. It produced his particular unconscious and conscious subjectivity, which he persistently denied existed in his work. Ranke experienced a religious upbringing in the Lutheran mystic tradition. God, for him, was the guiding hand of history, so his calling was (after a spell as a schoolmaster between 1818 and 1825) to learn God's plan for humanity - to be an historian. As a Lutheran, Ranke knew he could only decipher God's will by understanding history, through the study of historical documents rather than doctrine. He received his academic training in theology, and also in philology (the study of ancient languages and texts) and linguistics at Leipzig. Ranke was a theological historian at heart, who, paradoxically, claimed to use 'science' for his purpose. He was, indeed, not the first figure with such a philosophy. Jean Bodin (1530-96) is one such example. Bodin defended history as the search for 'truth', meaning the determination of God's divine plan, and suggested some undeniably 'scientific' methods. Jean Bodin's He distinguished divine, natural and human history, offered a method for determining which sources to consult (universal to the particular), instructed historians to be critical of past histories for purpose, subjectivity and methods, dwelt on the usefulness of histories to government, and compiled a comprehensive (if neglecting medieval works) bibliography of historians from the Old Testament to recent writers. John H. Arnold, Next to religion as an inherent influence on Ranke's life and works was politics. Born at a defining moment in Prussian history, a time of awesome statecraft, Ranke found politics infiltrating his life from his innermost thoughts to the materiality of his career and wallet.
Ranke loved his country as a romantic and conservative patriot. He shared the Prussian partiality to monarchical rule and authoritarian government. Ranke rejected the Enlightenment philosophy of progress, including liberty and democracy. He was primarily concerned with the political, over economic, social, or cultural. He was an inflexible Eurocentric. A 'scientific' historian would not allow preference for political and European history to eclipse the need to consider other factors ('variables') and unique history of the other continents. Ranke not only had a personal preference for studying the political, he was obliged to produce works on politics due to the", "label": 1 }, { "main_document": "determiner Genitive and Quantifying determiners are not the only determiners which form partitive determiner structures when they combine to pre-modify a noun. Quantifying determiners can combine with Demonstrative determiners (19) and the trend continues as Demonstrative determiners can also combine with Interrogative determiners (20). (19) (20) In (20) the Demonstrative determiner The Interrogative determiner Thus the ways in which central determiners unite to modify a noun is very flexible once they incorporate (21) A demonstrated above, there are constraints to the ways in which combinations of the determiner types above may combine within the same NP. Thus, when Downing and Locke (2002:442) state that the preposition It is possible to conclude that when determiners pre-modify the head of an NP, there are obligatory grammatical rules which mean they do so in a particular way depending on the type of determiner or the type of noun. English grammar handles combinations of determiners by application of grammatical rules like obligatory inclusion of preposition However, the requirement for agreement is applicable to other parts of speech which behave as pre-head modifiers. 
Yet, these parts of speech tend to have more flexibility in terms of what position they can be in when they modify the noun (adjectives), or can indeed be modified themselves (adjectives and nouns) by other pre-head modifiers (adjectives and adverbs). This can be explained by the fact that these various parts of speech each have different functions and roles within NP's. Despite this, on observation of NP's, determiners are usually easy to identify and this is due to the restrictiveness of their position within the NP structure. This is often helpful as it would otherwise be difficult to distinguish a central quantifying determiner like Thus observing the positioning of determiners in the NP makes these things clearer.", "label": 1 }, { "main_document": "effects of imperialism. At the same time military campaigns were continuing, and Brunt has pointed out that \"prolonged absence on campaigns... must have ruined small farmers\" (Brunt 1971: 77), while the Hannibalic War had left the land of others in ruin. Although this may be an exaggerated or even fictional account by Gaius, it must have had some basis in fact if it was to be believable. Therefore we can see that there was a shift in land ownership from poor to rich that resulted from imperialism and needed to be addressed. Cited on Lecture 5: Land handout (25/1/2005) The military demands placed on Rome by imperialism also created a need for land reform. As mentioned above the poorest men were exempt from military service, so the decline of the peasantry left the army with fewer reserves to call upon for future battles. Appian wrote that \"the people became troubled lest they should no longer have allies of the Italian stock\" (Appian To contemporaries this decline in potential soldiers could be seen in the census returns. In 141 327,000 citizens were registered, compared to 318,000 in 135 and 395,000 (Brunt 1971: 70). 
I agree with Brunt's view that, although the data is uncertain and does not give us a precise value for the actual population of Rome, the increase between the 135 and 124 figures can partly be explained by previously unregistered citizens being registered, either because they had gained land from the Gracchan distributions or because they hoped to do so (Brunt 1971: 79). Stockton has argued against this, claiming that the uncertainties of the figures make them unsafe to use, and that the increase in the 124 figure cannot be because of the Gracchan distributions since the 135 figure appears to be unaffected, even though the However, I do not agree with either of these points. First of all, although the figures are uncertain, the increase between 131 and 124 is so large that it cannot be effectively explained by a simple case of uncertainty or increased efficiency of the censors. Secondly, Appian tells us that the work of dividing public and private land was complicated and time-consuming (Appian The increase in registered citizens in the 124 census, therefore, shows us just how many people were not being registered due to a lack of land, thus meaning that agrarian reform in 133 was important since it allowed the Roman state to exploit its military reserves to its full potential. Yet the above reasons do not provide us with any understanding as to why Tiberius Gracchus was so determined to pass his agrarian law. Although there was a need for reform, Tiberius' apparent zeal towards this policy makes the need for agrarian reform appear far greater than I believe it actually was. In my view, it was Tiberius' personal circumstances that caused him to press forward where other politicians would have backed down.
For example, Gaius Laelius had previously attempted agrarian reform, yet like Tiberius he was met with strong opposition and subsequently dropped his policy (Plutarch Early in adult life", "label": 1 }, { "main_document": "that the European Council has influenced Community decision-making beyond its powers: Lenaerts et al; 2005: 386 The implication is that the European Council influences the Community decision-making processes, but it cannot do so within the powers that have been granted to it in the Treaties, and hence formal decisions are taken by the other executive Community Institutions. In support of this view Lenaerts et al. argued that: "in practice [the Council] only gives a legal effect to what the European Council has already decided on the political level." To an extent, the European Council decides the policies of the Council, which then legalises the decision. Nevertheless, the Council has no legal obligation to obey its decisions, but there is a political obligation as the ministerial members are from the same governments. Lenaerts et al; 2005: 387 Lenaerts et al. make a similar observation (Lenaerts et al; 2005: 387) Interestingly, the Commission has suggested that: "[T]he European Council no longer serves just to provide general impetus and arbitrate on specific issues: although the Treaties do not confer on it any explicit decision-making powers, it has become a genuine political decision-maker in practice." This observation identifies decision-making processes in the EU which are not based on the legal powers set out in the Treaties, but on political influences. European Commission; 2002: 234 Furthermore, the Commission gives an example where regulations falling under the competence of the Community Institutions were agreed on by the European Council without the Community procedures, which most certainly falls under the competence of the Commission and the Council. European Commission; 2002: 209.
Moreover, the Code of Conduct for Business Taxation was originally discussed in conclusions of the Economic and Financial Affairs Council of 1.12.1997 Doc. 98/C 2/01, but it was not legally binding (European Commission; 2002: 209). It might be an exaggeration to say that the Commission is trying to understate the importance of such an agreement, but clearly, it can be seen that the European Council interferes with an area that is more likely within the Commission's competence. Thus, there is evidence that the European Council has influence on Community decision-making exceeding the powers defined in the Treaties. Moreover, Tillotson et al. argue that: "[The European Council] is not itself normally engaged in formal Treaty legislative and political co-operation processes." It seems logical that the interference of the European Council in the Community decision-making process is the exception, but that should not devalue the importance of those interferences. Interestingly, Tillotson et al. clearly suggest that the European Council can take part in formal decision-making processes. Tillotson et al; 2003: 94 Adopting a similar view, Bast argues that: "the European Council does not adopt laws... thus its law-making (or, nominally, decision-making) is not subject to the heightened scrutiny." It is crucial to note here that the European Council in fact takes part in the legislative process, but it cannot be held accountable or reviewed. Bast; 2005: 1 Craig goes even further in his argument and has stated: Craig; 2005: 1 It has to be agreed that the political reality is a", "label": 0 }, { "main_document": "representation of members. However, the spread of organizing unionism is thin in the UK. The resources unions devote to organizing are low and attempts to build an 'organizing culture' are restricted to small or medium-sized unions (Heery et al., 2000c). This is due to the various internal and external constraints faced in organizing unionism.
Existing members may resist organizing as they lose out from the re-distribution of union resources essential to accommodate new members. The number of trained organizers in the UK is small, and those that exist resist organizing as they view it as an attempt to intensify their workload and as they are mainly skilled in servicing existing membership. Organizers face difficulties in integrating younger and female representatives. Unions are often unable to gain access to unorganized sites, and employer hostility is reflected not through open, aggressive anti-union tactics but through pre-emptive, sophisticated HRM mechanisms. The existence of inter-union rivalry allows employers to strategically heighten competition and tensions among unions, which re-emphasizes the need for an employer-focused strategy. (Heery, Kelly and Waddington, 2003, p.84) The future health of trade unionism rests on securing two goals, namely, ( (Blyton and Turnbull, 2004, p.137) Thus trade unions have to design an agenda that will appeal both to the interests of the employees, and thus boost membership, and to those of the employers, to gain recognition as bargaining agents. As far as recruiting new members is concerned, whether trade unions will be able to establish a foothold in private sector service firms and non-union greenfield sites through the strategies of organizing and partnership employed by them remains doubtful due to the various constraints on the use of these strategies. The capacity of unions to develop 'organizational flexibility' (Taylor, 2001, p.12) to attract the new category of non-standard and contingent workers still remains unproven. The continuing rise of never-membership across all segments of the workforce renders it difficult for unions to know where to focus their new recruitment and organizing strategies.
(Bryson and Gomez, 2005) Employer recognition of trade unions as bargaining agents has been extremely limited, especially due to the exclusion of small private sector firms from the ERA 1999. Presently only an estimated one third of all Britain's employees have their pay and conditions determined by collective bargaining, which contrasts with 70% in 1989 (Taylor, 2001, p.9). However, even in workplaces where unions have managed to retain an institutional presence, the Workplace IRS reveals that it amounts to a 'hollow shell' marked by limited representation, bargaining coverage and influence over management policies (Heery, Kelly and Waddington, 1) The legislation enacted by the Labour government has also failed to bring about any improvement in the declining union membership. Although unionism still remains pre-dominant in the public sector, the future of trade unions depends on securing a foothold in the private services where most of the jobs are now being created. Thus trade unionism in Britain still faces many insurmountable challenges, which make a significant revival in collective representation back to the levels of the 1970s, when an estimated 58% of the workforce were union members, most improbable. As Metcalf", "label": 0 }, { "main_document": "highways, electricity. The legal system must have fair property rights and protect owners from thieves. The tax system must be well-managed. There must be open opportunities for new investors to start a business. The geographical location of countries still causes problems for poor countries trying to develop. Jeffrey Sachs of Columbia University argued that technologies developed in the temperate zones may not be applicable to tropical areas as a result of differences in weather and soil structure. Human capital investment in tropical areas is still not enough. This fact pulls down demand for high-tech products.
Alwyn Young carefully studied the growth of the Asian Tigers and concluded that all four countries (Hong Kong, South Korea, Singapore and Taiwan) have remarkably high growth, which is mostly explained by increased inputs, not by higher productivity. Since in the 1960s these countries were quite poor, the labour force was extremely cheap. The number of workers dramatically increased due to women's participation. Large amounts of money were spent on improving the college and university system, and thus increasing productivity. The education system was reformed at all levels, ensuring that all children attended elementary education and compulsory high school education. Domestic consumption was discouraged through government policies such as high tariffs. A high degree of economic freedom, political stability, clear property rights, encouragement of exports and a high savings rate ensured their rapid transformation from poor countries to highly-industrialized rich nations in thirty years. Economist Joseph Stiglitz commented that in these countries 'the Government created an environment in which markets could thrive'. Because of the focus on export-driven growth, the Asian Tigers experienced currency devaluation. These economies focus exclusively on export demand and put high tariffs on imports, which heavily affects the economic health of their export nations. In addition, these nations have met difficulties after losing their initial competitive advantage: cheap labour. Many economists argue that nowadays India and China, with their fast-growing economies, are following in the steps of the Tigers. Since gaining independence in 1971, health and education levels in Bangladesh have improved remarkably, and poverty has been declining. Yet it remains one of the poorest countries in the world. The income level is very low.
Based on the Barro and Sala-i-Martin report, for the period from 1960 to 1985, investment in Bangladesh averaged 4.6 percent of GDP, compared with Japan's 36.6 percent and the USA's 24 percent. The effect of both low saving and high population growth is as theory would predict. Hostile climates for foreign investment, low investment in human capital, unsustainable public sector spending and weak governance are all causes of its extreme situation right now. Since the intervention of non-governmental organisations, the improvements have been highly significant. Reducing population growth and attaining gender parity in school enrolment rates are notable achievements of recent years. In the past decade, infant mortality has been reduced by half. Adult literacy rates have increased on average by seven per cent. On the other hand, inefficient state-owned enterprises, in particular those providing utilities and infrastructure, have resulted in significant losses by not meeting national demand. Poor governance and pervasive institutional weakness", "label": 0 }, { "main_document": "When looking at the mentioned travelling salesperson's problem, we see there is obviously an enormous number of possibilities for the journey. As we do not discriminate on the starting point and know that they must always end at their start position (make a circuit), we can figure out the number of possible permutations of the journey. Using general logic we can see that there must be n! (n*(n-1)*(n-2)*...*(2)*(1)) permutations available. This is because if you have n cities you have n choices for the first city (and subsequent end city), then (n-1) choices for the second city and so on until you have only 1 choice for the second-to-last city (as we already know the last city must be the same as the first). Using this as a guide there must be 120 (5!) possible journeys available to the TSP.
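The counting argument above can be checked directly by brute force in a short sketch (the essay's own city list is in its appendix; "CityA" and "CityB" below are placeholder names, not taken from the essay):

```python
# Sketch: count the ordered journeys through n cities by enumerating them.
# Three city names appear in the essay; the last two are placeholders.
from itertools import permutations
from math import factorial

cities = ["London", "Cambridge", "Stansted", "CityA", "CityB"]

routes = list(permutations(cities))   # every ordering of the 5 cities
assert len(routes) == factorial(5)    # n! = 120, matching the essay's count
print(len(routes))                    # prints 120
```

Each tuple returned by `permutations` is one ordered journey; closing the circuit back to the start city adds no further choice, which is why the count is exactly n!.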
This is obviously too many to do simply by pen and paper, and it would probably be a waste of time actually working them all out by hand, as we could probably see the most efficient way of traversing the circuit by drawing a rough diagram (see appendix 1). By simply using this diagram and finding the shortest path from one node to another, or perhaps a slightly longer path which includes visiting another node on the way (e.g. it is shorter to go straight from London to Cambridge than via Stansted, but going via Stansted means we can then continue straight on from Cambridge), I have found what I believe to be the shortest path, without the lengthy work of finding the length of every possible circuit, to be: This circuit works out to be 237 miles. We obviously cannot guarantee that this is definitely the shortest path, as I have not worked out every possible circuit; however, within the limits I have, this would be my most accurate idea of what the shortest circuit could be. When we talk about an NP problem where n=30, the number of permutations would be 30! = 2.65*10^32. To verify this very large number I found out how many processes would happen every year (at one process per nanosecond), by doing the following: first finding out how many nanoseconds there are in a year. We know there are 3600 seconds in an hour and 24 hours in a day, therefore there are 8.64*10^4 seconds in a day. Using 365 days a year, there are 3.15*10^7 seconds, i.e. 3.15*10^16 nanoseconds, in a year. Dividing 30! (the number of processes needed) by 3.15*10^16 gives an amazing 8.41*10^15 years. This is obviously at present an unreachable goal. Something that seems so simple increases so rapidly as n is increased. From my understanding, an NP-complete problem is, loosely speaking, a class of problems that are believed unsolvable within a reasonable amount of time in the worst case. Thus, approximation algorithms are very important for solving real-world problems such as the payphone coin collector problem.
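The same back-of-envelope calculation can be reproduced in a few lines (assuming, as the essay does, one circuit checked per nanosecond):

```python
# Sketch: how long would checking all 30! circuits take at one per nanosecond?
from math import factorial

ns_per_day = 3600 * 24 * 10**9        # 8.64e4 seconds/day, in nanoseconds
ns_per_year = ns_per_day * 365        # ~3.15e16 nanoseconds in a year

routes = factorial(30)                # ~2.65e32 possible circuits
years = routes / ns_per_year          # ~8.41e15 years of checking
print(f"{routes:.2e} routes, {years:.2e} years")
# prints: 2.65e+32 routes, 8.41e+15 years
```

This confirms the figures above: even at a billion checks per second, exhaustive enumeration for n=30 takes on the order of 10^15 years, which is why approximation heuristics matter.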
There are other methods of finding the shortest possible circuits that would cut the amount of time, such as algorithms which rearrange the nodes and distances to find a path which", "label": 1 }, { "main_document": "shock' on the Salon public in its rupture from the rococo style popular under Louis XV. 2), demonstrate a severity and firmness that 'violated the Boime goes further in calling the Kennedy, Boime, The 3) both appear to be indicative of the Revolutionary ideal of raising the whole above the individual. This principle grew out of Rousseau's 'enlightened' writings on the sacrifice of man's natural freedom to a negotiated state beneficial to all, a concept known as the general will. This is evident in the way in which the That the statue of Roma both screens out the bodies and is Brutus' source of comfort (fig. 4), adds to the sense of subordination of the individual to the state. Ibid., p.393 Kennedy, Structure reflects theme in a similar way in the The oath uniting the brothers in the interest of the state 'is also the controlling pictorial motif subordinating the individual figures to the compositional whole." That no single figure or group dominates the painting seems to indicate that this is a visual representation of the idea of subverting the individual to the general will. This might be interpreted as a call by David for a new form of sovereignty, one no longer vested in a single person, that is, the king. It is not surprising that the Boime, Roberts, Crow, Thomas E., Emphasising this idea of commitment to the protection of the nation is the way in which 'David's art put the Republic above personal sentiment and family attachment." Both In the story of Brutus, for example, it is his commitment to the nation that impels the sacrifice of his two sons. The central character is shadowed in darkness, perhaps emphasising the way in which his personal sacrifice is of less significance than his defence of the nation.
In choosing to depict the 'moment of exultation and self-sacrifice', then, David seems to anticipate the commitment to general welfare made by those taking the famous Tennis Court Oath of 1789. Kennedy, 1989, p.88 Boime, Levey, Michael, In addition, the way in which women are depicted in the Their languid, flowing forms create a sharp divide with the almost unnatural rigidity of the Horatii (fig. 5), creating some form of continuity between them and the rococo style. Roberts, Boime, This, in turn, has been linked to the Revolution via the idea that the values represented in the feminine had to be overcome to establish a new order. In the symbolic links between the women and the The Revolution claimed that the focus upon personal interest displayed by the nobility, and the feebleness of the king in failing to punish this, was to blame for the economic crisis the nation faced by 1789. Dowd, Quote taken from argument against middle-class privilege put forward by Sieyes in pamphlet, cited in Boime, The principles that David seems to advocate have been seen not merely as Revolutionary, but republican, that is, anticipating the anti-monarchical turn of events later in the Revolution. This presumed relationship derives from both his treatment of the concept of general", "label": 1 }, { "main_document": "means work? / Efficiency - are minimal resources used? / Effectiveness - does it contribute to long-term aims?). Environmental constraints are included as well as a monitoring system. Note that the stages are not supposed to be iterated through in order. Overlapping, backtracking and repeated iteration are accepted as part of the model ([4], p. 471). SSM originated as a response to failed attempts to apply the more scientific systems engineering to situations susceptible to "the unpredictable nature of human activity systems" ([4], p. 470).
It borrows methods from the social sciences rather than the natural sciences or engineering, where it is easier to formulate a problem (even though it may be just as hard to solve!). As we will see, its "softness" is its major strength and weakness at the same time, making it inappropriate for technical analysis. Even though the above instructions seem precise, I am sure that when applying them to a problem, questions would arise as to how to flesh them out. It is necessary to go through the process several times in order to grasp it. Not having done that, the concepts seem very vague. The "recipe" approach is misleading at best and is likely to be unsuccessful. A "feel" for the situation at hand is required, which can only be gained by experience. Checkland himself diverges from the original model when he puts SSM into action ([1]). Another point is that the method is not formal enough to be able to measure its contribution to the success of a project. It is hard to tell whether a good engineer or manager would not have done equally well without having studied SSM. Romm picks up this idea in [6] (p. 4). This leads to the next point, that it is all just common sense. Every intelligent person would tackle a difficult problem in that way, i.e. identify the problem, express it in an appropriate way (whether textual, graphic or in some other suitable notation), formulate hypotheses (here called root definitions), build models, do a reality check and finally take action. The problems organizations have to deal with are naturally "messy" or "wicked". But someone lacking the talent would not be able to solve them even when he is given the tool that SSM is, and someone suitable for the job would choose a systematic approach anyway. In the ideal world of SSM, everyone would participate in the enquiry process, would cooperate with the external consultants and would give accurate and unbiased accounts.
In reality, people have their own agendas and will be suspicious and reluctant to change. Romm writes: For example, it is unlikely for an employee to openly admit a conflict or a communication problem with their superior, especially if they know it will be made public in the form of a rich picture or the like. So, instead of revealing problems, I believe the rich picture will cover them up. SSM thus maintains the status quo ([3], p. 96). Consequently, a modern view of the organization is assumed by SSM, putting collective action and communication above", "label": 0 }, { "main_document": "CFS faces an increasingly competitive and sophisticated marketplace. Pressure has been growing to reduce prices in order to maintain market share, but this has affected profit margins. To overcome these pressures and to remain competitive, the Operations Director has proposed further restructuring of the company. This report has been instigated by the Board of Directors to investigate the proposals outlined by the Operations Director. The report will provide a thorough analysis of these proposals and outline how the project will be implemented, controlled and reviewed. Recommendations will be suggested at the end with more detailed justification provided in the main body of text. Stakeholders are those who have an interest in a project. They may be few (e.g. a University assignment) or numerous (e.g. building a new hospital). Good management of project stakeholders is important because "if stakeholder management is undertaken successfully, the project will run more smoothly" The expectations of different stakeholders may vary but will also change with time. There are many stakeholders in this project: local community, MPs, society, hospital employees, building contractors to name just a few Orr, Alan. D, (2004), The stakeholder groups in this project are as follows. High Street Agents who would welcome a "much closer working relationship" and "more effective service".
Trade Unions, who will want to protect the interests of the workers, e.g. pay v workload. Thirdly, external customers who would like a "more personal and effective service". IT Employees - Responsible for implementing new IT services. CFS Insurance Shareholders who will want to see a positive return on the project. Operations Department Employees - directly affected by the reorganisation. The Operations Director - Has overall responsibility for the department. HR Department Employees - Responsible for training of employees. Wider society should be considered as they will benefit from increased taxes on potentially greater profits made by CFS, should the project be a success. Although shareholders have no direct stake in the project, they have a stake in the company and so would wish for it to succeed. For effective management of each of the stakeholder groups, their interest / stake and influence on the project must be ascertained. This will enable the suitable method of communication with that stakeholder group to be used. Figure 1 illustrates the grouping of the stakeholders by their interest and influence. From this it can be determined how best to communicate with them. High street agents, trade unions, shareholders and IT employees all have low stakes in the project and have little influence. To best manage this group, informing is required. External customers have high influence (they can shop elsewhere for insurance) as do HR as they will perform the training. Fully informing is appropriate here. The operations department employees have a high stake in the project and, because their power is low, it is best to consult with them. Finally, the operations director has both high interest and influence in the project so should be invited to participate. There is no need to communicate explicitly with society on this project. Wheeler, Richard. J, (2005),", "label": 1 }, { "main_document": "immoral which go against public policy.
However, Workwell Ltd may decide to argue that they only submitted a 'quote' which does not constitute an offer. But, in Subsequently, before dealing with the second case involving Workwell Ltd and Drainklear, we must again analyse the formalities of a contract. In this case, when Workwell Ltd. invited tenders, Drainklear made an offer by submitting a price of Clearly, the offer was made in writing, which also mentioned that the 'tender is open for acceptance within 3 months'. This offer which is open for acceptance within 3 months is not legally binding as it is gratuitous. Workwell Ltd. has provided no consideration in return. In Workwell Ltd. has promised nothing in return and Drainklear, the offeror, is therefore free to withdraw or revoke the promise at any time before the offer is accepted; in this case, Drainklear revoked the promise by quoting a new price of Certainly, Workwell Ltd. may claim that the letter of acceptance has been sent well before the 3-month deadline. Since communications between both parties are made by post, the postal rules apply. The postal rules state that postal acceptance produces an instantaneous legal effect, while a postal offer or revocation is effective only on receipt. Therefore, it is arguable by Workwell Ltd. that according to the postal rules established in the case of Court held: Once a letter of acceptance is posted, a contract comes into existence immediately. However, the court held in Through these decisions, the letter of acceptance by Workwell Ltd. proves to be invalid on the day it was posted as it was wrongly addressed and only reached Drainklear 2 days after Workwell Ltd. received the revocation letter The letter which Drainklear wrote to Workwell Ltd.
explaining that they could not undertake the work for less than In But in this situation, since the letter of acceptance is invalid on the day it was posted, it is considered valid only on the day of receipt, and by that time revocation had taken place (the revocation letter arrived at Workwell Ltd. 2 days before the acceptance date). Hence, there is no legally binding contract, which undoubtedly gives no allowance for Workwell Ltd. to hold Drainklear to its original price. It was held that a contract was formed on 11th October when the claimant mailed his telegram of acceptance. The revocation was not communicated to the claimant until 20th October and was, therefore, too late to be effective. In conclusion, it is observed that Workwell Ltd. is bound by their bid for the Highroad contract, and if Workwell Ltd. decides not to continue with the project for reasons like high cost, for instance, Workwell Ltd. is said to be in breach of contract. Workwell Ltd. also may not hold Drainklear to their original price because there was no legally binding contract and revocation is allowed before acceptance of a contract. Both these situations are likely to cause complications for Workwell Ltd. It is suggested that Workwell Ltd. should try and renegotiate the new", "label": 0 }, { "main_document": "was colonised, agricultural practices on the mainland would have been far more established than during the 9/10 The evidence that I have looked at regarding both Mallorca and Cyprus indicates that humans played an important role in the extinction of island faunas through a variety of different means, including direct hunting and more indirect habitat destruction. The idea of climate being an important factor is largely played down as the patterns of extinction are too inconsistent to support this; they occur throughout the Pleistocene and early Holocene at different times and different places regardless.
However, it may have weakened the species, making the human impact all the more severe. Furthermore, the evidence for extinctions as a whole seems to follow a pattern of human colonisation globally as well as in the Mediterranean. I would suggest that the different causes of the human-induced extinction vary between the two islands largely due to the varying strength of subsistence patterns occurring on the mainland at the two different times, and the degree of cultural isolation of the islands.", "label": 1 }, { "main_document": "facility for individuals and organizations to communicate directly with one another regardless of where they are or when they wish to communicate' (Blattberg & Deighton, 1991). The World Wide Web, also known as 'WWW' or 'the Web', is a hypermedia system based on the Internet that links up information and resources from all over the world. The Web 'started to be used for marketing in 1995 as corporations like Kraft Foods and Proctor & Gamble turned to the Internet in an effort to have their products available to millions of potential customers' (Sorry, 1995; in Han & Mills, 2006). Browser software, such as Internet Explorer, allows the Web to represent a multitude of media - text, video, graphics, pictures, and sound - on a local computer screen, regardless of where the source of the information is physically located (Berthon et al, 1996). More to the point, the Web is doubtless a 'many-to-many' mediated communications model, different from the traditional marketing communication models, and it makes direct interaction between marketers, consumers and the medium possible (Hoffman & Novak, 1996). As a result, the Web, due to the features above, enables marketers and advertisers to demonstrate full-colour content of product information and business announcements, verbally and pictorially, provide online order procedures and draw out online customer feedback.
The Internet and the Web weigh no less in tourism marketing than in other commercial environments. Given the nature of the tourism product stated at the beginning of this section, the Internet is worth being regarded as the most suitable medium for the industry to promote and deliver tourism products and services, since its provision is relatively high in information content, anchored in Zinkhan's opinion (2002). As well, the interactivity of the Internet encourages the building and management of relations between tourism service providers and consumers, for the initial reason that relations are particularly substantial in the hospitality and tourism business. In return, online surfers benefit from the interactivity and easy access of the Internet, which has changed people's information-search behaviour. As a matter of fact, by 2001 more than 70% of travellers had used the Internet to obtain travel information, as revealed by an earlier report from the World Tourism Organisation Business Council. This thinking is also reflected in the phenomenon of recent years whereby many countries have recognized the potential of the Internet and World Wide Web (WWW), and have developed an Internet presence to promote their tourism offerings and increase their share of the competitive global tourism market (Larson, 2004). For instance, on the homepage of its website, Eurostar daily updates its countdown to the new launch at the UK end, estimated for 14 November 2007, switching from the currently used Waterloo International Station to St. Pancras International Station, with the intention of informing and promoting the new features of Eurostar and giving online surfers/potential customers the anticipation that a 'more appealing and impressive travel experience provided by Eurostar is coming up!'
In brief, in the tourism industry, the Internet facilitates rapid information delivery and", "label": 0 }, { "main_document": "Organisational culture influences individual and group behaviour and it can be managed to facilitate success within groups and organisations. There has been academic debate over whether a highly integrated, \"strong\" organisational culture is a key factor in success. It can be argued that stimulating individuals within the organisation to high performance while contributing to their well-being is feasible for any organisation. Thus, finding a suitable organisational culture is important in human resources management (HRM) considerations. However, the precise nature of an appropriate organisational culture is disputable; for example, the official descriptions of companies do not reflect how life as part of the organisation is perceived by individuals and groups. In the light of cases in organisational culture it is possible to derive support for both sides of the academic debate. Nevertheless, the extent of integration of the ideal organisational culture seems to be limited, even though there is a clear case for ensuring the element of control in a successful organisational culture. In order to understand the scope of \"strong\" organisational culture, the concept of organisational culture has to be clarified. Schein defined organisational culture in his work: Schein; 1992: 19 Schein's definition is broad and it can be rather easily observed that the definition allows a wide inclusion of factors into the concept of organisational culture. However, \"strong\" organisational cultures have distinguishing characteristics. Firstly and most importantly, the vision of the organisational culture is strict in \"strong\" cultures, as Rosenfeld and Wilson define it: Rosenfeld et al; 1999: 271 It must be noticed here that Wilson is a critic of the so-called \"excellence school\" that is in favour of \"strong\"-culture theory.
But it is clear that the definitive characteristic of \"strong\" culture is the drive for integration, and that it discourages anything that is in contradiction with its perceived essence. Secondly, in \"strong\" cultures there is a certain level of control to discourage deviant behaviour. Methods used to discourage particular types of behaviour are often sanctions and failures to reward. The integrity of \"strong\" cultures must be enforced; it is arguably not self-sustaining, as sub-cultures will eventually develop and deviate from the original culture. Porter et al argued that there is an observable connection in numerous cases between a \"strong\" organisational culture and financial success and increased competitiveness. Porter et al based their arguments on the findings of Sorensen, who found empirical evidence of the existence of a relationship between \"strong\" cultures and financial success. Porter et al; 2003: 34 Sorensen; 2002: 71 On the contrary, Wilson argued already in 1992 that a similar conclusion can be reached, but that the underlying theoretical issues are supported by questionable empirical evidence; his argumentation was a reply to such findings by Kotter et al. Wilson; 1992: 73 I) Numerous good companies fail to sustain success II) There are alternative ways to corporate success, such as monopoly III) Most studies have poor sampling and represent only the view of top managers IV) All industries do not provide such empirical evidence Adapted from Wilson 1992 There are clearly cases that favour his view about the limitations of empirical", "label": 0 }, { "main_document": "Effectiveness is usually measured by the ability to produce a desired or intended result. It is therefore necessary to understand and identify the intentions and aims of the UN. However, this task is perhaps impossible. Firstly, because the aims of the UN as set out in its charter can be and have been interpreted in different ways. Secondly, because the role of the UN changes.
As Paul Taylor argues: \"the work of these institutions and their role in international society have altered since the late 1980s\" This essay will therefore, identify a number of events or issues which have drawn criticism towards the UN, and will attempt to analyse whether these prove ineffectiveness and if so, why? The following aspects of the UN will be analysed: Taylor, P (2001) \"The United Nations and international order\", I will argue that the very fact that its effectiveness is so difficult to gauge indicates inherent problems of ineffectiveness and that the cause of this debate can be seen to stem from a conflict of fundamental theories which the UN is torn between. When different states and people expect different things from an institution, its effectiveness must surely be limited. Whilst this essay will concentrate on the ideological differences which can be said to lead to ineffectiveness within the UN, the practical difficulties which it faces must be highlighted: the United Nations \"...is charged with responsibilities for virtually every facet of the human and planetary condition. To do all this it is provided with less funds per year than Western children spend at Christmas, and fewer staff than the civil service of a medium-size European city.\" These kinds of difficulties clearly pose enormous problems to the UN. However, I would argue that more fundamentally problematic is the conflict of purpose and expectation which the UN struggles to fulfil. The following two quotations from the UN Charter demonstrate how the role of the UN is often interpreted differently: Urquhart, B and Childers, E \"Towards a more effective United Nations\", Clearly a charter will always be open to interpretation. However, it could be argued that there is a more significant conflict which runs through the UN and its members. These two statements could be said to have their origins in two different theories or ideologies of world politics. 
The first originates in a liberal-socialistic view of the world, where the aims, among others, are for people to unite across (or in spite of) state borders, to defend human rights and to promote justice. The latter is more representative of a realist view of the world based on state sovereignty (as applied during and immediately after the Second World War, when the UN was founded) and the pursuit of power by all states. These underlying theories are evident in many debates which the UN faces and, arguably, in the following examples of supposed failure of effectiveness. Ransom, D Upside Down: The United Nations at 60, Brown, C (2001) While this inherent problem may produce ineffectiveness, for example in the use of abstention over Israeli occupation of the West Bank, there", "label": 1 }, { "main_document": "Windeyer J. at 393 2 A.C. 207, Lord Goff at 262-3 1 W.L.R. 68, Steyn L.J. at 77 Clearly, the evolution of the doctrine of privity is intricately bound up with the doctrine of consideration, certain aspects of it in particular. Viscount Haldane L.C.'s assertion, in at 853 A.C. 70, Lord Wright at 79 The view that the doctrine of privity is part of the doctrine that 'consideration must move from the promisee', and that they both perform the same function, is shared by Salmond, J. and Williams, J., The Law of Contracts, 1945, p.100 \"Return to Dunlop v Selfridge?\" (1960) 23 M.L.R. 373 at 382-384 A Casebook on Contract, 7th ed., 1982, p219 (but cf. p.
224) 11th ed., 1986, pp 74-75 However, a number of scholars do not agree that privity is just a part of consideration, because both the requirement of consideration and the requirement of privity must be satisfied in order for the plaintiff to enforce a contract; in this sense, it is possible to regard privity as merely a procedural rule, which comes into play only Greig suggests that \"In many circumstances, there is no practical difference in result between applying the doctrine of privity or demanding that consideration move from the promisee\", Greig, J.L.R., op. cit., p. 989 However, the two doctrines are more easily distinguishable in cases where a contract is made with joint promisees, for example, where X promises to pay a In the event that Y carries out the repairs, but X refuses to pay Z, then, if privity is no more than an aspect of the doctrine of consideration, Z would be unable to enforce the promise, for although Z is a party to the contract, he has provided no consideration. If however, the two doctrines are distinct, Z would not be \"debarred\" by the doctrine of privity. In a sense, privity and consideration would constitute two separate hurdles for Z to surmount and not one. This concept is reiterated in Though the \"High Court was divided upon the construction of the agreement, four of the judges were of the opinion that if, on its true interpretation, the wife was a party to the agreement, she was entitled to receive the royalties payable after her husband's death even though she personally had given no consideration for the company's promise.\" This example affirms that though the two doctrines very closely converge and seen to be similar in certain situations, they can be separated, and McKendrick, op. cit., p.139 Greig, op. cit., p. 989 Leong, op. cit., p. 
87 This distinction is apprehended by the Law Commission in its Consultation Paper No 121, This disrelation reinforces the argument that a reform of the third party rule does paras 2.5-2.10 p. 22 The Law Commission also considers the somewhat ambiguous maxim, \"consideration must move from the promisee\", and remarks that it may be interpreted in two ways; firstly, \"it can be taken to mean that to be binding a promise must be supported by consideration\", i.e., gratuitous promises cannot be enforced; it is essentially in this", "label": 1 }, { "main_document": "longer considered a reality (Gittins, 1993). However as well as personal factors attracting women to motherhood, economic and social factors may also be influential. Lancaster has argued that children do provide some economic profit, as they can be a way of claiming higher benefits and gaining access to housing, (1965, cited in Andorka, 1978, p364). However it is now felt that economic factors do not influence the decision as much as in previous times, as women are no longer economically dependant upon men, and children are not needed to provide a source of income. Instead social factors have more of an input. Coleman claims that children are a form of 'social capital' (1988, cited in Schoen et al, 1997, p337), as they provide women with social benefits that they might not otherwise have, and this gain can influence the decision to have children. Having children can mean the development of social relationships that can be beneficial to women, and ensure higher standings within a social network (Schoen et al, 1997). Becoming a parent signals adult status has been reached, and this can result in acceptance into certain prestigious social groups, or the chance for social mobility (Hoffman and Hoffman, 1973, cited in Andorka, 1978, p338), for example some workplaces reward workers who have children as they are seen as mature and responsible. 
Although this is more applicable to male workers, it can mean that if a man is given promotion the whole family will be seen differently by society. It is also suggested that having children can produce better social relationships with kin networks, meaning more physical, emotional and financial support that may not otherwise have been achieved (Schoen et al, 1997). Having children can also provide women specifically with a higher status in society, as mothers are glorified (McDaniel, 1996), and many women may wish to be regarded highly by society. It is also argued that children can give women more power in the domestic sphere, as nurturing is seen as naturally linked to women, so they are given control over this area (Gittins, 1993), and therefore see it as a way of gaining more power within domestic relationships. Security is another form of social capital that can be created by having children. As well as being a source of security in the form of constant love, they can also provide social and financial security, especially in old age. Morrell found that many women feared old age, as women are now living longer than men (1994) and this fear can mean that women choose to have children to ensure their security, therefore making them an important social resource. These more individual and personal reasons that influence the choice of women to have children often combine in a complex way, however they are not the only factors that affect the decision. National discourses on motherhood and social policy have subtle influences that provide a background on which the more explicit influences of personal fulfilment and economic and social benefits are established. Patriarchal ideology prescribes particular behaviour for marriage and reproduction; the idea", "label": 1 }, { "main_document": "FGM or awareness campaigns. See Togo, CCPR, A/58/40 vol. I (2002) 36, paragraph 78(5). 
Statements of concern that despite the existence of legislation banning FGM, its practice is still widespread and therefore requires more effective implementation of education campaigns. See, Egypt, CRC, CRC/C/103 (2001) 36, paragraphs 240 and 241. It is clear that legislation and education/awareness campaigns are the two main prongs that UN committees advocate in eradicating FGM, and indeed, the former cannot work on its own. The main criticism however is that recommendations tend to stop at legislation and education. There are occasionally more detailed suggestions on means to eradicate the practice such as enlisting the support of religious and community leaders As can be seen in the list of issues given by the CESCR to Egypt and Sudan, states are asked to explain what measures have been taken, and an evaluation of the success or failure of such measures, but they are not asked about broader health policies or the right of access to a minimum level of health care. This isolates FGM \"from the gamut of auxiliary conditions that anchor and motivate it\" and implies a top-down approach that is unlikely to succeed. See, Sudan, CRC, CRC/C/20 (1993) 22, paragraph 116. Kenya, CEDAW, A/58/38 part I (2003) 35, paragraph 214. Obiora (1996-1997), 364. The above debates illustrate the immense difficulties in bringing about change in people's actions and behaviours through human rights instruments and bodies. Following on from the critique that the wider context of women's lives is almost absent from FGM debates by UN treaty bodies is the notion that the choices women make in their daily lives are often being 'traded' in for others. This means that while women seize upon one right, they are often compelled to give up something for that right in a trade-off. Seif El Dawla Seif El Dawlaal. (1998), 87. 
The different voices that Seif El Dawla The advantages of using these strategic postures as an analytical tool is that rather than presenting a static picture of unchanging gender relations and hierarchies, it highlights how relations and hierarchies change, and how constraints both emerge and dissolve rather than remaining constant. This is particularly important when looking at FGM that \"has a social function that many young girls and women find compelling - beyond choice.\" Ibid, 101. This provides a much more nuanced, complex and complete picture of the decisions women make when performing such practices as FGM. UN treaty bodies consistently note their concern about traditional harmful practices and the need to combat them through education, however, women are not necessarily ignorant of the harm but Petchesky (1998), 19. Legislation, education, the support of civil society, and religious and community leaders are important 'multidisciplinary' strategies, but at the same time they are not always in harmony which has led Egyptian women for example, \"to negotiate the conflicting pressures of religious authorities, medical authorities, class divisions, and above all traditional gender norms when it comes to FGM.\" Ultimately women continuously adopt strategies that sometimes accommodate and at other times resist. At the", "label": 0 }, { "main_document": "killing four men, \" Law Com No 237 (1996), para 8.21 at pg 106. 5 April 2005. However, one might question the fact that the new proposals might be too harsh on companies. They do not guarantee that companies will escape liability even if all the necessary precautions and measures have been taken to ensure safety in the workplace because nothing is being said about what the appropriate standard expected of the company should consist of. This is a jury question and all will depend on how they interpret the words. 
Moreover, corporations might be forced to devote huge amounts of financial resources, beyond their capacity, to cater for every possible eventuality of danger in the workplace, and this is not realistic. The harshness of the new law might discourage the formation of new companies or even lead to the closure of existing ones. Consequently, in the long run this may have a huge impact on the economy. Therefore, even if the Government is attempting to rectify the loopholes of the present law, it can be seen that the new proposals also engender new problems. Hence, at times it is necessary to ask ourselves whether corporations should, in the first place, be capable of being liable for manslaughter. Besides, if companies are found guilty, they will only be required to pay large fines to the victims' families. Perhaps it is better to have corporations liable only for a breach of the duty of care towards their employees to ensure their safety and health in the workplace, and consequently pay damages to those affected by the breach. The result will after all be the same. In this respect, the law of tort might be more appropriate to deal with corporate harm-doing than the criminal law.", "label": 0 }, { "main_document": "Yefang, who in turn had been inspired by the Soviet Yevesi Liberman, who advocated local autonomy of enterprise while maintaining obligations to the state, ideas that Mao rejected as the \"capitalist road\". Partly as a result of the differing levels of development in the two countries, the Russians focused their economic reform on the heavy industry sector, an area with more prestige attached in their competition with the United States, while China began the process in agriculture, with Deng believing this would be the most effective method to stimulate the growth of \"productive forces\".
The first reform was the introduction of the Contract Responsibility System, which allowed farmers to receive incentive payments for carrying out production contracts, usually given to family groups, first on a yearly and then a longer-term basis, though the land remained publicly owned. Small-scale family farming, rather than the huge collectives introduced under Mao, meant there was a return to good land management using specific local knowledge. Though grain remained under central control, which enabled the state provision of food known as the Iron Rice Bowl, farmers were gradually allowed to sell other goods outside of their production quotas on the free market. When this proved to be successful it was introduced across the country, a move which quickly boosted the income of rural areas. While farming moved more towards family management through the contract system, the town and village enterprises maintained the essence of the commune system, despite their nominal abolition in 1982. These small rural enterprises, along with the rapidly growing farming industry, acted as an alternative sector to support the State industries through their transitional phase, something that Russia lacked. Deng Xiaoping, \"As to what kind of relations of production is the best mode, I'm afraid we shall have to leave the matter to the discretion of local authorities, allowing them to adopt whatever mode of production that can facilitate quickest recovery and growth of agricultural production\" Deng Xiaoping, Deng was denounced as the \"No.2 Capitalist Roader in China\" in 1966, leading to attacks on his family.
This process was first enacted in the Anhui province, which had been especially hard hit during the Great Leap Forward and then later in Sichuan, along with the establishment of township and village enterprises that allowed the decentralisation of management Gray, Jack, Here we can see the introduction of both decentralised management, which allows for a better use of information, and a better system of incentives, reducing the \"tragedy of the commons\" However, in Russia, the focus on State enterprise and heavy industry did little for the majority of the people, leading to major demonstrations against the government in 1989 about the lack of certain consumer goods, specifically soap and cigarettes One of few significant reforms in agriculture was the abolishment of the policy of livestock slaughter to preserve grain, leading to the Soviet Union becoming a net importer of grain from the 1970's onwards, importing 42 million tons of grain annually during the Eleventh Five Year Plan that began in 1981 This was at", "label": 1 }, { "main_document": "of reforms, in 2002, the Brazilian government launched a new policy for Technical Assistance and Rural Extension (ATER), which aims to promote sustainable agriculture, and generate income and employment. The ATER policy has a more holistic approach, of equity and social inclusion, instead of diffusion of \"technological packages\" as adopted previously. After a series of internal reforms, CATI revised its philosophy and priorities in 1981 and adopted a more participatory and decentralized approach, deviating from the national approach (Pinto, 1998 cited by Lima, 2001). During a further reform in 1997, when several directors of regional and sub-regional offices met to define the new goals of CATI, a new mission was agreed upon, which was to promote sustainable rural development through participative actions involving communities, partners and all stakeholders. 
Rodrigues (1997:122) Translated from the original Table 1, presented above, is an example of the changes which occurred over several decades in Brazil. The function of extensionists shifted from inducing behaviour change to facilitating social processes. Later, new techniques of learning and teaching were introduced, with more focus on dialogue and solving problems. It is commonly understood that the success of the new extension service of CATI depends not only on designing a new policy and new approaches or improving the capabilities of its staff; it also depends on the capabilities of individual farmers, farmers' institutions and rural communities to absorb the new approach. Sustainability is directly related to the capacity of an individual or society to compete in a globalized market while being economically sustainable, preserving the social and environmental system, as Pinheiro According to Rivera (1990) the public sector extension Rivera wrote those observations in 1990 and 15 years later extension approaches are still under attack. To respond to the question above: yes, extension approaches will have to change over the next few years in order to fulfil the needs of their clients/beneficiaries, and also in order to be economically efficient and achieve their goals. The study concludes that the services provided by CATI are generally ineffective compared to other services provided by similar public organizations in Brazil. The same group, interviewed by email, pointed to a lack of political willingness to replace professionals not well suited to the job and the resistance of professionals to accepting the new holistic and participatory extension approach. In theory, the approaches designed by CATI are in line with the holistic idea of extension services and its intervention, demanded by the National policy and recommended by the literature.
But in reality, CATI does not apply a mechanism to evaluate its performance and cost-effectiveness in order to have a systematic way of improving its services. Over the years, extensionists from governmental organizations lost credibility among farmers and society itself. In order to rescue their credibility and dispel the stigma, agricultural professionals and their leaders will have to work on changing attitudes and behaviour towards the needs of farmers. Lockeretz & Anderson (1993), cited by Drost The future challenge for CATI will be adopting an intensive participatory approach, improving the quality of its", "label": 0 }, { "main_document": "I happened upon \"Better than Sex\" by chance, at a book sale at the Student Union. Beyond the cheekiness of the title, the cover image and subtitle immediately conveyed what the topic and tone would be, in what I thought to be a strikingly well chosen combination. \"Better than Sex\" was published and promoted by Random House Australia in 2004, but - surprisingly to me, at first - never sold Italian (or any other language) translation rights. Less than two years after a successful publication, because of its current affairs content, it is outdated and realistically impossible to sell: therefore \"Better than Sex\" makes for an interesting case study on why foreign rights are bought (or not bought), affording the perspective to analyse what happened in hindsight.
\"Better than Sex\" was published by Random House Australia in paperback, on June 1st 2004; the authors are two Australian journalists, who write about the work environment and office organisation for a Financial Times magazine called AFT Boss. One of them, Helen Trinca, was heavily involved in the publicity for the book, which included a well-balanced and well-paced succession of press releases, radio talks However, her popularity doesn't cross borders and couldn't be used to promote the book abroad. The book was fairly successful in its home market, with sales summing up to 1.827 as of April 2006 Product life-cycle for graph). \"Is Work Better Than Sex?\", June 2nd 2004, radio interview with presenter Richard Glover on 702 ABC Sydney, retrieved on April 29th 2006 from: \"Fair go or anything goes\", Sydney, July 13th 2005, conference on the industrial relations reform, transcript of Helen Trinca's speech retrieved on April 29th 2006 from: and \"Executive Women and Leadership Congress\", Sydney, November 30th to December 2nd 2005, seminar programme and registration form retrieved on April 29th 2006 from: \"Work and family life: what we've forgotten\", by Suzanne Franzway, April 4th 2005, review in Australian Review of Public Affairs, retrieved on May 1st 2006 from: data from the Nielsen BookScan database, provided by Richard Knight At the 2004 Frankfurt Book Fair, it was promoted in the Random House Australia Rights Catalogue Today, the book is still available through minor internet book retailers (Dymocks, Whitcoulls, Bookworld), especially remainders specialists, but not Amazon. Bookstores don't stock it and won't order it, though it can still be found by chance, as I did, at remainder sales. Although the main Italian e-retailers (Books Online, Internet Bookshop) offer a range of foreign books to gain competitive advantage over brick-and-mortar stores, they don't include \"Better than Sex\" in their lists. A Google search for pages in Italian, French or German
containing the title and the word \"Trinca\" leads to only one relevant result: the list of newly acquired books of the Mannheim University Library. Random House Australia Rights Catalogue for the Frankfurt Book Fair, 2004, retrieved on March 23rd 2006 from: \"Better than Sex\" is clearly a light-toned sociological study of the developments in the Australian work environment. Although the authors hint at the fact that the discussed trends are international, and typical of the multinational or global companies of our", "label": 0 }, { "main_document": "on such a big scale. Solomon, Jon, Now, let us move on to the analysis of our core question: can we regard Disney's To begin with, we must point out that not only does the In the rest of this essay, therefore, we shall focus on finding out what makes the In the first place, there are enough clues for us to believe that from the beginning Disney had no intention at all to make the The most obvious example which indicates that Disney was focusing on the film's popularity among the Western audience much more than on its status as a classical film might be Disney's choice of characters' names in this film, which we can regard almost as 'selfish'. For instance, Philoctetes, the satyr who trains Heracles and helps him to obtain enough strength to become a hero, is obviously given this name merely because it is easy for children to remember him as Phil, and because it can effectively enable this character to remain Greek-ish. Originally Philoctetes is the man who helped Hercules to die and inherited his bow and arrows, and killed Paris near the end of the Trojan War There is no account that says he was half man and half goat, although there is a record that Heracles was a disciple of the civilised and deep-thinking Centaur Chiron.
Moreover, Odysseus and Theseus, whom Philoctetes said were his disciples, actually came after Heracles, and Achilleus, who is also portrayed as having been trained by Phil, certainly came after Hercules and was a follower of Chiron, but he had nothing to do with Philoctetes at all. From this point of view, we can see that Disney picks up several elements from the Greek myth without paying much attention to historical accuracy and patches them up in order to reconstruct an ancient Greek world which reflects and is based on American culture. This account on Philoctetes is based on the article in We may, nevertheless, still be able to argue that what Disney did in this film, by ignoring the film's authenticity and mixing up characters' names, is not very far from what other classical filmmakers did in order to fit the ancient world to modern culture so that the audience would understand and enjoy the film better. However, we can possibly deprive Disney of all the excuses for not being true to Greek myth and putting priority on Disnificating the story by analysing the employment of Megara as the heroine of the film. Except that the name Megara can be abbreviated as Meg, which sounds quite American and thus can be considered an advantage at least for a Disney film, this choice can be seen as utterly inappropriate, especially if we take into consideration that the main storyline of this film is the process of Hercules becoming a true hero through realising what true love is. In the original Greek myth, she is certainly one of Heracles' wives; however, she is the one who was treated most cruelly", "label": 0 }, { "main_document": "cut off frequency of 100Hz, and a Gain of 1. In order for the devised system to be used practically, an understanding of the relationship between the inputs and outputs of the system must be achieved. The St.
Venant torsion expression for a round shaft gives the following: 4 Rearranging equation 4 in terms of the shear stress gives the following: 5 For a hollow circular section, the polar second moment of area (J) is expressed by: 6 Therefore, the maximum shear stress applied to the hollow shaft can be given by substituting equation 6 into equation 5, to give the following: 7 By substituting the dimensional characteristics of the shaft (see page 3) into equation 7, the shear stress can be expressed in terms of the torque alone. This expression is shown below: 8 Where The principal strain, By substituting equation 8 into this relationship, the principal strain can be expressed in terms of the torque, Poisson ratio and the Young's modulus of the shaft material. This expression is shown below: 9 Where C=1851177 In the above analysis, it is evident that the sensitivity of the torque cell is dependent upon the Poisson ratio, along with the Young's modulus of the material. In the design specification, the shaft used is described as steel, which offers the following properties: When any of the four strain gauges on the shaft is subjected to a force, the gauge itself undergoes an extension in length, and its cross section is reduced. The effect of this deformation acts to alter the gauge resistance. The sensitivity of the gauges is expressed as the percentage change in resistance which occurs under a given strain. This is expressed algebraically below: 10 The gauge factor and initial resistance can be found in Appendix A. By substituting these values into equation 10, along with the expression for strain given in equation 9, an overall expression for the relationship between torque and resistance change can be derived: From Appendix A, 11 Where C=1851177 Finally, an expression can be derived relating the bridge output voltage to the input torque.
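The derivation above runs torque → shear stress → principal strain → gauge resistance change → bridge output. As a minimal numerical sketch of that chain: the shaft dimensions, applied torque, gauge factor and Appendix A values below are illustrative assumptions (the excerpt does not reproduce them), so the computed figures indicate only orders of magnitude, not the report's actual constants.

```python
import math

# Sketch of the torque-cell signal chain described in the text.
# All numerical values are illustrative assumptions -- the report's actual
# shaft dimensions and Appendix A gauge data are not reproduced in this excerpt.
D = 0.020   # shaft outer diameter, m (assumed)
d = 0.016   # shaft inner diameter, m (assumed)
T = 10.0    # applied torque, N*m (assumed)
E = 200e9   # Young's modulus of steel, Pa
nu = 0.3    # Poisson ratio of steel
GF = 2.1    # strain-gauge factor (assumed)
Vs = 9.0    # bridge supply voltage, V (the 9 V battery mentioned in the text)

# Polar second moment of area of a hollow circular section (cf. eq. 6)
J = math.pi * (D**4 - d**4) / 32

# Maximum shear stress at the outer surface, tau = T*r/J (cf. eqs. 5 and 7)
tau = T * (D / 2) / J

# Principal strain at 45 degrees under pure torsion (cf. eq. 9)
eps = tau * (1 + nu) / E

# Fractional resistance change of one gauge (cf. eq. 10)
dR_over_R = GF * eps

# Output of a full Wheatstone bridge with four active gauges (cf. eqs. 12-14)
Vout = Vs * GF * eps

print(f"tau = {tau / 1e6:.2f} MPa")
print(f"principal strain = {eps:.3e}")
print(f"bridge output = {Vout * 1e3:.3f} mV")
```

With these assumed values the bridge output is on the order of a millivolt for a torque of around 10 N·m, which is consistent with the text's observation that the raw bridge signal is very small and needs amplification by a factor of roughly 1000 before further processing.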
From equation 2 (page 6), the bridge circuit is modelled by: From equation 11 (page 11), the change in gauge resistance can be substituted into this equation, along with the parameter R, to give an expression relating the bridge output to the input torque: (12) where C = 1851177. The design brief indicates that the shaft is made from steel, and so values of Poisson's ratio and Young's modulus can be entered into the above equation, which yields the following: (13) It can also be assumed that the bridge circuit will be supplied by a simple 9 V battery, so that the bridge output is related to the input torque by the following expression: (14) It is now evident from equation 14 that the bridge output voltage will be very small with respect to the torque. However, the amplifier circuit designed on page 9 will amplify this output voltage by a factor of approximately 1000, thus allowing the signal to be processed further. In summary", "label": 1 }, { "main_document": "wheat straw, but the field had not been under cultivation, presumably for a few years, so the plant community was not similar to that formed under a crop production system. There was uncertainty whether soil organisms preferentially fed on the fresh, abundant plant species or on the dried wheat straw added to the site, which had a different chemical composition. It is argued that the chemical content of a plant species, such as carbon, nitrogen and polyphenols, is a factor which determines the decomposition rate (Coleman & Crossley, 2003; Adl, 2003). This study only measured the loss of litter weight from the start to the end of the experimental period; hence there are no data available on the chemical decomposition process in detail. There may be some differences in the process between plant species which were not examined in this experiment, in addition to variation in decomposition between plants in mesh bags and those in the soil.
Finally, the size of the blocks may have been too small (Adl, 2003), and the duration of disturbance by tillage may not have been sufficient to modify the soil community. Plots of tillage and control treatments were located adjacent to each other and the size of a plot was 50 The field site had not been under cultivation prior to the experiment and had high diversity in terms of plant community. This might have been inadequate for the aim of the study, namely to differentiate the activities of soil organisms between the tillage treatments. Soil organisms may have migrated from one block to another irrespective of tillage disturbance, eliminating the effects of human management. The data collected on both number of individuals and species diversity were insufficient to measure the difference in invertebrates' responses to tillage and no-tillage treatments by statistical analysis. As shown by the decomposition rate, the most important factor behind the low numbers of invertebrates collected in this experiment may be the cold temperature over winter (Dickinson & Pugh, 1974; Lavelle & Spain, 2001). In fact, temperature had been very low, probably lower than 5 Another cause of sampling failure may be the method of invertebrate extraction with a funnel. A Tullgren funnel with a light bulb as heat source was used in this study, but the efficiency of extraction depends upon the soil type and invertebrate species. For example, Collembola are likely to stay in soil which clumps together (Adl, 2003), i.e. clay soil, and hence remain in the funnel. Among the total of 12 soil cores sampled, most of the soil was clay-type. Although clods were crushed into smaller fragments by hand in the setting-up operation for extraction, this stickiness and clustering of the soil may have made extraction difficult.
In addition, the amount of soil taken from the whole soil sample was very small, 120 ml from approximately 502.4 ml, which reduced the possibility of collecting a satisfactory number of organisms. In fact, when all the soil was hand sorted, many more invertebrates were found, particularly in the control samples with some living grasses attached. As data were lost, it is not persuasive to argue the likely effects of", "label": 0 }, { "main_document": "of your hands in a bowl of hot water, the other in a bowl of cold water. After a few minutes place both hands in a bowl of warm water. To the hand that was in the cold water it will feel hot; to the hand that was in the hot water it will feel cold. This simple experiment shows how hot and cold are purely subjective standpoints which have no place in reality. All the bowls of water have a temperature, Locke is not denying that, but none of them are hot or cold in and of themselves; they require a conscious observer to be given such a description. Another argument for Locke's distinction, which he does not give but which has been suggested by modern philosophers, is that in order to speak about any secondary qualities we must make reference to a perceiver. For example, to talk about the colour green you need to refer to visual systems; green things are things which appear green under the right conditions to creatures with visual systems, and this does not seem to be required when describing primary qualities such as being round or moving toward the earth. However, this very point seems to lead to one objection to Locke's theory: if secondary qualities exist only in the minds of perceivers, how can we talk about a dress appearing black under a green light, but actually being red? Locke would answer that when we talk about the 'actual' colour we are simply referring to that which we perceive when the object is under the conditions that we normally view it in. (e.g.
white light). Berkeley analysed and criticised Locke's work in his own writings. He felt that Locke's distinction between primary and secondary qualities was weak and that all qualities can be given the attributes Locke places on secondary qualities; that is, they all exist only as they are perceived. If Berkeley's criticisms of Locke's theory work, then we are led towards idealism, a very un-commonsensical view of the world which Locke himself certainly wouldn't have endorsed. Berkeley's first main argument was that we have no more reason to believe primary qualities, such as shape and extension, are really present in the objects themselves than we have reason to believe that secondary qualities really are present in objects, because we can't talk about an object's extension without referring to its roughness or brownness. I can't describe a table as being merely extended; to be extended it must have secondary properties as well. Berkeley's second argument runs as follows: referring back to the example of the red dress, the dress's primary qualities, such as its size and shape, could appear different under different conditions. As with his first point, Berkeley is arguing that the distinction Locke has made between primary and secondary qualities is false, and thus what is applied to one should be applied to the other. So Berkeley uses Locke's arguments for the distinction between primary and secondary qualities to his own ends, to support his argument that the entire", "label": 1 }, { "main_document": "to abide by them As a result of these revelations, Johnson declares that the unities 'are always to be sacrificed to the nobler beauties of variety and instruction.' (479) The fixed literary rules of the Neoclassical period are certainly absurd to a modern audience, but this is largely because the literalistic view of art that they held then has since been surpassed by the concept of the aesthetic.
Lacking such a concept, the literary critics of the 17 Thus to set the action of the play in one place and within the cycle of a day would seem a logical solution, since jumping around in time and place would be unnatural. Nevertheless, a century after Corneille suggested grounds for increasing the flexibility of the three unities, Johnson criticised them, understanding that the audience is always aware that what they are watching is a play. Since then, our concept of the aesthetic has expanded even further. Modern playwrights switch between the past, present and future, the real and the imagined, and delight in the abstract. This is because we are fully aware that the aesthetic is most certainly not the same as nature.", "label": 1 }, { "main_document": "In this experiment it was found: Pipe flow will have different flow friction characteristics dependent on whether it is laminar or turbulent. At high Reynolds numbers in turbulent flow, the roughness value of the material begins to seriously affect the friction factor, and hence its relationship with From this roughness value, we can determine the likely material used by comparing it with known values for different materials, e.g. wood, steel, gold. The friction factor of a pipe at low In turbulent flow, additional turbulent stresses slow the fluid down further. These stresses influence the value of Because of these additional shear stresses, the head loss in the pipe is greater as the fluid moves along it, resulting in reduced flow velocity. In laminar flow, the only stress slowing the fluid is the wall shear stress. At low Reynolds numbers, i.e. in the transitional region, the friction factor Turbulent flow takes a lot longer to build up a parabolic velocity profile because the boundary layer becomes turbulent early on. Laminar flow forms a parabolic velocity profile faster.
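The laminar-flow results summarised above (the parabolic Poiseuille profile, the wall shear stress as the only retarding stress, and the low-Reynolds-number friction factor) can be sketched numerically. The fluid properties below are assumed values for water near 20 C, not figures from the experiment, and f = 64/Re is the standard Darcy friction factor for fully developed laminar flow.

```python
# Sketch of the Hagen-Poiseuille relations discussed in the text.
# Fluid properties are ASSUMED (water near 20 C), not experimental values.
MU = 1.0e-3    # dynamic viscosity, Pa s
RHO = 998.0    # density, kg/m^3

def velocity_profile(r, R, dpdx):
    """Parabolic profile u(r) = (-dp/dx)(R^2 - r^2) / (4 mu):
    zero at the wall (no-slip), maximum on the centreline."""
    return (-dpdx) * (R**2 - r**2) / (4 * MU)

def mean_speed(R, dpdx):
    """For a circular pipe the cross-section average is half the
    centreline (maximum) speed."""
    return velocity_profile(0.0, R, dpdx) / 2

def laminar_friction_factor(u_mean, D):
    """Darcy friction factor f = 64/Re for fully developed laminar flow."""
    Re = RHO * u_mean * D / MU
    return 64.0 / Re

# Example: 10 mm diameter pipe with a 10 Pa/m pressure drop (laminar regime).
R, dpdx = 0.005, -10.0
u_bar = mean_speed(R, dpdx)
f = laminar_friction_factor(u_bar, 2 * R)
print(f"mean speed = {u_bar:.5f} m/s, f = {f:.4f}")
```

Plotting experimentally obtained (Re, f) pairs against this 64/Re line is exactly the Moody-chart comparison the experiment describes.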
Building up the velocity profile creates a positive pressure gradient along the pipe, as the fluid close to the pipe wall slows down, leaving positive pressure behind it. This experiment investigates the frictional forces inherent in laminar and turbulent pipe flow. By measuring the pressure drop and flow rate through a pipe, an estimate of the coefficient of friction (friction factor) can be obtained. Two different flow situations are studied: laminar flow and turbulent flow. The experimentally obtained values of the coefficient of friction are compared with established results by plotting them on the Moody chart. Flow in a pipe is known as Poiseuille flow (Hagen-Poiseuille in the interest of historical accuracy). Figure 1 Let's assume that the fluid is incompressible. It is known that the liquid immediately in contact with the wall of the pipe is stationary (the no-slip condition), and that velocity This function of This is independent of flow direction distance, We consider the shaded element The viscous forces on the faces of this element vary because of the different velocity gradients (one face is closer to the wall than the other) and areas (dependent on Therefore, the force on one face is FIG 2.4, P11, Physical Fluid Dynamics - Tritton where The net viscous force on the element is therefore The pressure on one end of the element is and the net pressure force is The partial derivative may be replaced with a total derivative Also, as the velocity profile remains unchanged as we move down the pipe, the pressure gradient (force per unit volume) must be independent of Therefore where As we move down the pipe, the momentum of the element Hence, the total force acting must be zero; from (2) and (3): Combining (4) and (5) obtains Integrating gives To find This is at the wall, Therefore the velocity profile is a parabola with a maximum speed of (7) The average speed of flow, or mass per unit time, passing through the pipe is", "label": 1 }, { "main_document":
"chapter ends with them sharing a close bond simply through their shared gender 'the thought that inside that tiny body was a womb like hers' (139). Alice Bell fights the welfare workers to the end, refusing to go into a care home and dies, sadly not in her own home, but at least on her own terms 'she wanted to die in her own home. And if that was no longer possible...she would not be here waiting for them when they came' (260). Muriel Scaife's husband dies after a horrific battle with disease but his death brings her closer to her son 'The she held out her arms and her started to cry too... He put an arm round his mother's shoulders. And she started to dry her eyes' (176). Finally, although Iris King murders her daughter's unborn baby, she finds comfort in the form of her other grandchild 'She nuzzled into his chest...and the warmth consoled her' (220). Barker, Pat However this does not apply to every character. The end of Joanne Wilson's chapter holds no hope 'I wish I didn't have to go' (106). Pregnant and unmarried, with a boyfriend who reacts to the news by having violent sex with her 'she was afraid for the baby... he was trying to screw it out of her' (100) Joanne is trapped with no real alternative offered to her. She goes on to marry Ken but there is no sense that this is a happy occasion. Her friendship with Joss is a positive in her life but it cannot get her out of the situation she is in. Similarly Blonde Dinah's chapter is in fact more about her elderly client George Harrison than it is about her. She is defined by her job as a prostitute and although George does attempt to depict her in a more humanistic and positive light 'He listened to her talk... the cracked and seamed face lit up... as she talked about the past, about the people she had known' (226) we only hear her speak as reported by George. She is the only woman in the book who does not have a voice of her own, it has been appropriated by her clients. 
This hardly makes for exhilarating reading. There is an argument for a sense of exhilaration through the down-to-earth, realistic style of Barker's writing. The author herself claims that the characters are based on women she knew when she was growing up, and the blunt northern dialogue is certainly naturalistic: 'You mucky bloody sod' (2); 'He could see she was a bairn' (37). The disclaimer at the start of the novel is clearly ironic, preparing the reader for a realist text. However, this is not a naturalistic or realist novel in the classical sense. The narrative is highly symbolic, each woman representing a different aspect of the life cycle of one woman: coming to terms with sexuality (for Kelly this comes violently and far too early), pregnancy, post-natal depression, marriage, motherhood, loss of loved ones, abortion and death. The women", "label": 1 }, { "main_document": "their own MNCs, while the liberalisation of markets and privatisation broke down communism. In general, however, MNC subsidiary motivations can be seen as capitalist movements due to market imperfections. The drive is increasingly for global profits as MNCs are in search of more revenue (the market seekers) and lower costs (the resource seekers). In conclusion, differences among MNCs' characteristics and dynamic environmental conditions mirror the variety of reasons why firms set up foreign subsidiaries. Nevertheless, a strong ability and desire of an MNC to internalise its possessed O and L advantages, matched with market imperfections and high uncertainty, creates a complete atmosphere for an MNC to set up a foreign subsidiary.", "label": 1 }, { "main_document": "be located at multiple locations around a village, raising concerns about labour and time efficiency, and large areas of potentially cultivatable land were wasted because of paths and boundaries separating people's land (Chen, 1999).
The way land has been managed and cultivated in the past, which was shaped strongly by policy, is not the only factor affecting China's present problem regarding food security. As has already been mentioned, the current state of China's water supply, chiefly in two of the country's largest rivers, has recently been described as unsafe for human contact. The main cause of this sorry state of affairs is again the rapid industrialisation occurring in the country. Large investment in industry is not being met with the infrastructure and services required to process waste or emissions. A figure to highlight the severity of the problem comes from the World Bank (2001): 23.4 billion tonnes of sewage and industrial wastewater were dumped into the Yangtze in the year 2000, which was 11% more than in the preceding year. Organic and inorganic pollutants found in both the Yellow and the Yangtze include human excreta, industrial chemicals, heavy metals, cyanide and solvents, most of which originate from paper, steel, silk and chemical factories. Agriculture is also highlighted as being a contributor to this pollution, and in some places sediment run-off due to erosion was causing problems, both in the rivers and on cultivatable land (World Bank, 2004). This is not only a problem in itself, but an ongoing problem with regard to agriculture, as water is required for irrigation and obviously the use of contaminated water in this instance would not be allowed. The following people are involved, would play a role or would be affected by plans to address the aforementioned problems of food security in China: There are many ways and means of instigating change to fulfil an objective, particularly one as big as achieving a sustained level of food security.
One successful 'bottom-up' project that was implemented in the Loess Plateau with help from the World Bank aimed to improve the potential for agricultural production, reduce erosion and equip farmers with knowledge and a long-term strategy for management of the area. The land use of the small watersheds in the area prior to the project was as follows: uncultivated wasteland (40%); cropland (40%), mostly on low-productivity slopeland; trees and shrubs (10%); gullies (5%); and roads, villages, etc. (5%). To improve this situation the project terraced 90,500 ha, afforested 90,900 ha, planted shrubs on 136,000 ha, planted 26,700 ha for timber production, planted 30,890 ha of orchards, re-established 100,140 ha of grasslands, irrigated 7,100 ha and installed sediment control dams. The World Bank played an important role in the project preparation and implementation; the total cost was $250 million, and the cost per hectare was about $160. The objectives of the project were sustainable and coordinated social, economic, resource and environmental development of small watersheds, and these proved successful. It was aimed at a local scale and it proved that land conservation is compatible with sustainable and productive agriculture and", "label": 1 }, { "main_document": "present is one of prosperity in the USA, UK and Japan, the three largest markets for Yahoo. This means that marketers can expand their marketing mixes to take advantage of high consumer buying power. However, constant scanning of the marketing environment is necessary as these conditions could change at any point in time. In times of economic prosperity, the level of discretionary income is high. Yahoo's three largest advertising markets are entertainment, cars, and pharmaceuticals. Consumers spend on these three markets with discretionary income, which is generally high in times of prosperity.
However, should the economic climate change to a recession, the marketing mix would need to be adjusted so that the Product variable focuses on the functions buyers want and the Promotion variable focuses on the value and utility of the products. The competitive structure in which Yahoo operates is closest to an oligopoly. Yahoo needs to identify unique marketing mixes in order to carve out its market share. The problem with using Price as part of its differential marketing strategy is that competitors will often match or beat the price. Differential product features can be introduced to make the products and services distinctive; for example, in the US, Yahoo's broadband service includes free subscription to its music service, LAUNCHcast Plus. Distinguishing promotional methods can be used; for example, Yahoo is the official internet partner of the FIFA World Cup 2006. When creating marketing mix programmes, the marketing function's recommendations must be consistent with the corporate goals and with colleagues' views, to ensure internal take-up. In this way the internal business itself affects the marketing mix. These forces include anything prone to altering the business's receipt of its required supplies. A break in supplies could affect marketing mix variables such as distribution of the product, amongst others. Yahoo's principal suppliers are LAUNCH (for music), Overture (for search), BT in the UK and SBC in the USA (for broadband), and its many advertising suppliers. These suppliers are critical to the company's operations and so continuous scanning of the marketing environment is crucial to ensure the marketing mix variables remain competitive. Consumers and business customers should be analysed and a marketing mix developed to satisfy these customers' requirements. "What do customers want?" is central to the marketing concept and to the formation of a suitable marketing mix.
Marketers must strive to satisfy their target customers in a manner that differentiates their product, brand and overall proposition from competing companies' marketing mixes. In the case of Yahoo, its main competitor across the full range of its services is AOL. However, it is important that Yahoo focuses its attention not only on like-for-like rivals, but also on erecting entry barriers to substitutes or new entrants. If it ignores these, then it may face similar problems to Coke and Pepsi who, by focusing their attention on each other, allowed new entrants to enter the market for soft drinks. Substitutes exist for individual Yahoo services, e.g. iTunes (for music) and many internet service providers (for broadband), and new entrants are continuously coming in (e.g. Pandora,", "label": 1 }, { "main_document": "The most extensive population study of the early modern period was carried out by E.A. Wrigley and R.S. Schofield, In very general terms these criticisms focus upon the following main areas. The sample of four hundred and four parish registers used to estimate national population figures makes up only four percent of the total number of parishes in early modern England. In addition, the sample includes too many large parishes (usually Northern areas) and too few smaller parishes (usually Eastern areas). London is also underrepresented in the study due to its lack of surviving parish registers. Even good registers, however, were inaccurate for certain periods (such as during the civil war), and thus only for the year 1662 were all the registers fit to use. E.A. Wrigley and R.S. Schofield, Wrigley and Schofield, Wrigley and Schofield, Wrigley and Schofield, John Hatcher, 'Understanding the Population History of England 1450-1750', Despite such criticism, the information Wrigley and Schofield provide does appear a plausible reading of the demographic trends of early modern England.
After all, the authors are aware that methodological shortcomings exist; this work does not pretend to be concrete evidence but merely a likely estimation. Furthermore, the huge financial costs and time commitments involved in this kind of research make it unlikely that another work of this nature will be undertaken. These are the best results we have, and particularly when used in conjunction with localised studies as well as studies of social and cultural factors, this is very revealing research. In this essay I am going to argue from a standpoint assuming that the data compiled by Wrigley and Schofield are a realistic estimation of population trends. However, I will argue that an acceptance of the population trends presented by the Cambridge Group does not necessitate an acceptance of their I will argue that fertility I will argue that the essential inter-linkage of these two factors makes any either/or demographical explanation implausible. Instead I will suggest that a model whereby Wrigley and Schofield, The demographic trends that Wrigley and Schofield have mapped out for early modern England begin with growth between 1541 and 1661; this growth was generally steady, although it slowed during the 1550s and became rapid in the 1570s and 1580s. Between 1656 and 1686 the population decreased in size before stagnating and slightly recovering between 1686 and 1750. From 1750 onwards the vigorous growth of earlier periods was resumed, although Elizabethan growth rates were not reached until the 1790s. Thus there were two periods of growth divided by a hundred years or so, from the mid seventeenth century to the mid eighteenth century, when population numbers stagnated. As I have suggested, this basic information seems both plausible and realistic; however, Wrigley and Schofield's interpretation of this information, an attempt to understand what caused this rapid growth, check and decline followed by renewed growth, seems less plausible.
In other words, was it fertility or mortality that was the definitive factor in determining these population trends? Roger Schofield, 'The Impact of Scarcity and Plenty on Population Change in England 1541-1871',", "label": 1 }, { "main_document": "The interpretation of the Bible has been at the centre of political and theological debates throughout the course of history, and remains the cause of various social and cultural tensions in the present day. Probably the most important period in the development of biblical interpretation was that of the sixteenth-century European Reformation. Stimulated by the works of Erasmus and Christian Humanism, the Reformation "sought reform of the universal Catholic Church" Reformers highlighted errors and forgeries within the Catholic faith, and more importantly asserted the intellectual, and ultimately spiritual, independence of the laity. One of the most notable examples is King Henry VIII, who exploited the fractious state of Christianity during the Reformation, manipulating scripture in order to enhance his power. In 1531 the convocation of England accepted Henry VIII as Head of the Church in England 'as far as the law of God allows', enabling the King to grant his own right to divorce. This in turn led to the dissolution of the monasteries, which not only suppressed some of the most vociferous opponents of Henry's most recent legislation, but resulted in the crown making a great financial profit. "Reformation" Oxford Reference Online. Oxford University Press Dr. J. Grenfell-Hill, Lecture on the Reformation (Harpenden: St. George's School, 2001) The oppression that results from such abuse of power and manipulation of Biblical texts and ideas has been translated into literature through the genre of dystopic fiction. The term However, the term Biblical themes have been increasingly drawn upon in order to address contemporary issues in modern literature.
Many authors return to what Jasper calls the "late twentieth century obsession with the apocalyptic", the literature of which "lends itself especially to the genre of science fiction", a key element in the texts I will refer to in this essay. "Dystopia" The imagined nightmarish societies represented in George Orwell's In this essay I will explore the ways in which problems of interpretation and the abuse of scripture have been translated into dystopic fiction, and the issues that are consequently unveiled. Pimlott, Ben. Introduction Margaret Atwood's The novel is "set in the near future, in a United States which is in the hands of a power-hungry elite who have used their own brand of 'Bible-based' religion as an excuse for the suppression of the majority of the population." In his book Gilead is indeed presented as a society "under a good deal of pressure, demographic and otherwise". However, what I would argue to be perhaps more disturbing than the simple irony of a terrifying society claiming to live " Howells, Coral Ann, Spong, John Shelby, Atwood, Margaret, Clarke, Elizabeth, 'How feminist can a handmaid be? Margaret Atwood's By Barratt, David, Pooley, Roger and Ryken, Leland (Leicester: Apollos, 1995), p. 235 Jasper, David, 'Literary Readings of the Bible: Trends in Modern Criticism', in By Jasper, David & Prickett, Stephen (Oxford: Blackwell Publishing, 1999), p. 46 Daniels, Margaret J. & Bowen, Heather E., 'Feminist Implications of Anti-Leisure', Jasper, David, 'How Can We Read the Bible?', in By Liam Gearon (London: Cassell, 1999), pp. 9-26 As in the Old Testament, "patriarchal authority is", "label": 1 }, { "main_document": "a bison, you wouldn't leave it just because it wasn't on your itinerary. The evidence presented regarding various middle Palaeolithic sites strongly suggests that there were multiple subsistence strategies being used.
Neanderthals were hunting and scavenging in various different ways, some perhaps more dangerous than others, with driving being seen at La Cotte de St. Brelade and specific hunting of large animals at Il'Skaja. This seems to be dependent not on their capabilities but on the landscape they were in and the availability of their resources. They made full use of their region in whichever way was most profitable.", "label": 1 }, { "main_document": "influential characters. The rapport he succeeded in building must have supported him during a time when it was not unheard of for people who adhered to beliefs contradicting religious teaching to be severely persecuted, e.g. Giordano Bruno in 1600. It proved particularly advantageous when Galileo became acquainted with Cardinal Roberto Bellarmine, who was a leading scholar of the church and was partly accountable for Bruno's execution for heresy. Both figures were dedicated Copernicans, but only Galileo escaped the fate of death; though confined to house arrest in 1633, his case was influenced enough by Bellarmine for his punishment to be lenient. Like many concepts rejected by the church, the public teaching of the Copernican theory was forbidden, and this obviously made the extension of the work arduous (arduous maybe, but not impossible). During Galileo's time as Professor of Mathematics in the University of Pisa, he gave additional teaching to those who could afford to pay for the benefits. This assisted his objective of spreading his own work, as during these private lectures he was able to teach scholars his own ideas, rather than the commonplace knowledge of the time. He effectively impressed his ideas upon an influential circle, which raised his stature. The Copernican system was finally sanctioned by the church in 1835, partly on account of support from observations of the solar system enabled by Galileo's development of the telescope.
Existing at a pivotal point in history, Galileo is a famous representative of changes which would most likely have been undertaken by countless other individuals in the time following, were it not for his presence. The arrival of the Renaissance hastened these developments; an example of this is the suggestion that a 'renaissance style of art that privileged realism contributed profoundly to Galileo's ability to imagine valleys and mountains on the moon when in fact all he could see in his telescope were shadows' It is essentially impossible to determine the 'first scientist' in the strictest sense, since there is no hard evidence to prove who initially used the all-important method we would now classify as 'experiment'. It is reasonable to confer this title instead on the person who first utilised this skill to the advantage of progress, and considering his tremendous achievements as a pioneer of the new science, Galileo does seem worthy of this honour. Galileo's development of the telescope was able to shed light on the structure of the universe and gave support to the famous Copernican theory, and there is no argument about the immense advancement that this bestowed on science. In addition, he is credited with a wide range of important discoveries, inventions and hypotheses, including his work on pendulums and theories of motion; mechanics; the invention of the first thermometer; and his numerous contributions to the amassing of knowledge in astronomy. On the matter of Gilbert's influence over the experimental method of Galileo, it is debatable as to whether However this is, in essence, the expected progression in light of the topic and the claim that science is built", "label": 1 }, { "main_document": "the treatments on the results only with observation. However, there seemed to be a certain difference between the tillage and no-tillage treatments in the hand-sorted samples.
The number of individual organisms was higher in control samples than in tilled ones in many cases. One possible explanation for this result may be the presence of re-establishing grass species in the control samples, which formed complex rooting systems and provided a variety of food resources that increased the assemblage of fungi, bacteria and invertebrates (Titi, 2003). Hence, there might have been some correlation between autumn tillage and invertebrate diversity in the following spring. The undisturbed soil may have had a higher probability of containing a well-preserved seed bank, which would lead to the establishment of a diverse plant community in early spring, whilst the ploughed soil may have had a reduced possibility of such recovery within a short period of time (Andrade 2003). Furthermore, the block design and the long-term management of the site prior to the experiment may have caused inaccuracy in this study. As already discussed, plot size was 50 The field may have needed several years of management practice to affect biological community structure in the soil food web. However, one study argued that the reduced predatory arthropod population recovered to the same level as found in the no-tillage treatment within a few months after the tillage treatment (Stinner, 1986). This was probably due to the high dispersibility of those species, but the reason for recolonisation of the cultivated area was not certain. There seemed to be variation in the responses of invertebrate species to disturbance and to changes in habitat (Wardle, 1995), but it may be important to consider the duration and intensity of tillage inputs in the long term in order to examine the modification of the soil food web. 
In spite of the insufficient amount of data collected in this study, there is an increasing number of studies investigating the correlation of the responses of the invertebrate community, or of particular species or functional groups, with agricultural management such as tillage, crop species and crop rotation. It would be possible to estimate some probable outcomes from this study by referring to other available research. Many findings imply the difficulty of estimating the effects of tillage on species number and diversity due to the high variability in both, but there were some notable responses of particular invertebrate species which may be induced by this management. Periodic disturbance by tillage can have an impact on the development of the earthworm population and its life cycle (Edwards & Lofty, 1982). Increased interspecific competition in the soil, reduction of food resources, and increased predation risk when earthworms are brought to the soil surface by the turning over of soil may influence the relative proportions of the earthworm population. Some studies suggested the importance of abiotic factors affecting this species, such as soil water content and temperature; 10 to 20 Physical damage, i.e. dismemberment, seemed not to be significant to most species of earthworms, but many studies support the preference for a no-tillage system by those measurements (Edwards & Shuster, 2003). Much smaller soil organisms, the microfauna, showed different responses", "label": 0 }, { "main_document": "The advances in medical technology and therapies in recent years have been rapid and have contributed to the increase in average life expectancy. There has also been an overall drop in mortality and morbidity rates within developed countries. On initial examination of this data, it would seem apparent that this trend is indeed advantageous to us and free of any dilemma. 
However, with further thought it has been shown that simply possessing the means to preserve life does not necessarily offer the patient the best option. It is quite possible now to keep a brain-stem-dead patient alive for many years, but this is neither beneficial nor humane for the patient concerned. The clinician, in this scenario, is prolonging life just because he has the capacity to and not because it is in the patient's best interests. Ethical arguments have therefore brought into contention the role of the clinician in these scenarios: are they prolonging life or are they prolonging the process of dying? For some people 'life' is seen as intrinsically good and valuable, and they feel it should be preserved at all costs. But for others the quality of life takes precedence when trying to determine its value. Without quality, life loses its value, and to preserve life over suffering does not seem worthwhile. As doctors we need to observe this assessment of 'quality', and when important decisions are made concerning life, the psychological, spiritual and emotional aspects of a patient's life need to be considered. When administering medications, it needs to be assessed whether the burden of treatment is unacceptably high for the patient and whether extending life would be in the patient's best interest. Furthermore, would the treatment offered provide a significant improvement or amelioration of the disease process? When making an informed decision, these questions have to be answered. Recent events such as the 'Diane Pretty' case have shown that these questions are difficult to answer. They have brought to the forefront the argument over euthanasia. The word 'euthanasia', derived from the Greek, means 'good death'. Euthanasia is performed either by undertaking acts that directly bring about death or by failing to prevent death. 
The distinction between these creates two subgroups of euthanasia; the former is classed as active euthanasia and the latter as passive euthanasia. Draper, in 1998, defined euthanasia using three key points. He defined it as 'death resulting from the intention of a Within this definition the motive is set, and it is this motive that differentiates euthanasia from murder or manslaughter. In events of 'physician-assisted suicide', the patient kills himself/herself using methods provided by the doctor. This is often confused with euthanasia, but it is emphatically not, as in this case the doctor did not do the killing. There are other scenarios where practices that involve ending a patient's life may be classed as euthanasia, for example withdrawing or withholding treatment. If a patient refuses life-prolonging therapy, and their decision is voluntary, informed and made with a competent mind and free of any doctor coercion, then", "label": 0 }, { "main_document": "about us having neural networks that were developed for use in a very different age and environment than the one we live in today. They also make the important criticism that evolutionary psychology ignores culture; however, this can be countered by the claim that humans have gained the capacity for culture through evolution, so studying evolutionary processes should teach us about culture (Cosmides & Tooby 1997). However this is unsatisfactory, as Gould (2001) points out that cultural change is directional and rapid, and happens through the joining of different cultural lineages, which is very different to evolution. History's unpredictability must also be taken into account, meaning that not everything can be explained by evolution; we need to incorporate other avenues of research. Smith (2001) criticises evolutionary psychology's reliance on the computational model of the mind, which has now largely been superseded by the connectionist model. 
Smith also makes the important point that the process of reverse-engineering adaptations is extremely flawed, as we have no good way of knowing what the environment our ancestors evolved in was like. Another criticism of evolutionary psychology's simplistic theorising about the conditions in which our ancestors were evolving comes from developmental systems theory (Oyama et al 2001), which states that organisms do not simply encounter problems in their environment and develop adaptations, as advocates of evolutionary psychology would have you believe; rather, evolution is co-constructed by the interaction between the organism and its environment, particularly in the case of humans, who took control over nature in a more significant way than other animals. Therefore there is no easy way to guess or work out the conditions our ancestors met, and therefore we cannot hypothesise about the kinds of adaptations that might have arisen, which is a massive blow for evolutionary psychologists. It is clear that evolutionary psychology has a lot of problems, and particularly comes unstuck when it tries to explain anything specific that isn't directly biological. The fact that there may have been evolutionary progress between the Pleistocene and now needs to be addressed; in fact evolution is a process with no end bar complete extinction, so it is not useful to talk about evolution having happened or not happened; rather, the pertinent issue is how much change there has been. The point that we ought not merely to speculate about the conditions in which our ancestors were evolving is also important. However, despite these weaknesses, evolutionary psychology can be robust enough to make an important contribution, especially if it doesn't stray out of its depth into the murky cultural waters. 
Archer's (2001) model of hypothesis generation is important and does come up with viable research in areas that conventional approaches wouldn't, such as research revealing differential mate guarding between the genders (Buss 2001). To make a strong case that such a phenomenon is due to evolutionary processes, it should be a case where there are no obvious societal or cultural explanations for the behaviour - as in Thornhill and Palmer's (2000) explanation of rape, which ignored viable societal explanations (cited in Rose &", "label": 1 }, { "main_document": "therefore generalist about pro tanto reasons. This means that a pro tanto reason cannot count in favour of an action in one case and count against, or not at all, in a different case. Once a feature counts in favour, it always, in any situation, counts in favour of any action. Ross believes that there are at least five pro tanto moral reasons. Consequently, what we have overall moral reason to do will always be some sort of function of these five pro tanto moral reasons. These are: the duty of fidelity, the duty of gratitude, the duty of reparation, the duty of beneficence and the duty of non-malevolence. Jonathan Dancy, "The Particularist's Progress", in Hooker and Little (eds.), 2 Particularism, in contrast, holds a different view of pro tanto reasons. Particularists stress that pro tanto reasons have essentially variable relevance, i.e. they are completely dependent on the circumstances of the situation, and a pro tanto reason can count in favour in one case and against in a different one. This doctrine is called the holism of reasons. Therefore, no principle can determine or capture how a feature counts in a case. Ibid., pg. 2 The first particularist objection addresses the idea that pro tanto reasons can count only in one way, i.e. either only in favour of or only against an action. This argument is essentially based on examples where moral reasons change polarity. 
In addition, the property of producing pleasure is normally a pro tanto reason in favour of an action. However, where an act would produce pleasure for a sadist, it no longer counts in favour of the action; quite the opposite, the pro tanto reason changes polarity and counts against the action. Dancy mentions that one of the reasons against having public hangings would be that they produce pleasure for the people who watch. Another example where the traditional polarity of a pro tanto reason can switch is the property of keeping promises. Consider an agent who makes a promise to break his next five promises, or to kill an innocent child. All these promises are essentially immoral and count against an action which would uphold them. This, however, is in stark contrast with Ross's deontology, where a pro tanto reason can have only one polarity. It seems that at least in some examples Ross's deontology is wrong to rely on the principles, as they would not correspond to what we would normally consider moral. Now that we have established that there are examples where the polarity of pro tanto moral reasons changes, we can move on to the second argument, where Dancy tries to prove that holism of reason This argument rests on two premises. The first is that holism of reasons is true / valid for non-moral reasons, i.e. epistemic and practical reasons. For instance, in an ordinary situation we have good reason to trust what our senses, such as perception or taste, tell us. However, if we are under the influence of psychedelic drugs, the fact that we have a certain perception is no longer a reason for believing that the world is", "label": 0 }, { "main_document": "Promoting competition is ultimately the most important responsibility delegated to competition authorities. But how exactly is this achieved? Competition authorities are equipped with tools to combat anti-competitive practices and behaviour; one of these tools is the powers of enforcement. 
For the purpose of this essay, the powers of enforcement are taken to encompass many different aspects, such as the powers of investigation, the power to impose penalties and the power to order the termination of infringements. However, competition authorities cannot apply competition law effectively if the law itself is not bolstered by strict and precise enforcement powers. Therefore this essay will seek to explore how the enforcement powers held by the United Kingdom's national competition authority, the Office of Fair Trading (OFT), have developed since the implementation of the Competition Act 1998, Regulation 1/2003/EC and the Enterprise Act 2002. In addition, an analysis of the interrelationship between the European Commission's enforcement powers and Regulation 1/2003/EC will be conducted. Competition law will be an effective mechanism to promote competition if it is backed by stringent powers of enforcement, and provided that competition authorities are confident enough to use them. According to the OFT's website, its enforcement role is to "uproot and deter all forms of anti-competitive behaviour, including cartels and the abuse of market power." This has certainly been made easier with the implementation of Regulation 1/2003/EC, as the European competition rules, namely Articles 81 and 82, can now be directly enforced by the OFT. However, much of the OFT's current enforcement powers derive from the Competition Act 1998, which reflects the provisions embodied in the EC competition rules. This is in addition to newer and more extensive powers of enforcement acquired through the Enterprise Act 2002 and Regulation 1/2003/EC. These provisions have not only served to strengthen the enforcement powers and role of the OFT in the United Kingdom; they have brought it more into line with the powers held by the European Commission. 
Indeed one commentator has stated that \"the primary purpose of the [1998] Act may not be to fine-tune the substantive rules of competition law but to provide effective means of taking action against those breaches of competition law which have in the past escaped enforcement action because of a lack of evidence and enforcement powers available to the competition authorities.\" Thus it is clear that the quest to equip the OFT with more effective and extensive enforcement powers in the sphere of competition law had begun as far back as a decade ago. One of the main enforcement powers established through the Competition Act 1998 was the power to enter business premises without or under a warrant. These new powers, as explained by Furse, have also been acknowledged \"in the IIB case [where] the CAT noted that the Act has endowed 'the [OFT], in the public interest, with wide ranging and draconian powers...\" But the enforcement problem still remains, albeit in a different form, if the OFT is reluctant to exercise its new powers no matter how stringent they may be. Indeed the OFT now holds increased powers of enforcement in its", "label": 1 }, { "main_document": "the fact that a key member of the leadership, Wellington, 'declared in 1828 that he could not see any difference between the Whigs and the Tories', suggests that a distinct party ideology was only to arrive after 1832. Sarah Richardson, 'Whigs and Tories before 1832', 23 February 2004. < There has been a great amount of attention focussed on division lists from the early nineteenth century, as historians attempt to find definitive evidence for the existence of political parties. It is suggested that the long period of stable Tory rule, especially during the years of Liverpool's Premiership, helped to 'establish habits of regular support and discipline'. O'Gorman, Eric J Evans, O'Gorman, However, closer scrutiny suggests that the validity of this claim is questionable. 
Loyalty towards the government does not, especially in this time period, signify party loyalty. Fraser cites Sir T.B. Martin as an example of the continuing deference towards the monarchy that remained in the Commons during this period. Martin was an MP who supported Liverpool, Canning, Goderich, Wellington and Grey, claiming 'My Party is the King, and the persons he may think fit to appoint as his ministers will always have my support while I am in office'. He claims that 'the fact that attendance averaged about a third at...divisions...suggests that members were more prone to vote out of some positive conviction on the merits of the question than from a general feeling of obligation to support a party. Failing any personal conviction or interest they would not bother to turn up'. He concludes 'this was not a Tory Party of much national repute or credibility'. Despite this, many historians who endorse the concept of a Tory Party during these years suggest that there were organisational developments at Westminster implemented in an attempt to cultivate such a following. Peter Fraser, 'Party Voting in the House of Commons', pp.773-4. An integral element of any political party is the organisations and institutions the leadership forms at its centre to ensure its effective functioning. Evans has argued that Liverpool had no choice but to begin this process during the decades preceding the Reform Act, claiming 'they [the Tories] could not keep themselves in office without party organisation' during the 1810s and 1820s. This involved 'the distribution and promise of offices and favours' and is clearly an attempt by the Tories to encourage and reward loyalty. Evans, O'Gorman. A more significant development was that the Tories appeared to overcome a major obstacle to becoming a party that had 'hampered them throughout the period of the 'Talents' ministry'. 
The two appointments in quick succession of Portland and then Perceval quickly overcame this problem, with the party accepting the principle of a government based on a parliamentary majority rather than royal influence. Indeed, the rivalry between these groups was so fierce that a clash over the conduct of the war in 1809 led to Castlereagh and Canning engaging in a pistol duel. Seven ministers including Peel and Wellington resigned in 1827 from Canning's government over", "label": 1 }, { "main_document": "The National Office of Statistics calculated that in 1994 there were 38,159 reported cases of depression in the British population, rising to 64,101 by 1998. James (1997) has also estimated that for every one person fulfilling the DSM requirements for depression another two or three people are borderline cases. What could have made these figures rise so dramatically and in such a small space of time? Despite evidence for a genetic predisposition, a depressive 'gene' cannot explain the huge increase in depression. Could it be that the capitalist democracy of Great Britain is structured in a way that is conducive to depressive illnesses? This may well be the case: British people are exposed to many different forms of advertisement which eventually decrease levels of well-being, because people cannot afford all of the products they are told they must buy; depression results from the faulty thoughts produced by this process. British health care institutions are not set up to deal with the numbers of people reporting depressive symptoms, which means many people will not receive adequate care, creating a constant level of depression amongst certain groups. However, some research has suggested that one way to prevent depression is to take part in religious activities such as prayer. Easterlin (1995) claimed people assess their levels of well-being according to the levels of those around them, so if standards of living increase, so do people's subjective norms. 
This is confirmed by a Japanese study conducted between 1958 and 1987; during this period the average income increased five-fold, and with it the sales of washing machines, refrigerators, televisions and cars, with ownership of these types of goods rising from one per cent of households to sixty per cent. Yet The findings of this study can be transferred to Britain: through multiple media sources people are constantly bombarded with adverts telling them to buy a product which will make them happier; after they buy the product, a new advert is released informing the consumer that everyone now owns something better. The average level of well-being for this person decreases, and in order to raise it they believe they must conform and buy the new product. Happiness through material wealth is supported by the government, who in pre-election campaigns promise to increase wages in order to make people happier. Tony Blair's 2001 election speech told voters how Labour was pursuing ' People are presented with a distorted world view through multiple media sources, which encourage consumerism. James (1997) describes how American television dramas, which have become much more readily available in Britain, depict three-quarters of characters as male, mostly single, middle class and in their 20s to 30s. 40% of characters are employed in a professional or managerial position. Television programmes also provide a greater number of real-life people to compare the self against. Accompanying the programmes are advertisements claiming that if you buy their product you could have the life depicted in one of these programmes. 
These ideas may not work when presented once, but over time, starting in childhood, the British public are told this is", "label": 1 }, { "main_document": "listener is able to do all of these and therefore understand and engage with the speaker; the definition of a good listener in Daft (2002) is someone who "finds areas of interest, is flexible, works hard at listening, and uses thought speed to mentally summarise, weigh, and anticipate what the speaker says". There are barriers to communication that can cause messages to become distorted; they exist in two areas: individual barriers and organisational barriers. In each there are several barriers that have to be overcome; they are shown below: Individual Interpersonal - problems with people's emotions and perceptions Media - using the wrong channel to communicate with employees can lead to confusion and mixed messages. Semantics - this means understanding the meaning of words and how they are used. Different words may mean different things to different people. Inconsistent cues - the verbal and nonverbal communications need to be the same so that confusion does not occur. Organisational Status or power differences - low-level workers may be unwilling to pass up bad news, creating a false image of the business. Also, high-status workers may ignore low-level employees, thinking they have nothing to contribute Different needs and goals - each department will have different needs and goals; the departments need to work together and compromise for the benefit of the business. Lack of formal channels - without formal channels of communication the organisation cannot communicate effectively and therefore cannot operate at its best. Unsuited communication flow - if the wrong communication channels are used then there may be insufficient information provided; similarly, it is a bad use of resources to use media-rich channels for low-level information. 
Poor coordination - departments may become isolated, or management will not know what is happening in each department. This means the organisation is not operating as a whole and is therefore not working at its best. Communication at the farm is very important, as without it work may not be completed or tasks may be carried out the wrong way, and this will all lead to a lack of motivation. As each enterprise is being run by one person, it is important that their views are listened to and that they have input into how the business is run; they are the ones running the enterprise every day, so they know it best. By listening to the workers, the farming techniques that are used can be improved, which will help to increase the profits of the business. At the farm it is best, and probably most practical, to use media-rich channels to communicate: the manager is likely to see the employees every day, so he can talk to them in formal and informal settings about his ideas and the ideas of his workers. The telephone is really the only alternative method on the farm, as the workers will be carrying out practical work most of the time and so will be unable to regularly check emails or read letters or memos. A notice board can be used for less important information, but this will probably need", "label": 1 }, { "main_document": "If one were to ask a person chosen at random the following question: "what does the Solar System look like, and how does it act?", I am sure the answer they would give is: "it has nine planets, all moving around the Sun in separate orbits, each orbit increasing in radius from the Sun", or words to that effect. It can be suggested (although I have no proof) that every person in the world views the Solar System as a Sun-centred system, otherwise known as a 'heliocentric' system. However, many hundreds of years ago, a Solar System which put the Earth at its heart (geocentric) was the preferred model. 
This paper looks at the history of, and the relative science between, the two models of the system in which we live. The structure of this paper will be such that each model is explored individually, including the history (ie the origin) of the system, after which a critical comparison of the two systems will be made. Included in the comparison will be pros and cons, for example why each model works, how it is better than the other (if it actually is), what problems were encountered when formulating the models, and how the problems were overcome. I will start by looking at the geocentric Solar System, as it was the first model to be proposed. Although the first ideas of an ordered universal system came from the ancient Greek philosopher Aristotle, who lived around 384-322 B.C., the first thoroughly developed geocentric model of the Solar System came in about 200 A.D., from Claudius Ptolemy, an Alexandrian Greek. The fundamental idea that the Earth is placed at the centre of the universal system (or the 'cosmos' as it was known in ancient times) came from Aristotle's philosophical views, put forward in his books (titled These views forced the cosmos to be regulated by the concept of place, as opposed to space. In this manner, the cosmos was divided into two specific regions: the earthly region (also known as the sublunary region, ie 'below the moon') and the heavens. The earthly region was one in which things lived in a living and dying world, ie things were born, grew and matured while living, then died after a period of time [ref.1]. This region consisted of substances made up of the four primary elements: earth, water, air and fire, of which earth and water had a natural tendency to fall towards the centre. Moreover, because air and fire did not seem to 'gravitate', and fire rose through the air, the complete model of the Earth looked like this: a sphere of earth, surrounded by a sphere of water, a sphere of air and finally a sphere of fire. 
It was thought that the sphere of fire was the cause of the charred markings on the Moon [ref.2]. In contrast to the sublunary region existed the heavens. This 'place' did not consist of the previous four elements but of a fifth element, the quintessence. This substance was not in any way changeable (ie its properties could", "label": 1 }, { "main_document": "An experiment was conducted to explore the design of a measurement system for measuring low-level force; a kind of cantilever rig called an Analogue Experimental Transducer was introduced in this laboratory. It was connected to a designed amplifier in order to make the output observable. Fifteen sets of measured loading force and output voltage were obtained, which determined the linear range and sensitivity of the force sensor system. The output reading appeared to be suitable for predicting values, whereas there were The errors associated with the output may have been caused by improperly zeroing the system when there was no loading force. An overall equation for output voltage against the loading force was derived and evaluated. In addition, data were collected using the Labview software with different system configurations. Low-level force can be measured with a beam-type load cell composed of a cantilever beam and two or four strain gauges. The cantilever beam is used as the elastic member. The strain gauges serve as the force sensor. The resistance of the strain gauges changes with the strain variation in the beam due to force application; generally, it is measured by a Wheatstone deflection bridge, whose circuit configuration is shown as 1 in Appendix 1. The output from the bridge circuit is normally in millivolts, which has to be amplified to an adequate level. After amplification, an analogue or digitised device is used to display the signal for further processing or display. The general structure of the instrument for force measurement is shown as Fig. 1 in Reference. 
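The measurement chain described above (cantilever strain gauges, Wheatstone bridge, amplifier) can be sketched numerically. In the sketch below, all dimensions, the gauge factor and the amplifier gain are illustrative assumptions rather than the actual values of the laboratory rig:

```python
# Sketch of the beam-type load cell chain: force -> bending strain -> bridge
# voltage -> amplified output. All numeric values are assumed for illustration.

def cantilever_strain(force, arm, width, thickness, youngs_modulus):
    """Surface bending strain at the gauge position of a rectangular cantilever.

    arm is the distance (m) from the gauge position to the point of load
    application; epsilon = 6*F*arm / (E * width * thickness**2).
    """
    return 6.0 * force * arm / (youngs_modulus * width * thickness**2)

def half_bridge_output(strain, gauge_factor, supply_voltage):
    """Wheatstone half-bridge with two active gauges (one in tension, one in
    compression): V_out ~= Vs * G * epsilon / 2 for small strains."""
    return supply_voltage * gauge_factor * strain / 2.0

# Assumed parameters: steel beam, 9 V bridge supply (as in the procedure),
# amplifier gain chosen arbitrarily.
E = 200e9                  # Young's modulus of steel, Pa
width, thickness = 0.025, 0.003  # beam cross-section, m
arm = 0.15                 # gauge-to-load distance, m
G = 2.1                    # typical foil strain-gauge factor
Vs = 9.0                   # bridge supply voltage, V
gain = 1000.0              # amplifier gain

for F in (0.5, 1.0, 2.0):  # loading forces, N
    eps = cantilever_strain(F, arm, width, thickness, E)
    v_amp = gain * half_bridge_output(eps, G, Vs)
    print(f"F = {F:.1f} N -> strain = {eps:.2e}, output = {v_amp * 1000:.1f} mV")
```

Because every stage is linear in the force, the predicted output is a straight line through the origin; the slope, gain * Vs * G * 3 * arm / (E * width * thickness**2) in V/N, is the system sensitivity that the experiment characterises over the linear range.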
In this laboratory, a ready-made cantilever rig with two strain gauges, called an Analogue Experimental Transducer, is selected as a beam-type instrument for force measurement. The cantilever rig was explored through achieving the following four objectives: evaluate the cantilever system, and design and build simple bridge and amplifier circuits using the rig and an experimental bread-board; derive an overall equation for output voltage against the loading force; determine the linear range and sensitivity of the force sensor system; and estimate the maximum loading force allowed for the system. According to the fundamental relationship for a cantilever with strain gauges, the strain produced at the strain gauges in terms of the force is Here the strain The strain gauge factor sets up the relationship between the change in resistance and the strain And the Elastic modulus equation The relationship between the bridge output and the relative resistance change of the strain gauges. Here The apparatus, called the Analogue Experimental Transducer, consisted of a balance potentiometer, screw terminals, two strain gauges and a cantilever beam. At the free end of the beam, a nylon screw was attached for adding weights. A view of the rig is shown in Fig. 2 of Reference. The following procedure was used to obtain the basic experimental data: 1). Measure the geometrical dimensions of the steel cantilever and the location of the strain gauges. 2). Use the existing bridge circuit on the rig, connecting it with the 9V battery as the power supply. 3). Build", "label": 0 }, { "main_document": "In this assignment, research has been done on the types of engines that are usually used for racing. The four types of racing engines that were of interest were the Formula, Rally, Sports and Motorcycle racing engines. The specifications of these engines have been summarized in Appendix A and B so that one can compare their important parameters. 
Appendix A shows all the racing engines, whose individual performance will be discussed in the body of this report. Appendix B is a table which gathers all the information obtained about the four racing engines' requirements, and their overall comparisons are also included in the main text. In this report, we also assessed past and future developments of each of these racing engines. In 1894, the idea of car racing was raised once a series of petrol-fuelled cars had been constructed. The first official race was held in Chicago, Illinois on 2nd November 1895, and racing engines improved dramatically during the 20th century. Research on the different engines usually used in racing was performed and is discussed in this report. The engine used for general automotive purposes is the internal combustion (IC) engine, in which the mechanical power of the vehicle is produced by combustion of fuel within the combustion chamber. Internal combustion engines can have either two strokes or four strokes and can be either spark-ignited or compression-ignited. Gasoline and natural gas can be selected as fuels for an SI engine, and diesel for a CI engine. On the other hand, two major types of IC engine can be identified: the rotary engine and the reciprocating engine. The major representative of rotary engines in the automotive industry is the Wankel engine (Fig 2), which has been the most highly developed rotary engine since the 1970s; such engines were used for their compactness and high power performance. However, the development of the Wankel engine was suspended in most companies due to arising environmental regulations as well as the effect of the oil crisis. The most recent car powered by a Wankel engine was the Mazda RX-7, which was produced in 1999. In the automotive racing industry, only particular specifications were selected and employed, since efficiency could only be improved by such specifications.
Different cylinder configurations such as single, in-line, V-type, W-type, U-type, opposed-cylinder, opposed-piston and radial can be found in IC engines. However, in-line and V-type are the most commonly used configurations in automotive racing engines. Furthermore, the common numbers of valves employed in each cylinder are 2 (1 intake, 1 exhaust), 4 (2 intake, 2 exhaust) and 5 (3 intake, 2 exhaust). Generally 4 valves are employed in racing engines. Compared to normal engines, limits such as peak operating cylinder pressure are pushed up in some racing engines so that a higher performance can be obtained. Besides, the horsepower and fuel economy can be increased by maximizing the cylinder pressure. Although the cylinder pressure can be increased by raising the compression ratio, alternative techniques can also be used, since cylinder pressures can be altered significantly by camshaft selection, carburetion, nitrous and supercharging. Compression pressures can be adjusted drastically by installing supercharging, turbo-charging", "label": 0 }, { "main_document": "decreases, resulting in a fall in the level of employment. In equilibrium all firms pay the same wage above market-clearing, and unemployment, which makes job loss costly, serves as a worker discipline device. Unemployed workers cannot offer to work at lower wages, as it would be in the workers' benefit to shirk at such wages. Firms have knowledge of this, and therefore would not agree to hire workers on such conditions. Weiss (?) The aggregate non-shirking condition is given by: w ≥ e + w̄ + (e/q)(b/u + r). The NSC equation shows that the critical wage rises with the required effort level e. Equally, a higher exogenous quit rate b raises the critical wage. The critical wage holds a negative relationship with the detection probability q. In addition, the equilibrium unemployment rate must be sufficiently large that it pays workers to work rather than shirk.
The following diagram shows the equilibrium conditions. Firms' demand for labour decreases the more costly it is to hire a worker, as the demand for labour is a decreasing function of the wage level. So, the higher the critical wage level required for a worker not to shirk, the higher the unemployment rate that would be produced in the economy as a result, and vice versa. In that sense the unemployment rate and the real wage level hold a positive relationship. In the presence of very high wage levels, employees would value work not only for the high compensation they are receiving, but also because high wages correspond with low levels of employment. On the other hand, as the unemployment level increases so does the penalty associated with being unemployed, as a higher unemployment rate entails longer unemployment spells. The utility of being unemployed under such circumstances would be decreased. In this case, the threat of being fired would be the main incentive for workers not to shirk, making wages play a secondary role as incentives; therefore employers could offer lower wages without tempting workers to shirk. The effort model predicts, in fact, that during periods of high unemployment firms will cut wages, since it is easier to deter shirking when unemployment is high. The longer the expected duration of unemployment, the greater is the punishment associated with being unemployed, and hence the smaller the wage that would be required to induce non-shirking. In this sense, there is a negative relationship between unemployment and wages: the higher the unemployment level, the lower the critical wage level necessary to deter workers from cheating on the job.
The effect of macroeconomic changes on the non-shirking condition, and hence on the wage, supervision and effort levels within the firm, is less clear. As previously mentioned, a higher unemployment rate will typically decrease the probability of an unemployed worker finding a job; therefore, an increase in the unemployment rate would increase the penalty associated with being fired and shift the non-shirking condition downwards. However, an increase in the aggregate unemployment rate also causes workers to start believing that they are more likely to be fired in the future due to exogenous factors. Under such beliefs,", "label": 0 }, { "main_document": "line on the Moody chart. There are errors associated with this experiment. They are all related to the machine that was used and the two measuring devices. The manometer used mercury as the measuring fluid. There were, however, many air bubbles in each of the mercury tubes. This will have made the pressure readings inaccurate to varying degrees, since some tubes had more air bubbles than others. The method for measuring the mass flow rate was also not entirely satisfactory. The fluid was allowed to flow into the measuring chamber and the time it took for a certain mass to accumulate was recorded. The timing was not that accurate, however, as the time recorded for the same mass of fluid to accumulate was not always the same. The effect of these errors needs to be considered. This can be done by considering what each measurement was used for. The aim of the experiment was to calculate the friction factor. This is given by the following equation: The manometer readings were used to calculate the pressure gradient. The mass flow readings were used to calculate the volumetric flow rate, which in turn was used to calculate the mean velocity. We can see that the friction factor depends on the inverse square of the mean velocity. This means that the effects of the errors associated with the mass flow rate measurements are magnified due to the squared term.
We can say that the errors associated with the recording of the mass flow rate are the most significant. We have found that there is a higher resistance to the fluid flow in the turbulent case. This is shown by the fact that the Darcy friction factor is higher for the turbulent flow. The higher friction in turbulent flow results in a steeper pressure gradient, i.e. a greater pressure drop, and also a lower average velocity. Interestingly, the Reynolds number was found to be lower for turbulent flow than it was for laminar flow. This is because the flow was not made turbulent by changing the factors that define the Reynolds number: the diameter of the pipe was the same, and the viscosity and the fluid density remained the same. The only factor that changed was the velocity, the change being caused by an obstruction in the pipe. The fluid was still pumped down the pipe at the same rate. The errors associated with this experiment stem from the manometer and the device for measuring the mass flow rate being of a poor standard. The manometer had lots of air bubbles and the device for measuring mass flow rate was inconsistent. The errors associated with the recording of the mass flow rate were carried through to the calculation of the mean velocity and then magnified by squaring this term to find the friction factor. The results would be more accurate if the manometer were calibrated more accurately by removing the air bubbles from the mercury. A more consistent way of measuring the accumulation of the fluid would help us to get a more accurate value for the mass flow rate. The main", "label": 1 }, { "main_document": "Corporate social responsibility has become a widely used buzzword in the business community for good corporate practices. However, it has also been the focus of debate on its very nature and theoretical justification. One of these criticisms is that corporate social responsibility sounds fine in theory but is unworkable in practice.
This article investigates the validity of this claim, first by briefly looking at the nature of corporate social responsibility. Then, we undertake an in-depth analysis of the issues related to its implementation from a practical perspective, thereby identifying the limits of the concept and the obstacles to its realization. We conclude with an assessment of the workability of corporate social responsibility and argue that the current situation is rather disappointing. It is important to have a clear picture of corporate social responsibility in mind at the outset. Unfortunately, the first and most notable feature of corporate social responsibility is the variety of its definitions. It has been described as \"a tortured concept within the academic literature\"; see P. Godfrey & N. Hatch, \"Researching Corporate social responsibility: An Agenda for the 21st Century\", Journal of Business Ethics (2007) 70:87-98, at 87. There are at least 25 different conceptual definitions of corporate social responsibility within the academic literature. An alternative description is that corporate social responsibility is concerned with \"the relationship between companies and society and in particular, with constraining the adverse impact of corporate activity on individuals and communities as a whole\". In other words, while corporate social responsibility is generally about charity and stewardship from companies, there is no real consensus regarding its definition. A. B. Carroll, \"Corporate social responsibility: Evolution of a Definitional Construct\", Business & Society 38(3), 268-295. Green Paper Promoting a European framework for Corporate social responsibility (18/07/2001), COM(2001) 366, available at L. Whitehouse, \"Corporate social responsibility, Corporate Citizenship and the Global Compact\", [2003] 3 Global Social Policy, 299, at 300. It is also important to bear in mind that the notion of corporate social responsibility is an evolving concept.
The notion of corporate social responsibility evolved over the 20th century. Milton Friedman's famous critique emerged in the 1970s, arguing on economic and moral grounds that there was no place for corporate social responsibility in business. For a history of corporate social responsibility, see A. B. Carroll, \"Corporate social responsibility: Evolution of a Definitional Construct\", Business & Society 38(3), 268-295; see also J. Shestack, \"Corporate social responsibility in a Changing Corporate World\", in Mullerat (ed.), Corporate social responsibility: The Corporate Governance of the 21st Century, 2005; M. Friedman, \"The Social Responsibility of Business is to Increase Its Profits\", The New York Times Magazine, September 13, 1970, at 32. See \"What's Wrong with corporate social responsibility?\", Corporate Watch Report 2006, available at Nevertheless, one has to conceptualize corporate social responsibility as a philosophy for purposes of discussion. Corporate social responsibility is a collective name for many different activities that span a variety of disciplines and professional fields. The one agreed characteristic of corporate social responsibility is that it is voluntary. See P. Godfrey & N. Hatch, \"Researching Corporate social", "label": 0 }, { "main_document": "a result being dismissed and effectively secluded in their marital lives. However, we also have evidence of women who were allowed by their husbands rather more freedom of speech than many other women of fifth-century Athens. In ' Although Ischomachus is discussing how his wife should 'live up to the rules given her' (tr. Bradley in Ferguson and Chisholm. 1978: 142) and that he already had 'women enough at [his] command' (tr. Bradley in Ferguson and Chisholm. 1978: 136), portraying a similar image to that of Callias or Timarchus, he also refers to the relationship between him and his wife in a far more positive way.
He says that once his wife had entered his home he stated to her that household choices would be made as joint decisions and, furthermore, \"[their] house was now common to [them] both, as well as [their] estates; for all that [he] had [he] delivered into her care\" (tr. Bradley in Ferguson and Chisholm. 1978: 136). From what Ischomachus says about his marriage we can conclude that he had more respect for his wife and felt that as a married couple they should be as equal as they could be at that time and share all the property as a couple. As you can see already, there is no way of simplifying all these separate cases. Clearly each one shows that the treatment and position of a woman in fifth-century Athens depended to a great extent on the attitude of her husband. One way to conclude would be simply to say that there were some households headed by considerate and open-minded men and then there were the cases contradictory to this (Lacey. 1968: 153), and this was a large factor in the place and freedom a woman held. Another point to remember, though, is that no matter how liberal a husband may have been, a woman still held no legal rights over marital property and so in that sense still remained secluded. Another source of evidence that can tell historians about the way of life in fifth-century Athens is archaeological evidence, mainly the discovery of decorated pottery that shows scenes of Athenian everyday life. A vase found around 440 BC, now in the Harvard University collection, portrays what the Athenian male citizen may have considered the ideal of how his wife should spend her time whilst he is away from the home. According to Dyfri Williams this pot is a celebration of \"the primary functions of a woman as they are set out by ancient writers: to produce and rear children and to contribute to the self-sufficiency of the household\". (Williams ed. Cameron and Kuhrt.
1993: 94) Whether or not this is just cause to say that women lived lives of seclusion in fifth-century Athens is debatable. On the one hand a historian may argue that the pot presents the Athenian woman in her truest form, isolated in her home and kept away from the rest of the city. However, it can also be said that this is only", "label": 1 }, { "main_document": "gets loyalty from its customers, attracts many potential guests and performs well among its rivals. Therefore, Marriott can gain profits even during the difficult trading conditions of these past several years. With this strong financial support, it can carry out much R&D in order to create and innovate its products and services, such as online booking technology and Wi-Fi wireless Internet access (PR Newswire, 2004), and Marriott is able to select the right locations in light of providing its customers with convenience, which is a critical factor influencing the decision-making of guests. During the field trip in Liverpool, we had a talk which was held in the Marriott Liverpool City Centre Hotel. At that time, I found that this hotel fell a little short of my expectations. There are three weaknesses: the human resources, the physical appearance and the capacity. Firstly, in terms of human resources, it is not difficult to find that the hotel is short of staff. In such a big brand hotel, it is unreasonable to see only a few staff in the lobby. With regard to the physical appearance, the Marriott Liverpool City Centre Hotel is famous for focusing on its layout design, furnishings, decoration and even uniform design. However, it gave us the impression of being old-fashioned. On the aspect of capacity, the Marriott Liverpool City Centre is really small, since it has only 146 rooms (The Mersey Partnership, 2004). This amount is totally unable to accommodate the extra tourists expected during 2008.
Basically, the major opportunity for the Marriott Liverpool City Centre is the event of the European Capital of Culture. As a winning city, the authorities of Liverpool have to rebuild its image to get rid of the negative picture. The regeneration of the city can make it more attractive in order to create a centre of attention for tourists. As a consequence, it can attract many travellers to visit Liverpool, and hence the demand for hotels will increase markedly at that time. It is estimated that the \"08 project\" can attract an extra 1.7 million tourists to Liverpool (Liverpool 08, 2004). However, the Mersey Partnership (2004) recorded that the total number of rooms offered by all kinds of hotels will not be able to accommodate the extra guests if the situation remains unchanged in the near future. According to the Mersey Partnership (2004), the supply of hotels in Liverpool city centre increased significantly in 2002 and 2003 with the opening of 4 new budget hotels, adding a further 448 budget hotel bedrooms in the city centre (representing a 19% increase in total hotel supply, and an 83% increase in budget hotel supply). As at November 2003, the total city centre hotel supply stands at 26 hotels and 2,754 bedrooms, accounting for 40.5% of the total Merseyside hotel supply [Appendix 1]. Moreover, the Mersey Partnership reports that the city centre hotel supply increased by some 75% over the 7 years from 1997 to 2003. Since 1997, 11 new hotels with an additional 1156", "label": 0 }, { "main_document": "R1. The UK current interest rate is 4.75%, and it is considered to be quite high and uncompetitive for the UK manufacturing industry and retailers compared to other countries with lower interest rates. From the Keynesian perspective, the demand for labour is derived from the aggregate demand in the goods market (AD=I+G+C+NX). Diagram 4 shows that the increase in demand for output will require more labour demand.
Wage stickiness and price rigidity imply that there exists the possibility of \"involuntary\" unemployment. In the year to September 2004, UK unemployment fell by 105,000 to 1.39m people. The number unemployed and claiming Jobseeker's Allowance fell by 95,000 to 834,000 in the same period. Both these figures are the lowest since 1975. Through the common approaches of fiscal policy, aggregate demand rises, and hence employment. However, an inappropriate adjustment might run the risk of producing higher voluntary unemployment rather than higher employment. On the monetary policy side, the same analysis applies as in 2.1. Lowering interest rates, expanding the money supply, and making it easier to obtain credit are the ways to stimulate aggregate demand. The resulting higher domestic investment will offer more jobs in the labour market. In the 2004 pre-budget report, the Chancellor announced the implementation of the changes recommended by the Graham Review of the Small Firms' Loan Guarantee by end 2005 to encourage more private investment. Meanwhile, a competitive interest rate also promotes foreign investment into the economy from foreign multinational companies. Inflation refers to the tendency for the price level to rise over a certain period. It is expressed in two forms in the Keynesian model. Once workers adjust their expectations, according to the wage-setting formula W = PeF(v, z), they will bargain for a higher wage and a wage-price spiral may occur, where Y = N (the production function). The rising cost will result in a higher mark-up. UK producer input prices grew considerably from the spring of 2004, due to the effects of rising oil prices. In considering how to alleviate the inflation, the Phillips curve with adaptive expectations suggests there is a short-term trade-off between inflation and unemployment.
So when inflationary pressure arises, the government tends to adopt contractionary fiscal policy by cutting back government spending and raising direct/indirect tax payments. An early example was the Medium-Term Financial Strategy implemented by Mrs Thatcher, which involved planned reductions in public spending over 4 to 5 years. The lower government spending will shift the AD curve (diagram 5) downward, so the excess demand can be eliminated. However, contractionary fiscal policy is unpopular from a political standpoint, since the shift back of the AD curve may cause an economic recession. In May 1997, the Labour government decided to move decisively to establish the independence of the Bank of England. One of the Bank of England's key responsibilities is to conduct monetary policy. The Bank's role is to deliver price stability by setting short-term interest rates. The inflation target is set each year by the Chancellor of the Exchequer (new target: 2%) and the Monetary Policy Committee will try to", "label": 0 }, { "main_document": "was broken down into simplified rectangular sections, where, by applying the equations of equilibrium and compatibility, equations defining the proportion of torsional load in the separate rectangular sections, and hence the shear stress at yield load in each rectangular section, were determined. The maximum shear stress was calculated using the Griffith method, which accounts for the additional fillets and tapered flanges. The predicted yield load for the I-beam section was found to be 700 Nm using the Tresca failure criterion and 808 Nm using von Mises. A spreadsheet was created in order to investigate the effect of changing certain variables used in the calculations. Of interest was the effect that varying the depth of the flanges had on the yield load. With a flange depth of 9 mm (as used in the hand calculations), the yield load for the Tresca failure criterion was shown to be 700 Nm.
Increasing this flange depth to 10 mm raises the yield load to 789.5 Nm, and raising it further to 13 mm produces a yield load of 1246.5 Nm. In contrast, varying the flange taper angle or the diameter of the inscribed circle had minimal effect on the yield load. It could be postulated, therefore, that the most effective way to increase the torsional yield load of an I-section beam would be to increase the depth of the flanges. It should be noted that the theoretical calculations used to determine the yield load of the I-section beam use a much simplified model compared to the actual component. The values for shear stress in the flanges and webs are assumed to occur in the central sections of their respective rectangular sections. For the case of the web this shear stress value is considered to be an accurate representation of the actual shear stresses, since it is in the centre of the longest edge of the rectangle. Referring back to Prandtl's membrane theory, it was shown that the maximum shear stress in a rectangular section is near the ends of the rectangle; the gradient of the ensuing concave membrane is constant along the longest edge of the rectangle and hence the shear stresses are constant there. Figure 5.1 depicts this shear stress distribution. The shear stress values for the flanges are not as clearly defined, since in the centre of the rectangle, where the shear stress value is calculated, the web section joins the flange. This can be illustrated by a plot of the shear stress across this region in the flange taken from the results obtained by the FE analysis, shown in Figure 5.2. This plot shows that the shear stresses are not constant across the centre section of the flange; hence the calculated shear stresses are only a prediction, and the accuracy of the results cannot be relied upon. Figure 5.2 also shows the stress plot along the web, showing the constant stress along most of its length.
The hand calculations assume that warping occurs and therefore contain no restraints on either end of the beam. The practical results and FE analysis restrain the I-section beam at one end, preventing", "label": 1 }, { "main_document": "little symbolic play, suggesting a limitation in their non-verbal intelligence (Leonard, 2000: 120). Similar findings have been reported by Brown et al. (1975, as in Leonard, 2000: 120), who discovered that preschool children with SLI had more difficulty than age-matched controls using objects in a pretend manner. The ability to inter-relate items was also found to be problematic in SLI children, with Udwin and Yule (1983, as in Leonard, 2000: 121) finding age-matched controls to perform better on tasks which aimed to elicit concepts of time and space using a miniature toy set. However, this concept was also adopted by Terrel et al. in 1984 (as in Leonard, 2000: 121) but using a control group who were matched, not by age but by expressive vocabulary ability (i.e. MLU). This study found that those with SLI outperformed the language-matched controls, thus seeming to support the original criteria for SLI. Further investigations followed the notion of age-matched versus language-matched controls in a symbolic play situation, and the research has proven relatively inconclusive on one front. Those children with SLI do demonstrate a diminished performance in comparison to their age-matched peers; however, the language-matched controls have proven to differ greatly, with some studies finding SLI children to have poor performance whilst others show equally good or even better performance than the MLU controls. Despite the inconclusive nature of the language-matched groups, the age-matched groups were found to be significant, with non-verbal deficits seeming to exist within SLI children on symbolic play tasks and the suggestion that poorer play correlates directly with less developed language (Leonard, 2000: 123).
Mental imagery tasks were also found to pose a difficulty for those with SLI, with similar patterns of results regarding age-matched versus language-matched controls appearing (Kamhi, 1981; Camarata et al., 1981). Those with SLI were found to have difficulty in predicting the direction of water in a tilted glass, with many believing that the water would not remain horizontal but parallel to the bottom of the glass regardless of the direction it was being moved in. On an assessment battery of non-verbal tests, Johnston and Ramsted (1983, as in Leonard, 2000: 123) found that mental imagery proved to be the area of greatest deficit, with SLI participants failing to correctly identify shapes they had felt blindly. Tests on conservation and seriation have also been performed on those with SLI and present mixed results. Whilst Siegel et al. (1981, as in Leonard, 2000: 126) report that those with SLI perform lower than their age-matched peers on both measures, Johnston and Ramsted (1983) only found a deficit in seriation tasks, with the SLI children involved performing approximately two years below their chronological age. Kamhi (1981, as in Leonard, 2000: 126), however, comments that the differences found on these measures are not statistically significant, implying that the deficits found in conservation and seriation are not as pronounced as those discussed previously, although some pattern of deficit can be assumed. Hypothesis testing and analogical reasoning tasks have also been found to", "label": 1 }, { "main_document": "industrial debris (Grandjouan These provide evidence for late fourth-century craftsmanship and diverse commercial activity involving distant trade links with other Hellenistic cities (Grandjouan By this account, it would appear that there were many areas of vitality and dynamism in Hellenistic Athens, something which could potentially take the emphasis away from the cultural image of the city.
As exemplified in the preceding discussion, the precise dating and function of many Hellenistic structures are subject to debate. However, most problems pertaining to the reconstruction of Hellenistic Athens are the result of biased approaches and interpretations rather than the nature of the evidence. It appears to be problematic to place Athens in the wider Hellenistic world, since comparisons with major Hellenistic centres reinforce that Athens is not a Hellenistic but a classical city. Therefore, arguments against the decline of the The complicated politico-military events affecting Hellenistic Athens inevitably had a bearing on the built environment, as there is evidence for economising, abandonment or the cessation of building activity, and destructions. The relationship between specific historical events and the urban landscape is not always well understood, given that ancient sources are biased or fragmentary. Thus, the tendency to make various architectural structures 'fit' certain historical contexts could involve circular arguments. The prevailing reconstruction of irreversible decline in the civic traditions of Hellenistic Athens can be questioned. The intended continuity with classical religious traditions served as a vehicle for the persistence of civic spirit. Furthermore, continuity in the Athenian epigraphical habit reflects that Hellenistic Athens had not lost its civic self-confidence but still functioned as a political body. Hellenistic Athens was far from a foreigners' puppet. The various building gifts by external benefactors show that Athens acted as a 'stage' for their self-advertisement. Consequently, there is no reason to equate colonnaded porticos with a 'Hellenisation' of Athens derived from abroad. Besides, these external donations were part of bilateral relations and historically contingent foreign policies followed by the Athenians.
Although Hellenistic Athens was widely valued as an educational and cultural metropolis, the model of a cultural Mecca has been overstated. This is because it has tried to explain the paradoxical 'survival' of Athens despite the loss of political might. The study of Hellenistic Athens is hindered, in part at least, by hardened orthodoxies imbued with negative bias. Despite this, the most important problem area seems to be the lack of archaeological rather than historical research into the dynamics of Hellenistic Athens. There is a need for scholarly accounts that move beyond geographical (horizontal) comparisons with thriving Hellenistic cities and chronological (vertical) comparisons with the classical era. Ancient authors, who have paid excessive tribute to the marvels of classical Athens, appear to have influenced academic scholarship discussing the Hellenistic city of Athens as an 'ideal' of what it ought to have been. Such themes impede an unbiased identification of Hellenistic Athens with its contemporary built environment. The fact that Athens during the Hellenistic period exemplifies few internally-driven public building programmes needs to be objectively addressed and accounted for. This limited building activity can also be regarded as a particularly", "label": 0 }, { "main_document": "nationalistic sentiments in order to ensure stability in the territory; as a result, the education curriculum was more or less \"depoliticized\" (Fairbrother 2003, Fairbrother 2005, Lam 2005, Ma 2004). However, after the handover, the Special Administrative Region government assigned a new mission to the schools by advocating that the promotion of national identity should be incorporated into the local civic education curriculum. The apolitical youth produced by colonial education is undesirable; but the new \"political mission\" assigned to the educators after the handover is no less problematic.
The situation has been thoroughly discussed in a substantial body of research, both before and after 1997. In general, civic education was criticized as "apolitical" both before and after the handover; democratic values were neglected (Tse 1997) and students were continuously taught to be "economic animals" (Lam 2005). More importantly, the related post-colonial studies revealed the skeptical (or even repulsive) attitude of local politicians and educators towards the new "political role" of the schools. Arguments were raised over whether "loving China" equals "loving the communist party" (Fairbrother 2005), and whether the image of China should remain homogeneous and positive in nationalistic education (Morris, Kan & Morris 2000). A survey conducted after the handover showed that the majority of local secondary schools rejected the notion of "totalitarian nationalism", which demanded uncritical devotion to the state and the leadership of the ruling party (Leung & Print 2002). The schools expressed worries about nationalistic education turning into an irrational patriotic education. Hong Kong people's repulsive attitude towards the political situation in China can be explained by a number of historical and social factors; discussion of these, however, would be beyond the scope of the present research. For reference, see Eric Ma on "Top-down Patriotism and Bottom-up Nationalization in Hong Kong" (2004), and Anthony Fung on "Postcolonial Hong Kong Identity: Hybridising the Local and the National" (2004). People are generally bound to their homelands by either political or cultural will (Barnard 1988, Wiborg 2000). If we take these as the two "approaches" to constructing national identity, the above research findings have already shown that the former (political) is not feasible in Hong Kong. On the other hand, the construction of national identity through cultural recognition is by no means easy. 
The Chinese literature and Chinese history curricula in local secondary schools have come under severe attack as rigid and irrelevant to today's youngsters, and the number of schools offering these two subjects is continuously dropping (Hong Kong Economic Times 2005). How can we cultivate a sense of belonging through a culture which students find detached and boring? My discussion seems to imply a dead end for the path of nationalistic education. However, my personal experience suggested quite the opposite, and it gives new significance to the present research. When I was a reporter in the education section of Mingpao To my surprise, it was arranged for students to visit at least one manufacturing plant or giant enterprise every day during the visit. The trip itself was more like a business tour which
This is particularly evident in letter 126, in which Madame de Rosemonde praises la présidente Whilst it is possible that this letter would have prevented the liaison between them, it is received by la présidente The letter is therefore denied the ability to facilitate a change in la présidente's actions and is used by Laclos as a symbol of impotence and dramatic irony. Laclos, C., Ibid., p. 304 One of the reasons why Merteuil and Valmont behave as they do towards other people is that they are jealous of each other's exploits, yet 'seek each other's admiration and plaudits.' Letter 10 is one such example, in which la marquise includes a lengthy description of her satisfaction with her new lover Belleroche: ' This rivalry of lovers continues throughout the novel and characterises their relationship. It is all the more ironic that they 'consider each other as the only audience whose applause is worth having, so that they depend on each other for praise which they are both equally reluctant to give' Davies, S., Laclos, C., op. cit., pp. 55-56 Thody, P., It appears a safe assumption that Valmont is in love with la marquise, and throughout the novel he constantly tries to impress and seduce her. The irony lies in the fact that Valmont, in order to be rewarded with one night with la marquise, attempts to seduce la présidente The resulting confusion for Valmont paves the way for his downfall. The scapegrace's basic tenet is that falling in love leads to a loss of control, a languor to which he will never succumb; he and Merteuil are ' Valmont projette de lui-même ' Therrien, M. B., It becomes almost immediately obvious that Laclos is writing not entirely about true love but about the dangers of seduction as well. It could be contested that the sentient characters, ironically, are bound not merely by love but almost by duty. La présidente In letter 77, Valmont skilfully uses a reversal of reality to manipulate the truth and also la présidente He employs such verbs as ' Laclos, C., op. cit., pp. 
201-02 Ibid. There are also smaller yet equally", "label": 1 }, { "main_document": "In contrast, a share repurchase gives investors an own option whether to sell or not depend on the timing of cash needed, the tax condition and the current performance of the company etc. Dittmar (2002) investigates the reasons behind decisions by firms to repurchase stock over 1977-1996, and she concludes a principal severed as a yardstick in judging the rationale of a payback program: a corporation should repurchase shares if the management believes their stocks are undervalued in the financial market meanwhile there are no other better investment opportunities available. If the shares are repurchased near the undervalued price, the company will earn a rate of return greater than market required cost of equity. According to the theory, managers should have more information about the true value and the prospective of the firms than outsiders. Therefore, the share repurchases send a favorable signal to the market as the management perceives the current share price is lower than the intrinsic value. Vermaelen (1981) claims abnormal price increasing 3.37% in the US after the announcement of a repurchase plan by increasing its return on equity and earning per share. In addition, repurchasing details also affect the significance of the signaling power: The higher the percentage of stocks companies buyback, the greater management's conviction; The size of the premiums reflect the management's expectation the scale stocks are undervalued; The inside selling creates extra motivations for employers to work harder when the company's performance closely associates with their wealth. Guay and Harford (2000) research discovers the price reaction to positive dividends increases is more positive than the reaction to repurchases. Some latest studies indicate a number of repurchase program are announced with the attention of misleading investors. 
A few companies that announced buyback plans were merely trying to boost the share price in the short term. Moreover, similar to a dividend payment, a big repurchase announcement may suggest the firm has exhausted profitable investment opportunities, because a payout is rational only if the available investments are expected to yield a rate of return lower than that of a share repurchase. Brennan and Thakor (1990) argue that the signaling effect does not determine the choice between repurchases and dividends. They also note that if there is a fixed cost to gathering information, shareholders with large positions have more incentive to become informed than those with small positions. The uninformed investors run the risk of being expropriated by others with more information; hence stock repurchases may act as a redistribution mechanism from small shareholders to the larger, better-informed ones. The manager repurchases shares to block a takeover only if the cost of doing so is not too high. Since this cost is inversely related to the value of the firm under his management, a repurchase signals that the value of the stock is high, blocking a takeover. While a repurchase increases the expected value of the shares, it also makes the stock riskier. The model also indicates that there are too few takeovers for efficiency. Share repurchases may contribute to more responsible use of free cash flow. Firstly, managers who return excess cash through repurchase programs can be
Lawrence Beaston, "Talking to a Silent God: Donne's Holy Sonnets and the Via Negativa" (Marquette University, 1999) < John Donne, Patrides (London: Dent, 1985), p. 440. In Calvinist terms, Donne has answered his own question within the framework of this interrogative by returning to the fall of man. Donne refers to Satan as 'the serpents envious' and blames the tree of knowledge for bearing fateful fruit. Yet the poet has consciously omitted from this biblical story the role of Adam and Eve, who exercised their free will in succumbing to temptation. Donne perceives free will as a curse on mankind, which animals and inanimate trees are lucky to have avoided. He argues that simply being a member of the fallen race hinders his spiritual progress, as it 'makes sinnes, else equal, in mee more heinous' (6). While these arguments conduct the opening octave of this sonnet, Donne forms the volta at the moment he realises that he is unworthy to challenge the mystical ways of God: 'But who am I, that dare dispute with thee?' (9). Sentiments of Calvin are echoed here in his belief that it is not fitting 'that man should freely search those things which God hath willed to be hidden,' (Calvin, Within the brevity of the sonnet form, Donne has expressed the identical craving for knowledge that was responsible for the fall of humankind. He displays man's vulnerability to temptation, which constantly leads him off the path of righteousness. Byrne wrote that 'by Adam's fall, his posterity lost their freewill,' so their acts of good or evil are 'predestined by the eternal and effectual secret decree of God' (Byrne This idea implies that man is unable to please his leader because the human race cannot escape the weight of original sin brought into the world at its creation. 
In the first of the sonnets, Donne can accept that his fallen race is vulnerable to 'our old subtle foe,' (11), causing sin to pervade every part of one's being, but he struggles to comprehend why God chose to release death into the world as punishment for this fall. He opens his sequence of divine meditations by challenging the methods of his leader with an interrogative directed towards God: Donne's inability to control, in any way, his path after death creates the fear exposed throughout this sonnet. Without free will, he cannot do any evil or good that God will consider in assessing whether he is deserving of an afterlife. The opening octave of this Petrarchan sonnet is weighted with terms that emphasise the pace and unexpected nature of death. Donne describes it chasing him in 'haste', how it meets him 'fast' and he dares not move his 'dimme eyes any
The amount of enzyme activity and elements of the kinetics of an enzyme-catalysed reaction can be measured by following the rate of appearance of one of the products of the reaction. The effect on the rate of enzyme activity of a variety of factors, such as pH, temperature, concentration of substrate and enzyme, and the presence of inhibitors, can also be determined using this method. Inhibition of an enzyme involved in an initial step of a metabolic pathway is often carried out by the end product. This is known as feedback inhibition and is an important regulatory strategy. A similar effect, product inhibition, occurs when the product of an enzyme-catalysed reaction inhibits the enzyme carrying out that reaction. An enzyme-catalysed reaction has a reaction rate order. If the rate of the reaction at any instant is proportional to the concentration of the substrate, this is known as first-order kinetics. Enzyme kinetics can also be described using the Michaelis-Menten equation. This involves the use of two parameters to describe the kinetic properties of enzymes, Vmax and Km. These are linked in the following equation, v = Vmax[S] / (Km + [S]), where Vmax is the maximum rate of the reaction, [S] is the substrate concentration and Km (the Michaelis constant) is the substrate concentration at which the rate is half of Vmax. Here alkaline phosphatase was used to study the principles of enzyme kinetics. Alkaline phosphatase is a phosphatase (it hydrolyses organic phosphate esters to inorganic phosphate and an alcohol) which is optimally active at alkaline pH. It is an excellent enzyme to use in the determination of enzyme kinetics as it is robust and easily assayed. The synthetic substrate p-nitrophenyl phosphate (pNPP) is a convenient choice of substrate to demonstrate the activity of the enzyme, as it is colourless until hydrolysed by the enzyme to the products inorganic phosphate and p-nitrophenol (pNP). p-nitrophenol is yellow at alkaline pH (although colourless below pH 7), and so the hydrolysis of pNPP can be followed using a spectrophotometer if the solution is made more alkaline after completion of the reaction (e.g. by adding NaOH). 
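The Michaelis-Menten relationship can be made concrete with a short numerical sketch. The sketch below uses the Km value reported earlier in this report (0.0871); the Vmax of 1.0 (arbitrary rate units) is an assumed illustrative value, not one measured in this experiment:

```python
# Sketch of the Michaelis-Menten rate law, v = Vmax*[S] / (Km + [S]).
# Km is taken from this report's result; Vmax is an assumed value.

def michaelis_menten_rate(s, v_max, k_m):
    """Initial reaction rate for substrate concentration s."""
    return v_max * s / (k_m + s)

V_MAX = 1.0   # assumed, arbitrary rate units
K_M = 0.0871  # Michaelis constant calculated in this experiment

# By definition, at [S] = Km the rate is exactly half of Vmax.
print(michaelis_menten_rate(K_M, V_MAX, K_M))  # 0.5

# At [S] << Km the rate is approximately (Vmax/Km)*[S], i.e.
# proportional to [S]: the first-order regime described above.
print(michaelis_menten_rate(1e-4, V_MAX, K_M))
```

In practice, Vmax and Km would be estimated by fitting this equation to the measured initial rates at a range of substrate concentrations.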
Adding NaOH also stops the reaction, since at very high pH the enzyme is denatured. pNP is also very stable once the reaction is stopped. The aim of the experiment was to quantitatively assay the activity of alkaline phosphatase under different conditions in order to study the effects of temperature, pH
Halliday, Situating this Islamist threat in time and place, it is vital that states are aware of some generic factors that could possibly lead to the rise of militant Islamism. As the French scholar Gilles Kepel reiterated in his book, More often, geopolitical events added fuel to the fire. For example, the oil revolution of the 1970s and the ensuing oil wealth enabled Saudi Arabia to propagate its Islamic movement, Wahhabism, "which provided fertile ground for the growth and maturation of militant Islamic tendencies" Gilles Kepel, Hunter, Indeed, the "West" fears Islamism for its link to Islamic terrorism, which has manifested itself all over the world in different forms. It is a legitimate threat, yet it is naïve to treat Islamism and terrorism as one and the same. Terrorism is essentially a political struggle fought in the name of religion by those who feel powerless, seeking to undermine the perceived power of a targeted group. Therefore, the West's containment of radical Islamism, or contemporary terrorism, is legitimate only to the extent that it also respects and recognizes the benign elements of other Islamist groups, which are all part of this widespread revivalist trend. Many other moderate Islamist movements have emphasized change through non-violent channels: the political and social transformation of society. Opting for a military confrontation without understanding the diversity of Islamism will only carry the "risk of politicizing hence radicalizing - increasing the numbers of the pacifist Islamic multitude" Khaled Abou El Fadl, "Islam and the Theology of Power", Khashan, "The New World Order and
Description: To have X% of staff feel part of strong, cohesive teams. Evaluation: Carry out a survey of the staff affected by the operational change and gather their views. Project risks are uncertainties that threaten the goals and timetables of a project. When these uncertainties occur they can delay deliverable dates and cause possible budget overruns that can undermine confidence in the project and its leader. To identify the risks I have adopted the Failure Mode and Effects Analysis approach because of its natural way of expressing events. The probability of each uncertainty occurring and its level of impact have been estimated on an arbitrary scale of 1-10. Risk management is the process of identifying, measuring, controlling and financing risks that threaten the existence, reputation, assets or personnel of an organisation, or the services it provides. Risk 1: Mitigate and Share. Outsourcing the software development activity of the project could reduce the risk. Good management skills would be needed to manage the arrangement. If the risk materialises in the future it could be shared between both companies, hence reducing the overall impact on the company. Risk 2: Mitigate and Share. The probability of the risk could be reduced by putting extra time and cost into providing extensive training to staff members to handle the various types of insurance, also taking their concerns into consideration. External trainers could be recruited if required to provide excellent training to employees after the pilot-phase evaluation. Risk 3: Mitigate and Allow. Reduce the risk by having a strong firewall and anti-virus software to keep the data safe from outside interference. Risk 4: Mitigate and Allow. Consulting the trade unions and negotiating on issues concerning them could mitigate the risk of their having an unhealthy attitude towards the restructuring. The process has been discussed in detail under the project implementation strategy. 
Risk 5: Mitigate and Allow. Putting extra time and effort into involving employees throughout the complete process, in order to motivate them and gain their commitment, could reduce the risk. Establish hotlines to counter the spread of rumors. Make presentations, hold enterprise-wide meetings and ensure that top management is accessible to address concerns quickly. The project implementation strategy discusses the various key strategic issues that had to be considered in the project. The key strategy variables are discussed below: The speed with which the change is implemented depends mainly on two factors: the urgency of the situation and the responsiveness of the organisation to change. The situation is not urgent, and the general response of the employees towards restructuring has not been very positive. A high-pace change would mean reducing the involvement and empowerment of employees, which could lead to difficulties in future. A medium pace would be ideal in this situation where issues
Furthermore, the reading part of the FCE exam caters for learners' psychological characteristics by including texts from a wide range of sources (UCLES, 2001: 10). Texts referring to activity holidays, to shopping or to someone's experience of learning to fly "motivate a deeper reading since they are linked to students' academic knowledge and leisure interests" (Alderson, 2000: 29). So students' schemata are activated and their varied language abilities are tested. "Topics (...) which might introduce a bias against any group of candidates (...) are also avoided" (UCLES, 2001: 6). Finally, considering learners' experiential characteristics, the test ensures candidates' familiarity with the exam format and the task types through the availability of coursebooks, practice materials and past examination papers (UCLES, 2001: 8). Moreover, the specification of the testers' expectations leaves students with adequate time to prepare themselves for taking the test. So the test challenges learners' reading abilities while catering for any special as well as general needs. This validity type refers to whether the cognitive processes needed to carry out the task are appropriate or not (O'Sullivan, 2005: 16). It is divided into executive processes (goal setting, visual recognition, pattern synthesizing) and resources (language and content knowledge). The first element of the executive processes provides candidates with a clear idea of the reading purpose so that they can employ the appropriate reading strategies (UCLES, 2001: 10). Candidates "are expected to (...) read semi-authentic texts of various kinds (...) and to show understanding of gist, detail, and text structure and deduce meaning" (UCLES, 2001: 7). The goals of the reading passages are also determined by the instructions provided in the rubric prior to each test. The first activity, for instance, asks learners to choose the most appropriate heading from a list (Appendix). 
Visual recognition, the second element of theory-based validity, is ensured by the actual presentation of the test. The input is clearly printed and well presented. The various parts are easily recognized as they are written in bold letters, whereas overload of information is avoided so that the material is legible. Moving on to the executive resources available to candidates, it should be noted that grammatical, textual, functional and sociolinguistic knowledge is demanded of them. Students' textual knowledge is examined in
By allowing the patient to freely direct the information he or she gives to an even greater extent than in the conversational approach, the discerning practitioner will gain an understanding of where the The patient-practitioner relationship is by no means the only partnership within the health care setting. Even within one skill-team good communication must take place in order to ensure quality and some degree of continuity to the care given - and the patient's overall experience is greatly improved if good communication and a shared sense of responsibility exist amongst the entire multidisciplinary team contributing to the patient's care (Payne 2000). Unfortunately in practice there are many barriers to ideal communication. One study carried out in 1998 (Coiera and Tombs) highlighted the levels of distraction and interruption and the incidence of poor communication between hospital staff in a busy ward. The reasons for this appeared to be a preference for interruptive methods of communication, such as bleeps, and an underlying lack of support and understanding where different roles were concerned. In his commentary on Lingard In other words the lack of resources that leads to under-staffing or minimal staffing levels and a heavy dependence upon non-permanent agency-employed staff contributes to higher levels of stress in practitioners and less room for communication improvement. It is very difficult for partnerships to be established when there is no constant \"team\", rather an ever-changing collection of practitioners with no long-term commitment to the staff or patient relationships (Firth-Cozens 2003). It would seem that the problems highlighted in the various studies are deep-rooted in modern health care practice and often a result of waning resources, making improvement difficult. Perhaps the picture is not as bleak as it at first appears however. 
Experience of working in a ward setting suggests that good management is crucial to the formation of relationships and can overcome a number of the", "label": 1 }, { "main_document": "the results confirmed the earlier statement that people will only buy from a Web site they 'know and trust' (2000, p. 106) and 'price does not rule the Web, trust does'. When customers trust a site there is a bigger chance that they will give their personal details. The authors carry on saying that another great advantage of online purchases is that shopping patterns of customers are transparent, and can be tracked by each click they make while in traditional retailing this is not really possible (unless tracking their credit card payments, which except for credit card issuers is not a possibility for companies). This feature makes it extremely easy for marketers to get to know their consumers and offer them products and services according to their preferences and shopping habits. However Reichheld and Schefter (2000) found that less than 20% of the online companies take advantage of this opportunity thus neglecting chances of up selling. Finally the design of a Web site should be customer-oriented. Chaffey (2003, p. 288) suggest that information should be: These are the aspects that consumers appreciate and make them 'stick' to a Web site. If they find easily what they are looking for and the content is recent and clear with offered explanations when necessary, they will be satisfied and visit again. The WAI Web Content Accessibility Guidelines consist of fourteen general principles of accessible design (W3, 2005). There are three identified priority levels, divided into checkpoints: The priority one checkpoints are expected to be all met by the online companies in order to make their Web site accessible to a wide range of people, including those with certain levels of disability, who are potential customers. 
This leads to identifying the customers of the coursework by describing the customer profile. The travel package is chosen for a young couple aged 27 (lady) and 32 (gentleman), empty nesters, who belong to group 'A' in socio-economic terms. They are highly educated and health-conscious, with a great deal of disposable income deriving from their high positions in their jobs. The purpose of their travel is a birthday treat for the gentleman, as a surprise from his fiancée. It is a leisure holiday, though not a once-in-a-lifetime trip; moreover, as the gentleman is drawn to Thailand, mainly because of its culture and gastronomy, this is planned to be the first of many trips in the future. Activities at the destination are planned to be mainly dining out in fine restaurants in Phuket and Bangkok, exploring the fresh fruit and fish markets, and relaxing at the beach in Phuket enjoying local speciality cocktails. These activities are not included in the package, as it is supposed to be a surprise organized by the fiancée. It is a two-centre holiday, and their budget is Explanation of travel package characteristics The aggregator, THG Holidays, is a specialist tour operator created in 2002. The company is a division of Thomson Travel Group, which is part of the World of TUI (THG, 2005). The travel package offered by the aggregator, THG Holidays, is
Accordingly, given this outlook, it would be beneficial ultimately to remove the focus on a 'victims' perspective' as without accepting shared responsibility, it is difficult to re-claim the responsibility, power and direct space for re-claiming truth, of each individual, which can be focused towards shared goals (representing 'a multitude of singularities'). For an interesting discussion of the concept of 'political responsibility' (which expands on the notions of 'fault/strict liability' and emphasises the shared responsibility of all, through action and non-action in complex global processes, for violations throughout the world, see: Iris Marion Young, 'Responsibility and Global Labor Justice' (2004) 12(4) The Tribunal recognises the right of the Roma to effect the implementation of their alternative visions of social relationships in ways which reinforce and celebrate the diversity of humanity, for humanity. It is envisaged that the Roma community and the ERRC will be the joint initiators of the Tribunal, the former for its personal experience and the latter primarily for the following reasons: knowledge of the history and discrimination against the Roma communities of Europe generally; European Roma Rights Centre, 'Justice for Kosovo' (2005) 3, 4 The Tribunal will be held during the period of UN discussion regarding the future status of Kosovo, UNMIK, This could be defined as a 'strategy of democratisation', whereby \"facts and figures are given a voice, a face and a story, and [the story] told in public by people whose lives have been damaged\", see Ulrich Beck, Through its (in)actions, the UN/UNMIK has violated the health, lives and livelihood prospects of the Roma community housed at the IDP camps in Kosovo. 
As a starting point, the Tribunal will consider the following violations: Against the Roma community: a wrong of inaction: neglect in the duty to take positive action to rectify health violations expeditiously a wrong of silence: neglect of the duty to give privilege and duty to voices of suffering Against humanity (in addition to the above): a wrong of exclusion: complicity in the exclusion of the voices and visions of communities of less power in the current hegemonic world order, as a result of the privileging of economic and political interests over peoples' aspirations of security, well-being and justice The Tribunal will undertake to gather evidence and documentation systematically in order to meet the most rigorous requirements of international legal norms with a view to exposing the violations of, and gaps in relation to, existing promises of human rights. Further, participants will be requested to provide sex- and age-disaggregated data where available, to reflect the different impact of the", "label": 1 }, { "main_document": "which the numbers start declining. The feed ban was put in place in 1988; this meant that cattle born after this date in theory should not contract BSE. However for those cattle that had eaten the feed containing meat and bone meal the disease would have been present in their system, and due to the long incubation period of around five years cases still occurred up until 1992 and 1993 at its peak. After this cases started to decline as shown in the graph. This stopped the export of live cattle that were born before the feed ban was in place - 1988. June 1990 - Ban the use of Specified Bovine Offal (SBO). This prevented brain, spinal cord and intestines etc of cattle over six months of age from entering the human food chain. The risk to human health is still unknown but this was a preventative measure, stopping bovine tissues that may contain the disease agent from entering the human food chain. 
These measures resulted in further declines in cases. However, numbers remained higher than expected after the exclusion of meat and bone meal and offal from ruminant feed. Despite the ban, it was found that some material was still getting into cattle diets due to cross-contamination in feed mills, as meat and bone meal was still being used in horse, pig and poultry food, along with failure to keep SBO totally separate at abattoirs. Blowey (1999) states: 'It was found only 1.0g of infected tissue was needed to be ingested by a calf to cause disease'. This meant that measures needed to be tightened to prevent cross-contamination. These were further measures to protect human health, as there was increasing press coverage and concern from the public about human health safety, with the threat of CJD still unknown. This led to a political crisis within days and the introduction of further control measures. 1996 - The EU Commission declared a worldwide ban on British beef, semen, embryos and any products containing bovine material. This was a control measure to try to protect the rest of the world from BSE and vCJD. Trade had, however, already taken place in feedstuffs up until 1988 and in animals up until 1996, and therefore BSE had already reached countries such as Portugal and Ireland, as shown before. 1996 - The Over Thirty Month Scheme (OTMS) was introduced. This was the UK government's response to vCJD and the worldwide ban. The government needed to be seen to be doing something to protect human health and to control the disease in the face of massive press interest, and the OTMS was it. This prevented all cattle aged over 30 months from entering the human food chain, as these are thought to carry the risk. The scheme provides compensation to farmers to have OTM cattle incinerated and disposed of. 1996 - Improved record keeping measures. This was a big step in the traceability of cattle that is increasingly enforced today. This was again a result of the vCJD effect. 
From 1996 all cattle born have a passport. It became law for", "label": 1 }, { "main_document": "breakthrough in the field of biomedical engineering, as it has made oxygen saturation measurements easier for the paramedics and less painful for the patients. Earlier, blood was drawn from the patients several times a day and its oxygen saturation level was measured by gas chambers that used chemical detectors, or by haemoximeters that used spectroscopic principles similar to the pulse oximeters. Pulse oximetry is now used in a variety of applications like Intensive Care Units. Pulse oximetry has now been found to be extremely useful in detecting and screening congenital heart diseases. Although the pulse oximetry process is widely used in many applications and is fairly accurate, it also has some limitations like any other instrument. If there is a reduction in the blood flow due to vasoconstriction caused by hypertension, cold, cardiac failure, etc., then the signals obtained are insufficient for analysis. The bright light in the operating theatre and the shivering of the patient can cause errors in the readings. Pulse oximetry cannot distinguish the various forms of haemoglobin, and hence this can cause serious errors in readings. For example, carboxyhaemoglobin is registered as 90% oxygenated haemoglobin, hence the SpO2 level will be overestimated, and when methylene blue is used in surgeries it combines with the haemoglobin to form methaemoglobin, which will be registered as 85% oxygen saturation. The presence of nail varnishes can also cause an error in the readings. 
However, pulse oximetry is still a breakthrough in the field of biomedical instrumentation, and the technology will be used by medical professionals for a long time to come; it will, of course, also evolve with time to suit the needs of the hour, until a technology with a drastically better edge takes over from it.", "label": 0 }, { "main_document": "heat removed, as well as dimensions (especially thickness) and shape of product, heat transfer process, and temperature. The International Institute of Refrigeration (1986) defines various factors of freezing time in relation to both the product frozen and the freezing equipment (Persson and Lohndal, 1993). The most important are: Calculation of freezing time in food systems is difficult in comparison to pure systems, since the freezing temperature changes continuously during the process. Using a simplified approach, the time elapsed between initial freezing and the moment the entire product is frozen can be regarded as the freezing time. Plank's equation is commonly used to estimate freezing time; however, due to the assumptions involved in the calculation, it is only useful for obtaining an approximation of freezing time. The derivation of the equation starts with the assumption that the product being frozen is initially at its freezing temperature. Therefore, the calculated freezing time represents only the freezing period. The equation can be further modified for different geometries including slab, cylinder, and sphere, where for each geometry the coefficients are arranged in relation to the dimensions (Plank, 1980). Plank's equation was used to calculate the freezing time of an apple using liquid nitrogen. The maximum time to freeze the apple using liquid nitrogen was 2.51 minutes. The minimum time was 70.5 seconds. 
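Since Plank's equation is named but not written out here, a minimal sketch may help. The geometry coefficients (P = 1/2, 1/4, 1/6 and R = 1/8, 1/16, 1/24 for slab, cylinder and sphere) are the standard ones; every property value in the example is an assumed illustration, not the essay's actual apple data.

```python
# Sketch of Plank's freezing-time estimate under its usual assumptions
# (product initially at its freezing point, constant properties).
SHAPE_COEFFS = {        # (P, R) coefficients per geometry
    "slab":     (1 / 2, 1 / 8),
    "cylinder": (1 / 4, 1 / 16),
    "sphere":   (1 / 6, 1 / 24),
}

def plank_freezing_time(shape, rho, latent_heat, T_freeze, T_medium, a, h, k):
    """Freezing time in seconds.

    rho         : density of the frozen product (kg/m^3)
    latent_heat : latent heat of fusion (J/kg)
    T_freeze    : initial freezing temperature (deg C)
    T_medium    : temperature of the freezing medium (deg C)
    a           : characteristic dimension, e.g. diameter (m)
    h           : surface heat-transfer coefficient (W/m^2 K)
    k           : thermal conductivity of the frozen product (W/m K)
    """
    P, R = SHAPE_COEFFS[shape]
    dT = T_freeze - T_medium
    return (rho * latent_heat / dT) * (P * a / h + R * a**2 / k)

# Illustrative numbers only (apple-like sphere in liquid nitrogen):
t = plank_freezing_time("sphere", rho=840, latent_heat=280e3,
                        T_freeze=-1.1, T_medium=-196, a=0.07, h=170, k=1.7)
print(f"estimated freezing time: {t:.0f} s")
```

With the same assumed properties, a sphere freezes faster than a slab of the same characteristic dimension, since its P and R coefficients are smaller.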
The freezing rate ( At a particular location within the product, a local freezing rate can be defined as the ratio of the difference between the initial temperature and the desired temperature to the time elapsed in reaching the given final temperature (Persson and Lohndal, 1993). The quality of frozen products is largely dependent on the rate of freezing (Ramaswamy and Tung, 1984). Generally, rapid freezing results in better quality frozen products when compared with slow freezing. If freezing is instantaneous, there will be more locations within the food where crystallization begins. In contrast, if freezing is slow, crystal growth proceeds from fewer nucleation sites, resulting in larger ice crystals. Large ice crystals are known to cause mechanical damage to cell walls in addition to cell dehydration. Thus, the rate of freezing for plant tissues is extremely important due to the effect of freezing rate on the size of ice crystals, cell hydration, and damage to cell walls (Rahman, 1999). Figure 4 shows the general behavior of the dynamics curve of freezing preservation. Rapid freezing is advantageous for many foods; however, some products are susceptible to cracking when exposed to extremely low temperatures for long periods. Several mechanisms, including volume expansion, contraction and expansion, and the building of internal pressure, are proposed in the literature to explain product damage during freezing (Hung and Kim, 1996). In small samples, a high freezing rate produces a large number of ice crystals; in large samples, nucleation occurs only in the zone that is in contact with the refrigerant. The size of the ice crystals depends on the freezing rate. 
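The verbal definition above (temperature drop divided by elapsed time, per Persson and Lohndal, 1993) can be sketched in one line; the numbers are invented for illustration.

```python
# Local freezing rate: temperature drop over elapsed time.
def local_freezing_rate(T_initial, T_final, elapsed_hours):
    """Return the local freezing rate in deg C per hour."""
    return (T_initial - T_final) / elapsed_hours

# e.g. cooling from 20 C to -18 C in 2.5 h:
rate = local_freezing_rate(20.0, -18.0, 2.5)  # 15.2 C/h
```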
Freezing damage is associated with ice formation, either directly through the mechanical effects produced by ice crystals or indirectly by an increase in", "label": 0 }, { "main_document": "Mexico respectively, both took the leap in \"imagining distinctive national identities but paradoxically lagged behind in achieving (true) national unification.\" From here, food played a major role in fostering national identities and consciousness. The emergence of a creolized form of national cuisine in both Mexico and Belize testify to the integrative ability of food. Pilcher, , pp. 66. The twin processes of the consciousness and emergence of a national cuisine in Mexico, mirrored an \"... Aztec God, (that) wore many masks.\" This complicated process of national cuisine formation passed through three distinct periods. Following Independence in 1821, elites often defined the national cuisine in European terms, excluded the lower classes and their food, \"tamales and other corn products from respectable dinner tables.\" In 19th century Mexico, national cuisines thus reflected more cleavages as European culinary experiences reigned supreme over Mexican indigenous food. Wealthy Mexicans often adhered to French Following on to the Porfirian period in the early 20th century, the revolutionary concern for a national workforce led to the This elitist food discourse prompted a nation-wide campaign to replace corn with wheat in the national diet. Through languages of nutritional science, elites during this discourse claimed that the race of wheat was more superior because it was affiliated to the work of culture and hence it was the key to national progress. At the end of this arduous process of cuisine formation, Mexican's national cuisine emerged in a Creolized form as a dish known as In a symbolic sense, Mexican national cuisine has emerged because of, rather than, despite the increasing articulation of regional and ethnic cuisines. 
In reality, the foods that divide nations may in turn be the ones that unite them. Pilcher, , pp. 46. Pilcher, , pp. 153. Pilcher, , pp. 63. Likewise, the development of a national cuisine in Belize also reflected the role of food in creating national cleavages in some cases while bridging them in others. Even with the existing racial cleavages during the colonial period, there were uniting forces that laid the foundations for the gradual emergence of Creole food as the \"national cuisine\" of Belize. Given the small size of the colony and the degree of inter-ethnic interaction, people of all races and ethnicities were forced to work together across cultures and class. All these multiethnic culinary sites allowed people of all cultures and classes to work together, \"engaged in various kinds of creolization that led to convergence.\" Wilk, \"Food and Nationalism\", pp. 77. In contrast to the Mexican case study above, the formation of a Belizean cuisine reflected more contemporary forces that also fostered national consciousness. Firstly, there was a rise in awareness of local food with the huge increase in Belizeans emigrating overseas. Among expatriate Belizean Americans, food became the central source of national identity and a focus of sentimental links to the home country. As such, Belizeans began to reconstitute their national identities in foreign places through the adoption of ethnic and national cooking. More so, this rise in national consciousness was sustained", "label": 1 }, { "main_document": "The cost distribution from cost centres to market segments is split into two stages. The first step is to distribute the expenses from the cost centres, such as the various departments (for example, Food and Beverage, Rooms, etc.), to activity centres in which activities take place (for instance, reservations, check-in and check-out, etc.). 
The next step is to assign the activity costs to the appropriate customer groups (Dunn and Brooks, 1990). After attributing both revenues and costs to each segment, revenue managers can recognise the profitability of each type of customer and then make long-term decisions based on the information provided by MSPA. Similar to the MSPA approach, Customer Profitability Analysis (CPA) is another method of allocating costs to customers; it reports revenues, costs, and profit by market segment (Noone and Griffin, 1997). When analysing customer costs, it is necessary for CPA to select an appropriate costing method. Noone and Griffin (1997) describe two types of costing technique in their study: one is conventional costing methods and the other is activity-based costing (ABC). Essentially, the difference between these two costing methods is the manner in which overheads are assigned to products. The ABC approach does not allocate costs using an index of volume but according to the activities that cause them to be incurred, so overhead costs can be precisely assigned to those customers to whom services are provided (Noone and Griffin, 1997). Activity-based customer profitability analysis can therefore provide revenue managers with important information regarding their customer base and help management solve some of the major issues that the YM approach fails to answer. MSPA and CPA have both similarities and differences. As for the similarities, activity-based costing is adopted in both methods as a means to attribute costs to the activities that drive them rather than to the product, and both focus on maximising profit rather than revenue. As for the difference, MSPA aims to back up YM decisions, while YM in turn forms a basis to support the CPA approach (Burgess and Bryant, 2001).
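The two-stage allocation described above (department expenses to activity centres, then activity costs to customer segments) can be sketched as follows. The departments, activity shares, driver volumes and figures are all invented for illustration, not drawn from Dunn and Brooks or Noone and Griffin.

```python
# Stage 1: distribute department expenses to activity centres.
# The shares stand in for an assumed activity analysis.
department_costs = {"Rooms": 50_000, "Food and Beverage": 30_000}
activity_shares = {
    "Rooms": {"reservations": 0.4, "check-in/out": 0.6},
    "Food and Beverage": {"reservations": 0.2, "check-in/out": 0.8},
}

activity_costs = {}
for dept, total in department_costs.items():
    for activity, share in activity_shares[dept].items():
        activity_costs[activity] = activity_costs.get(activity, 0.0) + total * share

# Stage 2: drive activity costs to customer segments according to each
# segment's consumption of the activity (the cost driver), not a volume index.
driver_volumes = {
    "reservations": {"corporate": 300, "leisure": 700},
    "check-in/out": {"corporate": 400, "leisure": 600},
}

segment_costs = {"corporate": 0.0, "leisure": 0.0}
for activity, volumes in driver_volumes.items():
    rate = activity_costs[activity] / sum(volumes.values())  # cost per driver unit
    for segment, volume in volumes.items():
        segment_costs[segment] += rate * volume

print(segment_costs)
```

Every pound of departmental cost ends up attributed to a segment, so the segment totals reconcile with the departmental totals, which is the property that makes segment profitability comparable.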
The developments of the approaches have been widely accepted, but little practical progress has been achieved (Burgess and Bryant, 2001). The main reason is that many managers found them difficult to carry out because of the complexity of the data required for analysis (Burgess and Bryant, 2001). Nevertheless, according to Burgess and Bryant (2001, p. 148), 'the opportunity is now emerging to utilise modern technology to identify the relevant costs, by customer, and then to produce market-segment based information in order to achieve a full analysis'. The information provided by the analysis offers revenue managers a clearer insight into the actual profitability of market segments and enables them to improve their decision-making for the short term as well as the long term. Revenue managers cannot rely on the yield management system exclusively, and revenue maximisation is no longer the main objective of their tasks. In order to maintain the long-term profitability and growth of", "label": 0 }, { "main_document": "referenced by; personalization, based on user preferences; and add-ons such as blogging tools and event calendars. Wikis are simple enough to be used with little training or administrative support. Contributors are encouraged to \"publish\" as early as possible, and their peers can fix any errors that contributors may make without having to ask permission first. Wikis are significant because they bring together authorial, collaborative and administrative functionality, simplify it, and make it a natural part of the navigational and reading experience for any end user who cares to use it. This encourages passive readers to become active contributors by leaving comments, making changes and even reorganizing or streamlining the site. The wiki concept was originally developed by Ward Cunningham in 1995. 
Since then, dozens of wiki implementations have been promoted, including a number of open-source projects like OpenWiki ( Wikipedia ( It was founded in 2001 by Larry Sanger and Jimmy Wales. As of December 2004, there were 13,000 active contributors working on over 1,300,000 articles in more than 100 languages. Wikipedia articles tend to be neutral in tone, and when the topic is controversial the varying viewpoints are explained in addition to offering the basic facts. When anyone can edit what you've just posted, such fairness becomes essential. But these different viewpoints are not necessarily authoritative. A lot of the value of a blog network is the social hub that is built from relationships. People read each others' blogs to see what their friends and acquaintances are up to, and then they add value by linking, commenting and elaborating on what has already been posted. Similar social interactions take place in wikis, but at a faster pace and with a more intensely collaborative feel. Because so few restrictions are placed on participants, members of a project group may feel they have to exercise some form of censorship. This could take the form of building up some of the minimalist forms of wiki components, correcting the grammar and syntax of other contributors' articles, or persuading others to accept your viewpoints. This interaction between contributors, united in a shared idea, helps build important social ties. Jay David Bolter in Writing Space talks about the \"interactive relationship\" between the author and the reader. With Wikipedia, for example, different texts from different genres can be interlinked with a contributor's own work to present an entirely new offering, and thereby merging the usual barriers between author and reader. \"Boundaries between culture producer and cultural consumer break down.\" One of the most serious concerns about collaborative authorship is whether or not the content can be trusted. 
In the absence of a publishing gatekeeper, an anonymous group author is unlikely to have the same clout as a renowned single author. If a writer is perceived by the reader to be an authority on a subject, then the information that the writer gives out is almost always seen as content that can be relied upon. Wikimedia ( All articles submitted to Wikipedia undergo peer review and any amendments are saved and linked. It is thought that, eventually, this constant", "label": 1 }, { "main_document": "This essay investigates drag in the design of airplanes and ships. Using Buckingham Pi method could derive the key parameters relating to drag Let D be drag force, Since there are four variables (n) and only purely mechanical effects are involved, m (basic dimension) = 3, n = 4 and n-m = 1, therefore By comparing coefficients, a=1, b=2, c=1 Therefore, There are three different types of drag that may be experienced by a body in a flow and 3 other types specially found in ships and aircraft. The entire skin friction drag occurs within the boundary layer. The amount of form drag is generally related to general size and shape. This could come from a wing of finite length which adds downwards momentum, thus changing the pressure distribution or arises at the wingtips, which causes higher pressure below the wing and lower pressure above it will cause the air to \"roll around\" the wingtip, creating a vortex which is shed behind the aircraft. It appears, often, on aircraft flying at high-subsonic speeds and boats. They can also form at much lower speeds at areas on the aircraft where the Bernoulli effect accelerates local airflow to supersonic speeds. This can be reduced by designing shapes as close to For bodies of very high Reynolds numbers, it is common to assume that turbulant boundary layer is throughout the body surface. A turbulent boundary layer produces a greater skin-friction drag but a reduced wake drag. 
This is surprising, since the higher flow speeds associated with a turbulent boundary layer would be expected to produce an increase in drag when compared with the slower-flowing laminar boundary layer. At high Reynolds number, however, wake drag can far exceed skin-friction drag. Because of that, the best regime for flow over a wing during normal flight is the turbulent one, as it dramatically reduces the wake drag that far exceeds skin-friction drag in normal flight. Pressure drag relates to the turbulence created and is dependent on the Reynolds number, while wave drag is negligible: bow waves are created at the front of the ship, and if they are shorter in length than the ship they have no effect. The only drag factor is the pressure drag given by the Reynolds number. Assuming that the tank is Looking up the Moody chart, it is found that the Substituting this result into The drag force is 4062 N (assuming that the tank is 15 m x 10 m) Power required = (4062)(5.14) = The increase of hull roughness can never be underestimated, as it results in an increase of skin-friction drag, which requires additional power, and hence increased fuel consumption, to maintain the vessel's speed; it can be given by the equation below: where R is the drag, Studies show that ship hulls generally get rougher due to mechanical damage such as corrosion, cracking, blistering, etc. of the hull surface. Foul-release technology products (e.g. paint) can be used to avoid fouling (drag caused by living creatures on the ship) by providing a slippery", "label": 0 }, { "main_document": "should take the culture focus of the studies into account when applying the findings to tourism practice. Another major element of both Schmoll's and Mayo and Jarvis's models is social influences that affect individual travel behaviour, such as role and family influences. 
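As a numerical check on the figures above, power is simply drag force times speed. The quadratic drag law in the sketch is the standard one; the 4062 N drag and 5.14 m/s speed come from the essay, while the density, area and drag coefficient shown are assumed placeholders rather than the Moody-chart values elided from the text.

```python
# Standard quadratic drag law: D = 0.5 * rho * v^2 * A * Cd
def drag_force(rho, v, area, cd):
    """Drag force in newtons (rho kg/m^3, v m/s, area m^2, cd dimensionless)."""
    return 0.5 * rho * v**2 * area * cd

def power_required(drag, v):
    """Power (W) needed to move a body at speed v against a drag force."""
    return drag * v

# Using the essay's figures directly:
p = power_required(4062.0, 5.14)
print(f"power required: {p:.0f} W")  # about 20.9 kW
```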
Findings from the research conducted by Shim (2005) show that, generally, in the senior market spouses have a negative impact on travel, which suggests that the tourism marketing manager should attempt to convert the spouse from a travel gatekeeper to a travel conspirator (Shim , 2005). According to the authors, a message that promotes a hedonistic approach to travel expenditures may accomplish this. Schmoll's model addresses advertising and promotion as another important element influencing the travel decision process. Mayo and Jarvis's model, however, ignores those external stimuli and from that perspective could not be very useful for the tourism marketing manager. Schmoll's model could prove useful when considering the influence of advertising and promotion on the travel decision process of the senior market. The research from Shim (2005) shows that mass media play an important role in mature consumers' search for travel information, influencing different dimensions of seniors' behaviour and attitudes regarding pleasure travel. Hence, the authors suggest that television, magazines and other mass media should be seen as a useful means of access to the senior market for companies in the pleasure travel and tourism industry (Shim , 2005). The second segmentation variable chosen for the purpose of this essay is young couples without children (newly married), one of the stages within the family life cycle. The basic idea of the family life cycle concept is that most families pass through an orderly progression of stages which are defined by unique combinations of socioeconomic and demographic variables (Wells and Gubar, 1966). Various authors argue that the stage in the family life cycle has an impact on consumer travel behaviour (Wells and Gubar, 1966; Lawson, 1991; Oppermann, 1995). 
According to Bomball (1975), young married couples possess the following characteristics: at this stage consumers are generally in a better financial position than later in life, they display a strong tendency to purchase expensive items, and they are highly susceptible to advertising. Wells and Gubar (1966) and Hong (2005) similarly suggest that newly married is the stage in the family life cycle which is especially important for heavy leisure and tourism spending. Stampfl (1978) supports this theory, indicating that young married couples spend a lot of money on durables and vacations. Moreover, he believes that at this stage shopping energy levels are high, a lot of buying is done impulsively, and the decision-making process is based on joint decisions. Various authors (Consenza and Davis, 1981; Sheth and Mittal, 2004) agree with Stampfl, stating that the decision dominance structure at this stage of the family life cycle is syncratic (joint), indicating that during the decision-making process both spouses contribute but neither dominates. According to Litvin (2004), both parties may compromise, bargain, or persuade each other, but in the end both spouses make the decision and more or less agree that it is the right one. Consenza", "label": 0 }, { "main_document": "in the Cuban revolution contributed directly to its success. The Che shown above is the 'insurrectionary theorist, strategist and tactician' Is it possible that without Che, Fidel would have simply been the leader of another failed rebellion in a struggling Latin American country? It is possible. Without Che, who would Fidel have appointed as a comandante? Would Fidel have been able to find another man as completely loyal and devoted to him as Che? When the triumphant Fidel rode into Havana on January 8th his brother Raul was on his right and Che Guevara was on his left, thus symbolizing the high regard that Fidel had for his compañero. Donald C. 
Hodges, Che contributed an enormous amount to the revolution, but it wasn't all about what he did; there was also another side to Che that contributed to the insurrectionary phase of the revolution. There is 'Che the guerrilla hero, the martyr and myth' Even if it can be argued that Che's legend didn't really become global until after his death, during the insurrectionary stage of the revolution the myth surrounding Che had already begun to grow. Donald C. Hodges, Che lived in the true revolutionary way, 'todo o nada', 'all or nothing' Part of why Che has become such a martyr is that he wasn't afraid to die for his beliefs: 'In the arduous profession of the revolutionary death is a frequent occurrence'. Che's personal ideas on what a revolutionary was continue to add fuel to the legendary fire engulfing his name. Che was a prolific writer, and his diaries and essays have provided future revolutionaries with guidelines, instructions and beliefs on how to become the 'twenty first century man' Che's idea of this new man was a theory quite closely related to his ideas on guerilla warfare; he believed there would be a new generation of men who would be willing to sacrifice themselves for the good of Latin America through armed struggle. Che's essay on 'Socialism and Man in Cuba' truly expresses Che's deep belief in his ideas: 'There is no life outside the revolution. In these conditions the revolutionary leaders must have a large dose of humanity, a large dose of a sense of justice and truth' Che Guevara, Ernesto 'Che' Guevara quickly moved from being just a revolutionary to becoming a symbol for revolutionaries, a process that accelerated after his sudden and suspicious death. So what did he mean for the Cuban Revolution? His constant literary and oratorical attacks against US imperialism showed how important the future of Cuba was to him and how important the future of Latin America was. 
Che firmly believed that 'Yankee Imperialism' was responsible for many of the problems faced by Latin America and other third-world countries; he also believed that armed struggle was the only way to combat this growing threat. Che's personal characteristics are also crucial to his myth and to why he meant so much to Cubans. When Che was proclaimed by Fidel to be 'a Cuban by birth' the people were able to see for", "label": 1 }, { "main_document": "in own-branded products, therefore it is important to inform customers that specifically at Waitrose they can purchase organic products from the renowned producers. The product should have its own unique identity and be linked with the area of origin. It is worth noting that only heavy buyers spend more on each of the analysed product categories, namely meat, fish and vegetables. The remaining two groups tend to reduce their overall meat consumption in favour of fish and vegetables, which is especially evident in the 'light buyer's' spending patterns. This is particularly important as far as the visual aspects of the future marketing campaigns are concerned, as it indicates which products should be displayed in TV commercials or in promotional folders. Members of all of the distinguished groups use the media such as TV or radio more, or at least as frequently, as the average organic consumer. The heavy buyers tend to favour radio over TV, and this fact could serve as an indicator for an organic radio campaign. It is highly likely that radio programmes informing about various aspects of organic food consumption would be well received by them. Therefore it is worth considering sponsorship of such programmes and building up positive relationships with the TV and radio reporters. The light buyers could easily be reached by the Internet, as they form the most frequently web-surfing group. Hence the web site should be informative and of attractive and interactive design, if this group is to be encouraged to increase its purchases. 
As an additional and final point, it should be added that, for the whole investigated sample, the factors most strongly and positively correlated with spending on organic foods were household size, annual income and customer's age. It should therefore be taken into account that competitors could also target these particular characteristics. Although this report does not offer a simple, quick-fix marketing formula for sales and revenue increase, the ACC research shows that attention has to be paid to a wide range of factors if a successful marketing strategy is to be implemented. The conducted analysis revealed the distinctive profile of the Waitrose organic customer, who tends to be older and better off than the average buyer of organic-labelled food across all of the identified spending categories. Such a person lives in a larger than average household, is open to the media and, unless a heavy buyer, consumes reduced amounts of meat in favour of fish and vegetables. The tailored marketing campaign should therefore stress the health, environmental, taste and well-being aspects of organic food consumption. It should also strengthen the link between the social image of the three consumer groups and the consumption of organic products, presenting organic purchases as the 'eco-logical' choice. After the non-organic consumers were filtered out, there were only 7 consumers left who purchased organic foodstuffs in Waitrose. This number was too small to conduct reliable statistical inference. In spite of this fact, the analysis was carried out. Most of the data were expressed not in absolute terms but in percentages, in order", "label": 0 }, { "main_document": "support for the cause of independence. This had flourished after the publication of the pamphlet \"Common Sense\", upon which the Declaration of Independence was based. The taxes and levies issued by the British government on the American colonists caused severe discomfort. 
Yet not even the taxes and laws which were imposed upon the people were enough to force people to support the American cause. An interview with a veteran of the war gives a fuller picture of why people decided to join and fight. When asked about the Stamp Act he replied \"I never paid a penny for one of them,\" the Tea tax: \"I never drank a drop of the stuff.\" When directly asked why he fought the \"redcoats\" he replied \"we always had governed ourselves and we always meant to. They didn't mean we should.\" So the main reason for the support of so many Americans came from the determination for self-government. Even after their defeat at Yorktown, the British had a chance of destroying or paralyzing the American army, as they still remained in control of the seas and therefore had the ability to reinforce constantly. Yet they chose not to. The British were defeated by the resolve of many American people who would fight for the cause in many different ways, whether fighting in the regular army or, as simple farmers, taking part in guerrilla warfare or supplying the American troops with food or shelter. It may also be said that the Americans would have been at a loss if it hadn't been for the intervention of the French and other European powers to check the power of Britain on a worldwide scale. By the end of the war, the British troops had become fairly adept at fighting in the terrain of the North American continent, yet the guerrilla tactics and sharp actions of American commanders such as Washington's crossings of the Delaware meant that the British were to be caught off guard and lose crucial battles. Despite winning the majority of skirmishes, the British not only failed to deliver a decisive blow equivalent to the American victory at Yorktown, they also suffered the heaviest casualties of the war. With a sporadically attended and ragged army, Washington was able to deliver the biggest blow to the greatest empire of the times. 
John Williams. 'The American War of Independence 1775 - 1783. 200th anniversary' (Invasion publishing Ltd, 1974) p. 2", "label": 1 }, { "main_document": "to that used in the HPLC method. Dilution enables the scaling of the response of compounds so they do not exceed the limit of the detector. Complete dissolution is also important, as a partially dissolved sample is not representative of the reaction; moreover, solid can be injected into the system, which can cause blockages. By comparing the chromatogram of the reaction to known compound retention times, the progress of the reaction can be determined by checking for the absence of the starting material and the presence of the product. My work has relied heavily on HPLC to detect the presence of certain impurities in a solid that had been isolated from a reaction. Spiked HPLC can be used qualitatively to determine the presence of the impurities, but also quantitatively to calculate the mass of the impurity present in the sample. A spike is when a relatively small amount of a compound under analysis is added to a much larger quantity of a different compound. Most HPLC apparatus uses UV detectors, which measure the UV absorbance of analyte compounds as they elute from the column. UV absorbance is a constant for a molecule of a compound, over a concentration range, and so is directly proportional to the concentration of the compound. The relative response of a compound can be calculated by measuring the response of a known quantity of known strength. From this calculation a scalar can be applied so the response in the chromatogram represents the physical proportion. The proportional relationship between response and concentration also means the strength can be determined. Providing the concentrations of the standard and sample are known, then a direct comparison of the response can be made to determine the concentration of the compound in the sample solution and therefore the actual amount of the compound in the solid. 
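The quantitative comparison described above can be sketched numerically. A minimal Python sketch, with all peak areas, concentrations and masses invented for illustration (none are taken from the original work):

```python
# Hypothetical illustration of external-standard quantitation: within the
# linear range, detector response is proportional to concentration, so the
# sample concentration follows from a direct ratio against a standard of
# known concentration. All numbers below are invented for the sketch.

def concentration_from_response(sample_area: float,
                                standard_area: float,
                                standard_conc_mg_per_ml: float) -> float:
    """Sample concentration from the ratio of responses (linear range only)."""
    return sample_area / standard_area * standard_conc_mg_per_ml

def strength_percent(impurity_mass_mg: float, solid_mass_mg: float) -> float:
    """Strength expressed as mass of compound per 100 units of solid (% w/w)."""
    return impurity_mass_mg / solid_mass_mg * 100

# Example: impurity peak area 1200 vs standard area 4800 at 0.10 mg/ml.
conc = concentration_from_response(1200, 4800, 0.10)   # 0.025 mg/ml
# If the sample solution was made from 50 mg of solid in 10 ml of diluent,
# the impurity mass is concentration * volume:
impurity_mg = conc * 10.0                              # 0.25 mg
print(strength_percent(impurity_mg, 50.0))             # 0.5 (% w/w)
```

The same ratio logic underlies the relative-response scalar mentioned above; only the reference changes.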
The comparison can only be made over a given range of concentrations in which the response is linear. Strength is defined as the mass of a compound in 100 g of solid. The advantages of HPLC are many and varied. As well as being fast, simple and accurate, HPLC is also greatly versatile, due to both phases having a large number of variants. GC can analyse only approximately 20% of organic compounds, as it requires samples to be either volatile or able to evaporate without degradation. Simple dissolution is all that is needed for HPLC and allows the analysis of the majority of compounds. J. M. Miller, 8, pp. 184 However, highly insoluble molecules may not dissolve to a high enough concentration to give a response. Also, because the response of each compound varies with how well it absorbs UV light, a compound may not be detected due to its poor absorbance, or it may not absorb the light at all. However, compounds without a UV chromophore can be analysed using a different detector, or by GC. Another disadvantage is the time needed to develop the best method, which is dependent on the number of compounds that need to be resolved. Without that method some compounds may have the same RT", "label": 1 }, { "main_document": "signals a shift in focus from the group identity of the tribe to the single identity of the narrator. It reiterates that the narrator is the last one left. 'Her first week' also begins Although here the effect created is slightly different to that in passage A, as the pronouns are third person singular rather than first person plural. The latter type is more likely to make the reader feel included in and close to the action. But while passage A builds suspense by using the hermeneutic code (by raising questions), it is created in 'Her First Week' using the principles of the proairetic code, a type of narration that concerns actions and their immediate consequences. 
An example in lines 8-9 can illustrate this; \"...damp laundry in the dryer, I'd slip / a hand in, under her neck\". The phrase \"I'd slip\" at the end of line 8 is by itself a complete syntactical unit, and has a different meaning if it is interpreted apart from its context in the next line. The reader is compelled to read on to find out the consequence or resolution of this potentially tragic action. Olds frequently uses this 'extension structure'. Line 21 is another, more prominent example; \"Every time I checked, she was still / with us...\". By itself the line is a complete clause, and it leads the reader to think perhaps the baby has stopped breathing, when in fact the next line confirms the opposite. Earlier on in the poem 'fallen' and 'tumble' are also enjambed, and both are actions with potentially disastrous consequences. This technique helps to express the flashes of paranoia and worry that parents experience on a daily basis. Mick Short, It is clear the focus of the poem is the baby (the title is in a sense directed to her), or more specifically the nature of a mother/daughter relationship. The 'centre' of passage A is the Anishinabe tribe. Incidents in the text are described in relation to, and by their impact on, them. In the first paragraph they are attacked from all directions; \"the spotted sickness from the These details establish the tribe as the main focus of the text by positioning them as deictically and geographically central to the action. A characteristic of 'I-narrators' is that they can express a biased, personal opinion. In this case the tribe's experience is presented as a struggle rather than progress, and many of the nouns in the passage are highly modified. For instance in the phrase \"bitter punishments of early winter\", each aspect of the noun group has unpleasant connotations. 'Punishments' is the noun head (and just by itself implies pain), 'bitter' is a harsh pre-modifier, and 'of early winter' is a post-modifying phrase. 
Although the latter appears to be fairly neutral, out of all the seasons winter has the most unpleasant and depressing conditions. It is interesting that the narrator defines his age as \"no more than fifty winters\" rather than in terms of years, implying that winter is the most challenging time of year. The same is true for other", "label": 1 }, { "main_document": "cost reduction) appears to be the same, the process used to achieve that result can have longer-term implications. Cooperation may reduce costs through joint improvement, while heavy-handed cost pressure might force a supplier to cut corners, resulting in poor quality. A cost saving measure compares the actual cost of an item or family of items over a period of time. A cost reduction is a decrease in cost resulting from a change in purchasing practice brought about by individual or group effort aimed at achieving such a decrease. In the following paragraphs, we will discuss three cost saving methods: global sourcing, product specification improvement and best price evaluation. The level of non-domestic purchases by U.S. firms has increased during the past several decades. In today's competitive markets, firms must search globally for quality products at the lowest total cost. The need to evaluate world sources of supply to remain competitive has become more of a factor for today's purchasing professional than it was twenty years ago. International competition requires purchasing from sources best able to support a firm's competitive position in the marketplace. It does not imply a conscious effort to favour foreign suppliers over domestic suppliers. Purchasers would rather source from a geographically closer supplier if the local supplier is competitive at world-class levels. There is no real advantage to longer material pipelines, all else being equal. Recently, some larger U.S. 
firms have shown an increased willingness to work with domestic suppliers who have the ability to become world-class performers. Companies recognize the value of a local supply base capable of delivering best-in-class performance. Over the long run, the resources committed to developing local suppliers often provide benefits that outweigh the expenses involved. COST/PRICE BENEFITS This is probably the most common reason why firms purchase worldwide. After considering all the costs associated with international purchasing, savings of 20 to 30% are not unusual on some items. Cost differentials between countries can arise because of lower profit margins, or exchange rate differences favouring the offshore producer. Firms are quick to point out that purchasing should only consider suppliers capable of meeting rigid quality standards. Price by itself should not be the sole criterion for a sourcing decision. Not all firms should commit the time and resources to develop integrated procurement systems. For example, a smaller manufacturer competing in regional markets against other regional producers probably does not have the need or capability to pursue anything beyond basic international purchasing. A firm with a single design and manufacturing facility will not require sophisticated global sourcing systems. A firm must assess the level of international or global sourcing required to remain competitive in its industry. The level required is a function of four variables: competitive forces, customer requirements, level of global competition, and the location of the best suppliers for the specific purchase requirement. Once a firm identifies its required level of sourcing, it must develop that level and identify its current sourcing capabilities. Identifying the operating requirements and current capabilities helps highlight any potential performance gaps between where a firm is and where it should be. 
Firms often follow", "label": 0 }, { "main_document": "it probably speaks with an accent, and after the age of 16, no one can obtain standard pronunciation without an accent (Flege et al. 1995). However, the age division is not absolutely correct, as can even be seen from the above chart: the subjects who started before age 6 could also get low marks, and this result was also confirmed by Flege et al. (1997, in Piske et al. 2001) again. Likewise, 6% of the Italian subjects who began after age 12 in Flege et al. (1995) still achieved the standard criterion of English pronunciation. On the other hand, although the general viewpoint about the age of learning an L2 is the earlier the better, Long (1990, in Piske et al. 2001) found that adult and adolescent learners might have a temporary initial advantage over younger children in imitating certain sounds in the L2, but the advantage disappeared as time passed. Besides those arguments, another question may be about the other two age factors, namely age of first exposure and age of acquisition, because people living in the expanding circle of the English language, as in China, may have no opportunity to learn English as immigrants do, but many people believe that accent-free speech requires learning at early ages. Therefore, it is doubtful whether those three age divisions are still suitable for this kind of people, for they do not live in a convenient environment for studying English, and most children who start early to learn English can only speak to their teachers in class (Wan 2005). With such a low percentage of English learning time, it is less possible for them to achieve authentic pronunciation even if they start before the age of 6. However, this is just individual suspicion without evidence from experiments and research. Length of Residence (LOR) is another well-studied factor among the different variables. It means the length of a speaker's residence in the target language environment, and it concerns only the period rather than the starting point. 
However, it is a controversial factor, proved in certain research (e.g. Purcell & Suter 1980, Flege & Fletcher 1992 and Flege et al. 1995, all in Piske et al. 2001) and disproved in other research (e.g. Oyama 1976, Tahta et al. 1981 and Elliott 1995, all in Piske et al. 2001). Flege et al. (1995) show that LOR is a very small although important factor among the variables, because LOR is related to AOL. For those subjects, the earlier they learn English, the longer the residence length is, but LOR only has effects on foreign accent within the 'initial phase of learning' (Piske et al. 2001). In other words, AOL is still the primary factor compared to LOR: in the critical period, AOL with corresponding LOR can affect the accent, but LOR will not influence the accent as an independent variable after the learning age. Another interesting conclusion was pointed out by Uematsu (1998) after research on 48 Japanese school students in the U.S., in which he compared the AOL, LOR, foreign accent and TOEFL results. The degree of foreign accent of the students highly correlated with AOL but not with the LOR factor; in contrast, TOEFL results corresponded with the LOR of those students but not", "label": 0 }, { "main_document": "on the manor court. Negotiations took place between the manor lord and his 'subjects' over the terms of rent, fines and manorial privileges; controlling access to water and commons, regulating the farming cycle and working out the terms of temporary and permanent enclosure. Although such representation was exclusive to those to whom the franchise extended, it nevertheless provided a channel in which some people could play a part in politics, 'a complex procedure of consideration of issues in both the House of Commons and the House of Lords' Ibid., p. 96. Beat K There existed parliamentary institutions, the Ibid., p. 41. Ibid., p. 39. Ibid., p. 50. Ibid., p. 47. 
David Sabean, in his analysis of the period, also argues that there were institutions in which the people could participate. In the sixteenth century, inhabitants of the duchy of W However, we should be careful not to over-emphasise such an assertion, and should place it within its context. David W. Sabean, Ibid., p. 14. It is important to recognise the differences within Europe during this period when assessing the people's involvement within politics, as there were clear divides between and within composite states and kingdoms. Wayne Te Brake, Ibid., p. 186. Ibid., p. 186. Te Brake argues then that, in order to appropriately assess the people's part within politics, there is a need to 'look outside of the normal channels of electoral politics'. The people could be seen to form popular groups, representing themselves through collective violence in order to bring about, or change, elite preferences and behaviour, particularly when challenged by taxation. Te Brake, Victor Magagna, Victor Magagna also rejects the notion that the relationship between the people and the elites was a static one, pointing to the outbreak of the Peasant War, which was 'an attempt to redefine Lordship and power'. Such demands and the formalisation of revolt marked the role of the people in politics, challenging their prescribed obedience. Ibid., p. 71. Ibid., p. 71. Ibid., p. 72. French communities were also able to act politically. Although neither the King nor the aristocracy regarded ordinary rural people as anything more than a 'source of revenue', excluding them from the assemblies of estates and noble courts, the people were able to involve themselves within politics. Ibid., p. 137. Ibid., p. 135. Ibid., p. 136. Charles Tilly, in his assessment of seventeenth century France, challenges, however, the notion that these rebellions were acts of politics, arguing that they were rather acts of economic anguish. 
As Tilly concludes, efforts to seize peasant labour, commodities and capital 'violated peasant rights, jeopardised the interests of other parties in peasant production and threatened the ability of the peasants to survive as peasants'. Tilly's analysis thus needs to be placed within its context, where definitions of politics failed to extend beyond formal institutions; reinterpretations of its constitution thus enable us to reinterpret Tilly's research as supporting the interpretation that the people played a part in politics. Charles Tilly, 'Routine Conflicts and Peasant Rebellions in Seventeenth Century France', in Robert P. Weller and Scott T. Guggenheim", "label": 1 }, { "main_document": "code for asking the product details should be in a procedure because it will need to be run at least 3 times to enter all the details. When calling the procedure, a \"for loop\" could be used instead of calling the procedure three times. This can be done thanks to the array and the position global variables that have been declared. The second procedure could be used to start the order placement. It will first ask the user if he wants to place an order; if yes, it will then ask him if he wants to add a product. If his answer is yes, the order placement will then begin with entering the name of the product. This procedure could in fact contain all the other tasks of the program, but that would make it more difficult to understand. This is why this procedure could call other procedures to make it easier. So, the last task before making any calls will be to check that the product entered does exist. The other procedures are as shown above. There will be one to: You will notice that the order of the records, global variable declarations, procedures and main body of the program has changed a little bit. This is simply because, in order for the program to be able to recognize all the names, procedures, etc., it needs to be written in a specific order. 
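The structure just described can be sketched outside Delphi. A minimal Python analogue, assuming illustrative names (Product, ask_product, product_exists) and invented product data, neither of which is taken from the original program:

```python
# A minimal Python sketch of the structure the essay describes: a record type
# for a product, a global array of products, a 'position' global variable,
# and a single procedure called from a "for loop" instead of being duplicated
# three times. All names and product data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Product:                  # plays the role of the Delphi record type
    name: str
    product_id: int
    stock: int

products: list = []             # plays the role of the global array
position = 0                    # mirrors the essay's 'position' global variable

def ask_product(name: str, product_id: int, stock: int) -> None:
    """One procedure, run once per product, instead of duplicated code."""
    global position
    products.append(Product(name, product_id, stock))
    position += 1

# The "for loop" that replaces three separate calls:
for details in [("pen", 1, 50), ("pad", 2, 20), ("ink", 3, 5)]:
    ask_product(*details)

def product_exists(name: str) -> bool:
    """The existence check performed before the order procedures are called."""
    return any(p.name == name for p in products)

print(product_exists("pad"))    # True
```

Declaring `Product` and the globals before the procedures that use them mirrors the declare-before-use ordering the essay goes on to discuss.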
In fact, an item simply has to be declared before it is used. However, the program also needs to show the different steps of the order placement in the right order. This can simply be done by calling the appropriate procedure at the appropriate time. So in the program, I had to first declare the record type. Then, the program uses it by declaring the global variable P, which is a variable of the record type declared before. Secondly, the program needs the user to enter the product details. Then, the program will call the PlaceOrder procedure to start the ordering process. But if you look in that particular procedure, you will notice that it calls four other procedures within it. This means that those four pieces of code As you can see above, the screenshots demonstrate that asking for the product 1 details works perfectly fine. There is no need to show the product 2 and 3 details as they use exactly the same piece of code in the program (ask products procedure - see Delphi source code). Another test that I did was entering a correct value for the stock value of product 3 to see if the program continued with the order placement process (see last screenshot on this page). As indicated in the Test Plan, three problems appeared when implementing and running the program. The first problem was that the program would not accept 1000 as an ID for any of the product details. This was quite important as it did not match the specification requirement. In order to solve this problem, here are the steps that I followed: In this", "label": 0 }, { "main_document": "actual and predicted responses show an underdamped response, as shown in figure 8, whereas figure 9 shows that the actual and predicted responses show a critically damped response. Overall, these predicted models are effective in modelling. 
From figure 10, the model output can be shown as below, where e(t) is white noise with zero mean and a given variance. In addition, figure 11 shows the z-transform of the predicted response, which presents the stability of the system model. This predicted response seems to be stable because the two poles lie inside the unit circle. From figure 12, the model output can be shown as below. Furthermore, the locations of the two poles are inside the unit circle of the z-transform shown in figure 13. This means that this predicted response is stable. This model uses three coefficients of 'a', which is different from the previous model. From figure 14, the model output can be shown as below. The z-transform in figure 15 shows that the predicted response is stable. From figure 16, the model output can be shown as below. The predicted response is also stable, but there are two poles which are complex values (see figure 17). This model is slightly different from those previous models. In other words, the numbers of coefficients of 'a' and 'b' are both three. From figure 18, the model output can be shown as below. Moreover, the predicted response is stable (see figure 19). From figure 20, the model output can be shown as below. The predicted response is also stable (see figure 21). This model uses 4 coefficients of 'a' and 3 coefficients of 'b' to predict the system response. From figure 22, the model output can be shown as below. Furthermore, the z-transform shows that the predicted response is stable (see figure 23). From figure 24, the model output can be shown as below. The predicted response is also stable, as shown by the z-transform in figure 25. The results of this model after the change in data set are similar to those of the previous model using the same data set. 
In other words, both the actual and predicted responses are critically damped, and the predicted response is also stable, as can be seen from figures 25 and 26 respectively. Figure 26, plotted on a logarithmic scale, shows the loss function. The values of the loss function for models 1 and 2 are far higher than those for models 3 and 4. Models 3 and 4 are, therefore, preferable to models 1 and 2. This is because models 3 and 4 are large enough to cover the true system. In addition, model 3 is preferable to model 4, but not remarkably better. The reason for this is that model 4 is more complex than model 3, but the values of the loss function resulting from both data", "label": 0 }, { "main_document": "binary semaphore that is initialized to zero. This has the effect that any thread that does a P operation will be blocked until another thread does a V. This kind of construction is very useful when the order of execution among threads needs to be controlled. (Wikipedia, 2007) A monitor can be used for synchronizing two or more computer tasks that use a shared resource. Not only does it ensure tasks exclusive access to resources, it also allows them to synchronize and communicate with other tasks. A monitor contains a set of data items and a set of procedures, called entry routines, that operate on the data items. The monitor data items can represent any resource that is shared by multiple tasks. A resource can represent a shared hardware component (e.g. hard drive) or a software component (e.g. file). Generally monitor data can be manipulated only by the set of operations defined by its entry routines. (Belzer et al. 1987) Mutual exclusion is enforced among tasks using a monitor - only one task at a time (called the 'active task') can execute a monitor entry routine. 
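The monitor idea can be sketched in code. A minimal Python illustration (the language choice and the bank-balance data item are assumptions for the sketch): each entry routine acquires the monitor lock, so only one task at a time is active inside the monitor.

```python
# A small sketch of a monitor using Python's threading primitives (an
# illustrative choice; the essay is language-neutral). The balance is the
# shared data item; deposit/withdraw are the entry routines, and the lock
# ensures only one task at a time -- the 'active task' -- runs inside them.
import threading

class AccountMonitor:
    def __init__(self):
        self._lock = threading.Lock()
        self.balance = 0            # the shared data item

    def deposit(self, amount: int) -> None:
        with self._lock:            # lock on entry, unlock on exit
            self.balance += amount

    def withdraw(self, amount: int) -> None:
        with self._lock:
            self.balance -= amount

m = AccountMonitor()
threads = [threading.Thread(target=m.deposit, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(m.balance)   # 100: the invariant (balance reflects all operations) holds
```

Without the lock, concurrent `balance += amount` updates could interleave and lose deposits; the monitor invariant mentioned below is exactly what the lock preserves.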
Mutual exclusion is enforced by locking the monitor when execution of an entry routine begins and unlocking it when the active task gives up control of the monitor. If another task invokes an entry routine while the monitor is locked, it is blocked until the monitor becomes unlocked. The monitor invariant in this case simply says that the balance must reflect all past operations before another operation can begin. It is usually not stated in the code but may be mentioned in comments. There are, however, programming languages, like Eiffel, that can check invariants. (Wikipedia, 2007) To avoid entering a busy waiting state, processes must be able to signal each other about events of interest. Monitors provide this capability through condition variables. When a monitor function requires a particular condition to be true before it can proceed, it waits on an associated condition variable. By waiting, it gives up the lock and is removed from the set of running entry routines. Any process that subsequently causes the condition to be true may then use the condition variable to notify a process waiting for the condition. A process that has been notified regains the lock and can proceed. In computer science, \"The basic operations in message passing languages are \"send a message\" and \"receive a message.\" Since a message must be sent before it can be received, message passing imposes a causal order on the actions of the program\" (John H., 1999). The destination of a send operation and the source of a receive, seen as a pair, are called a communication channel. Forms of messages include function invocation, signals, and data packets. There are a few different models of message passing. The fundamental message passing model is defined as: Other models include: (Maui High Performance Computing Centre, 1996) Message passing uses two communication mechanisms: The asynchronous message passing mechanism buffers the communication between sender and receiver. 
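This buffered, asynchronous mechanism can be sketched with Python's standard queue module (an illustrative choice; the essay names no particular language):

```python
# Buffered (asynchronous) message passing: the sender deposits messages in
# the channel's buffer and continues at once; the receiver takes them out
# later, preserving the causal send-before-receive order. The message values
# are invented for the sketch.
import queue
import threading

channel = queue.Queue()             # the communication channel (the buffer)

def sender() -> None:
    for msg in ("first", "second", "third"):
        channel.put(msg)            # returns immediately; buffer decouples them

received = []

def receiver() -> None:
    for _ in range(3):
        received.append(channel.get())   # blocks until a message is available

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t2.start()
t1.start()
t1.join()
t2.join()
print(received)   # ['first', 'second', 'third']
```

The FIFO buffer is what lets the sender run ahead of the receiver, which is the behaviour the following sentence describes.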
That allows the sender to continue execution after sending a message. This is analogous to", "label": 0 }, { "main_document": "standardised reference materials. The control sample results are added to an XRD database to help establish the frequency at which the same combination of pigments and extenders in a certain colour category occurs. This data can help determine the significance of a diffraction pattern match and therefore its evidential relevance. This is achieved by considering the amount of crystalline material in the sample: if it is high, or if the sample contains an unusual crystalline component, the relevance of the results will be substantial. Common drugs encountered by the police on the street are those found as loose white powders, which are rarely identifiable by visual analysis; these drugs include cocaine, heroin, and types of amphetamine. This type of drug is often mixed with other substances known as diluents or adulterants. XRD is not normally the first type of analysis, but can often follow the initial visual examinations by eye and occasionally by use of visible light microscopes, and the chemical testing used to correctly identify the drugs present. XRD's usual role is to either: 1) identify the exact chemical form of the drug, which could be salt, base, or acid; 2) correctly identify any diluents or adulterants present; or lastly 3) compare one drug confiscation with another. Paper is another material that can call upon the use of XRD for analysis; paper is often presented to a forensic analyst when used to write ransom notes or threatening letters including hate mail, or used to wrap up drugs. Again XRD may not be the first port of call but may be put to action after an initial visual examination by botanical or fibre experts. XRD is used to identify the fillers present in the paper and provide information on the percentage crystalline composition of the cellulose. 
Each is dependent on the quality of the paper and differs according to where the paper was purchased. There are many advantages and disadvantages associated with the use of XRD as an analytical technique. Some of the advantages of this technique include its non-destructive nature, mainly a result of the ease of sample preparation, meaning that the sample is preserved and can therefore be reanalysed afterwards using other techniques; and not only does it provide qualitative information identifying any crystalline phases present, but also a quantitative estimate of their amounts. It requires relatively small amounts (~1 mg), which is important as specimens given to an analyst from a suspect or scene can often be small. Such a specimen can also often be a contaminated, multi-phase specimen, but this does not affect XRD analysis either, as it has the ability to analyse many types of material (organic, inorganic, metallic and alloys) simultaneously. It also identifies the compounds present, as opposed to the merely elemental analysis that techniques such as x-ray fluorescence (XRF) perform, allowing XRD to differentiate between differing hydrate forms and polymorphs (compounds that crystallise in different forms). There is also, because it is an established technique, a large library of known crystalline structures available from the ICDD powder diffraction file, which contains ~500,000 XRD patterns taken from pure substances of metals, alloys,", "label": 1 }, { "main_document": "protect themselves. This might nullify their plea of self-defence since the use of weapons might imply excessive force and the act of procuring it might be regarded as a deliberate and calculated move and not in immediate response to an attack. 
Aileen McColgan, \"In Defence of Battered Women Who Kill\" (1993) 13(4) Helena Kennedy, Also, the circumstances under which a battered woman kills her abuser often do not reflect standard notions of 'imminent danger', since she might strike when her abuser is made vulnerable by sleep or alcohol, which might be perceived as unreasonable since the woman was in no imminent danger at the time of her attack. This male view of the law of self-defence, coupled with gender stereotypes, limits the ability of judges and jurors to define a battered woman's conduct as reasonable. Sharon Angella Allard, \"Rethinking Battered Woman Syndrome: A Black Feminist Perspective\" in Natalie J. Sokoloff with Christina Pratt (eds.) Aileen McColgan, \"In Defence of Battered Women Who Kill\" (1993) 13(4) However, the notion of 'battered woman syndrome' has been criticised due to its over-reliance on the psychological condition of the woman as 'learned helplessness', whereby she is reduced to a passive victim of long-term abuse without any agency to resist or escape from her situation. This notion individualises her situation and denies the structured nature of violence against women, which is entrenched in the societal fabric and legitimised by the legal system. In Ngaire Naffine's opinion, under the BWS, the woman's response is seen as exceptional and pathological, bordering on abnormality, which is evident from the fact of psychiatrists' expert testimony, thereby reinforcing the notions of irrationality or disorder on the part of the woman. Ngaire Naffine, \"Sexing the Subject (of Law)\" in Margaret Thornton (ed.)
[1992] 1 All ER 306 In addition, black feminists have contended that the notion of BWS is predicated on gender characterisations mostly applicable in a white heterosexist society (of women as emotional, submissive, dependent, and so on), while ignoring the historical and cultural experiences of black women, whom centuries of slavery and racial discrimination have led to actively resist and counter both racism and sexism. Sharon Angella Allard notes that while this theory might explain the reasonableness of a battered woman's behaviour, the construction of 'woman' that imbues the idea of BWS is based upon limited societal constructs of appropriate behaviour for white women; the racist stereotypes and demonised media images of black women as aggressive, strong and violent do not fit the description of the passive victim required by the BWS in order to plead a lenient penalty on the ground of 'learned helplessness'. Allard gives the example of two battered women, one white and the other black, who were both accused of the battering deaths of children in their custody. While the white woman was let off since she had been severely emotionally tortured by her abuser, the black woman was charged with manslaughter, showing that the treatment of battered women in court varies depending on whether they are white or black. Due to
My Mp3 is a new small-sized public limited e-company which conducts its business online. The web page will be designed to sell MP3 and MP4 players from various brands, with a focus mainly on the market in China. Customers can easily compare different brands' products with similar functions (price for value) and purchase their favourites at a discounted price on www.mymp3.com. There are also additional services, including music downloads. Digital products such as MP3 and MP4 players are popular fashion items that appeal to a broad target market. It is a growing industry in which customers and digital product providers frequently change their products in pursuit of updated or newly fashionable models. Moreover, although competition is increasing with new market entrants, a quality service with a well-designed, user-friendly web page will help My Mp3 to succeed in this industry. The operational management team will include one full-time office clerk and one manager (the owner) to manage the website and after-sale customer services, and one part-time professional IT designer to help set up the web pages. Cash flow is regarded as the vital factor for a business in most contexts, as it indicates whether a business is at credit risk. This business plan comprises the revenue models and the cost models for the new business. There are two main income sources for My Mp3: revenues from product sales and revenues from service sales. The detailed quantities are forecast on a prudent (conservative) basis, shown in Table 1 in Appendix 1. Those figures are assumed on an average basis over the five years. The first income model is the sales of MP3 and MP4 players, which are the dominant revenues (approximately 60 per cent) of this business. My Mp3 will promote these digital products from different companies, covering both famous brands and newly established companies.
It is forecast that there will be an average weekly sale of at least 10 items (MP3 and MP4 players) throughout the five years, at prices usually 10-20% lower than the high-street price. There are nearly 52 weeks in a year; therefore, the annual revenue can be projected on this basis. The loss from discounting the price can be offset by the saving on marketplace rent. The advantage of this revenue model is that the business receives cash immediately, which can be a vital factor in a new company's success. Secondly, advertising revenue is another vital income for an online
From one of the poems of Pindar we have a description of the medical activities of Asklepios, the father of medicine. These activities are: Dickie 2001: 25 At least concerning the mythology of Asklepios, 'rational' and 'magical' medicine appear to be intertwined. None of these considerations is mentioned at all by David when arguing that Egyptian medicine had a 'rational' side. In hindsight this is surprising, since these insights would benefit her case against the theory that we should assign the terms 'rational' and 'irrational' to Greek and Egyptian medicine respectively. David in Horstmanshoff & Stol 2004: 137 Let us proceed to examine the interaction between Egyptian and Greek medicine. There are two relatively minor points to be observed, which may well simply boil down to personal conjecture. The first concerns David's insistence that the Greeks inherited many elements of medical practice from the Egyptians. Personally, I believe that we should not simply assume any more recent civilization incapable of making discoveries independently of older cultures. Furthermore, this aspect of study is notoriously difficult and subjective. The second point is slightly more serious. David states: David in Horstmanshoff & Stol 2004: 144 David in Horstmanshoff & Stol 2004: 145 Other scholars would suggest a more rapid interaction than David does. Frankfurter states that Egyptian priests were interacting with the Greek elite as early as the Ptolemaic period. Furthermore, the Greek Magical Papyri indicate Greek and Egyptian magical/medical interaction; some of the papyri are dated to the first century BC. Frankfurter 1998: 223-224 Betz 1992: xxiii-xxv My final criticism concerns the lack of working definitions within the article: namely, the terms 'rational' and 'irrational' are both unhelpful and not applicable to Egypt or Greece. This problem is not an isolated one, and various scholars utilize a variety of terms when discussing this subject.
Although I have difficulty in", "label": 1 }, { "main_document": "In an increasingly globalizing environment, the written press is gaining power worldwide. In Asia today, this is particularly visible in the relationships between Japan and China. Due to these countries' oppositions during World War II, Sino-Japanese ties have never been serene Today, however, tensions between these countries are resurfacing at a worrying pace. 2005 has seen the mobilization of over 400,000 Chinese signing online petitions to oppose Japan's bid for a permanent seat on the U.N. Security Council; mass anti-Japan protests opposing the publication of a text-book said to \"gloss over Japan's wartime atrocities\"; and high Sino-Japanese discord over the control of gas exploitation in the East China Sea Many point towards the media as the source of such sudden hostility. Hokkaido University professor Kiyoshi Takai claims that \"biased reports [are] breeding misunderstanding and hatred\" between these countries In particular, the controversial history textbooks published in Japan have provoked countless articles in the Sino-Japanese press. By analyzing these countries' newspapers' treatment of this issue, and by comparing these views to those of the international press and to a more political analysis of the situation, one may evaluate the media's role in shaping readers' minds - not only in Japan and China, but worldwide. \"International Herald Tribune\", 12 February 2002 \"The Japan Times\", 18 April 2005 Chung-Yan, \"China & Japan - superficial, biased reporting hurting ties between nations\" Articles covering the textbook issue in the Sino-Japanese press show an evident reporting bias; as noted by Kiyoshi Takai, \"for the Chinese media, Japan-bashing has become an obsession\" Excerpts from China's \"The People's Daily\" are mostly written in an objective style: the author seldom offers personal judgement, and facts are quantitatively presented. 
In subtext, however, these articles often appeal to anti-Japan sentiments. Article 1 describes Chinese president Hu Jintao's recent speech, in which Japanese are described as, \"arch criminals who . . . had their hands blotted with the blood of the people around the world\", and who \"wantonly trampled underfoot the beautiful land of China\" Despite the article's carefully detached style here, by exclusively citing such quotations it becomes clearly biased. Likewise, behind A.2's apparently objective description of anti-Japan protests in China, the author adopts an individual focus and introduces readers to \"an old man who experienced the World War II [and who] said the war had broken his family up\", stirring resentment in his readers Insisting on such individual focus particularly strongly, A.3 appeals to readers' emotions by detailing the wartime hardships of a Shanghai farmer, who \"lost his mother in a Japanese bombing attack\", whose father and brother \"were stabbed to death by Japanese soldiers\" and whose grandmother \"died in prison . . . after she was forced to eat human flesh\" The Chinese media thus starkly differs from the approach that political scientists may take to explore the textbook situation: these articles adopt an individual focus often avoided by political analysis for objectivity's sake, and only really return to discussing the textbook issue itself in their last lines. Nonetheless, this approach may be aimed not at igniting anti-Japanese sentiments in readers,", "label": 0 }, { "main_document": "Influenza A virus causes pandemics as continued selective pressure from the immune response causes frequent mutations at antigenic sites. These accumulate most frequently in the five epitopes of the haemagglutinin subunit, forcing the immune response to alter to achieve virus clearance. Occasionally, genetic reassortment can occur between two co-circulating influenza strains. 
New haemagglutinin and neuraminidase subtypes are acquired, resulting in antigenic shift and substantial mortality due to the lack of pre-existing immunity. This laboratory investigation demonstrated how the fundamental assays (the haemagglutination assay, the haemagglutination inhibition assay and the infectivity assay) aid influenza characterisation. It was demonstrated that antigenic drift can happen gradually if one epitope mutation is selected for at a time; if two epitopes are selected, the virus is completely neutralised. Influenza A is a major pathogen and a leading cause of death in infants, the immunocompromised and the elderly. In the US alone, an average of 51,000 deaths occur annually due to influenza. This highly contagious respiratory disease causes pandemics and epidemics resulting in high mortality and morbidity, which could be prevented through vaccination. The 1918 pandemic of H1N1 is ranked as one of the three most destructive human epidemics, with a death toll of approximately 25 million. It is estimated that the annual epidemics have had a more significant impact on human health than the three pandemics combined. Since 1918, the other pandemics have been H2N2 in 1957, H3N2 in 1968 and H1N1 again in 1977. It is believed that we are overdue for another pandemic, and the threat of a new one is believed to come from 'bird flu', or H5N1. Influenza strain A/Hong Kong/156/97 initially appeared in Hong Kong in 1997 after crossing the species barrier from birds to humans. Since then it has resulted in the culling of millions of birds, and 147 human cases with 78 deaths. As yet, H5N1 has not acquired mutation(s) which would allow efficient transmission between humans, although the virus is already demonstrating resistance to the anti-viral drug amantadine. Influenza viruses are classified within the orthomyxovirus family (Figure 2.1).
There are three main types, A, B and C, of which A and B cause serious disease. Influenza A is classified into subtypes based upon the degree of homology between the haemagglutinin (HA) and neuraminidase (NA) surface proteins; there are 16 subtypes of HA and 9 of NA. The main antigenic sites are the three envelope proteins, of which HA and NA are the most important. M2 is the least antigenic of these, forming proton channels which span the envelope. The HA protein trimer has an important role in viral entry and uncoating. NA cleaves terminal N-acetylneuraminic (sialic) acid by a hydrolysis reaction. This is important for viral egress, as it allows the removal of surface sialic acid at the release site, preventing binding upon budding and subsequent re-infection of the cell. HA and NA proteins are the main antigenic sites targeted by the adaptive response, with five antigenic sites on the HA protein. Regular selection pressure from antibodies, together with the lack of proof-reading ability in the viral RNA polymerase, drives antigenic drift. This finding was confirmed by
The fluid effects are quite drastic when another fluid (such as succinic acid) is added to the sample. Although quantitative data could be collected in this fashion, fluid drift effects would have to be accounted for in the analysis, or a way to counter the flow should be designed into the experiments.
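One simple way to account for such drift in the analysis, assuming the flow is roughly uniform across the field of view, is to subtract the mean displacement of all tracked bacteria from each individual trajectory, leaving only the motility-driven motion. The function and trajectories below are a hypothetical sketch under that assumption, not part of the original experiment:

```python
# Sketch: remove a shared fluid-drift component from tracked bacterial positions.
# Assumes the drift is roughly uniform over the field of view; the example
# trajectories are invented purely for illustration.

def subtract_drift(tracks):
    """tracks: list of trajectories, each a list of (x, y) positions per frame.
    Re-expresses each track relative to the mean (drifting) position of all
    bacteria, anchored at frame 0, so only relative motion remains."""
    n_frames = len(tracks[0])
    n = len(tracks)
    # mean position of all tracked bacteria at each frame
    mean = [(sum(t[f][0] for t in tracks) / n,
             sum(t[f][1] for t in tracks) / n) for f in range(n_frames)]
    return [[(t[f][0] - (mean[f][0] - mean[0][0]),
              t[f][1] - (mean[f][1] - mean[0][1])) for f in range(n_frames)]
            for t in tracks]

# Two invented tracks: both carried along in x by the flow,
# but the second also translates relative to the fluid.
tracks = [[(0, 0), (1, 0), (2, 0)], [(0, 1), (2, 1), (4, 1)]]
print(subtract_drift(tracks))
```

Note that this only removes a spatially uniform drift; a pressure-gradient flow that varies across the cover slip would need a local estimate of the drift field instead.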
Thagard's attempted answer to the problem of demarcation is not ideal. By defining only pseudoscience, not science, unintuitive results occur whereby research programmes like biorhythms are not denounced as non-science simply because there are no alternative programmes for them to be less progressive than. The formulation of Thagard's criteria means that a theory can be scientific at one point in history and pseudoscientific at the next. He admits in his article that astrology was not pseudoscience to Kepler because "existing alternatives were scarcely more sophisticated or corroborated than astrology" (Thagard 1978 cited Curd & Cover 1998, p.33). This would make the property of 'being scientific' time-dependent, and although Thagard, Lakatos and Kuhn all agree that the property of 'science' should be viewed in its social and historical context, I feel that science is time-independent and that its definition should reflect this. In his later 1988 book, in response to this issue he discusses a new criterion that "pseudoscientific theories are often highly complex and riddled with ad hoc hypotheses" (Curd & Cover 1998, p.74), which provides a way of spotting pseudoscience even if there are no known alternatives. But despite Thagard's criteria building on the thoughts of Popper, Kuhn and Lakatos, we still do not have a clear definition of science that can separate the pseudo from the genuine without exception. Another potentially productive way to define science might come from analysing the general principles shared by most of what people consider to be science. This method is not one of knowing what science is beforehand and then trying
During cleavage of the zygote, SKN-1 protein is loaded unequally between the P1 and AB sister blastomeres, an effect mediated by MEX-1 and PAR-1. SKN-1 is required for differentiation of the EMS daughter of P1 ( , 1993). SKN-1 activates the transcription of at least two genes, and the MED transcription factors appear to specify the fate of the EMS cell. SKN-1 protein is present in the embryo from the 2-cell stage until the 12-cell stage, when it suddenly disappears. However, zygotic expression is required post-embryogenesis for the development of intestinal cells (Maduro et al., 2001). Following cleavage of EMS, SKN-1 mediates the endodermal development of the E blastomere. The transcription factor POP-1 in the posterior EMS daughter receives signals from MOM-1 and MOM-2 (Wnt pathway components) and a receptor tyrosine kinase from P2, allowing SKN-1 to switch on endodermal genes in that cell, which becomes E (Fig 4; Maduro et al., 2001). The somatic descendants of the P2 blastomere (C and D) require another putative transcription factor, PAL-1. PAL-1 is regulated by the MEX-3 protein, which binds to its mRNA; PAL-1 is also inhibited by SKN-1, preventing it from acting in the EMS cell (Hunter & Kenyon, 1996). After the second cleavage, equal levels of SKN-1 are found in both daughters of the P1 blastomere, EMS and P2. However, only EMS produces pharyngeal cells, and P2 is still able to develop into a healthy germline ( , 1992). The maternal protein PIE-1 is a novel 38 kDa protein whose structure is not similar to any known transcription factor, although it contains two CCCH zinc-finger domains that have been shown to bind RNA in other proteins, implicating a role in mRNA cleavage, processing and/or turnover. However, later studies provided evidence for a role in the direct repression of zygotic transcription by RNA polymerase II (RNAPII). Firstly, PIE-1 is present in the nucleus as well as the cytoplasm (Mello et al., 1996).
Secondly, PIE-1 disappears from the germline shortly after division of P4 into Z2 and Z3, coinciding with the onset of zygotic transcription, as determined by the appearance of the H5 phosphoepitope on the carboxyl-terminal domain of RNAPII (RNAPII-H5). Somatic blastomeres acquire the ability to transcribe new mRNA minutes after separation from the germ lineage, and furthermore, newly transcribed mRNAs are additionally detected in germline blastomeres lacking PIE-1. Also, PIE-1 can be ectopically expressed in somatic cells by transformation. It is suggested that PIE-1 prevents maternally encoded transcription factors such as SKN-1 from activating zygotic genes that promote somatic fates, through the general repression of mRNA transcription by RNAPII in germline blastomeres, thus protecting the germline fate ( , 1992). The mechanism of transcriptional repression has recently been dissected (Batchelder et al., 1999; Zhang et al., 2003). Unexpectedly, PIE-1 inhibits transcriptional elongation rather than transcriptional initiation. The process of elongation requires phosphorylation by
This may be seen as unethical, because selective booking means lying to customers, and the Hotel Proprietors Act 1956 states that 'a hotelier must offer food, drink and sleeping accommodation to any traveller who is able and willing to pay a reasonable sum and is in a fit state to be received' (Huyton et al 1997). Regular guests should be identified through guest-history records and offered exceptional rates so that dissatisfaction is kept to a minimum. The integration of customer relationship management with revenue management is the key to the future of the hospitality business. Hotels need to establish the long-term value of each customer relationship by ranking customers to identify the most valuable and then servicing them differently (Haley and Watson 2002). This approach is evident in the industry at Radisson International and Rosewood Hotels and Resorts, and Harrah's Entertainment is the current industry leader in CRM. Short-term goals will be achieved, but the loss of customers and wavering attention to service will result in considerable financial cost. Regarding the customer, the RM system does have its limits, chief among them the assumption that historical trends will continue, which may be deemed narrow-minded. Identifying and adjusting to external events which have an impact on previous data is needed, and introducing competitive data in a meaningful way is a difficult problem. This problem relates to the staff, who are important in analysing demand. Employees usually live in the local area and are aware of public issues, read local newspapers and gain views from friends and relatives about events and competition, so they are an asset in predicting demand. Revenue management is not an extension of the past, because the supply side is shaped by the plans and strategies of both the hotel and its competitors, and the demand side is shaped by human nature, as shown by Jones and Hamilton (1992).
It provides sophisticated ways to support reservations, but responsibility for final forecast decisions must rest with the management team; as noted by Lefever (1988), technology has not reached a point where it can totally replace human intuition. The 'people element' is stressed by Lee-Ross and Johns (1997), who examine how the implementation of revenue management systems impacts on
Realists, firstly, view the state as the main actor of world politics, which seeks to maximize national interests in a hostile international environment. Being devoid of central authority, the international system operates in anarchy; it therefore possesses no mechanism for punishing aggressors, and states must rely on 'self-help' to survive. Underlying this concept is the classical realists' assumption that the drive for power is a fundamental aspect of human nature, which is reflected at the state level. Because states primarily seek to gain power relative to other states, they are accordingly in constant competition; there can therefore be no trust in the international system, as the stakes involved - the possibility of war, for instance - are too high. Consequently, as structural realists such as John Mearsheimer point out, "states in the international system fear each other". In a realist world, cooperation between states is hence impossible, making the outbreak of war a perpetual possibility. Baylis, John and Smith, Steve. Mearsheimer, John J. "The False Promise of International Institutions". The conclusions drawn from liberal assumptions are vastly different. Rather than establishing the state as international politics' main actor, liberals claim that individuals - and, according to liberal institutionalism, other non-state actors such as intergovernmental organizations - play the greatest role on the world stage, and that a state only possesses the authority invested in it by its people. On the economic level, liberalism advocates minimal state intervention, free trade and abidance by market forces; on the political level, liberals conclude that states seek absolute rather than relative gains, through interstate cooperation rather than competition.
While realists see war as a necessary tool of international relations, liberals condemn it, attributing it, like liberal internationalists, to excessive government intervention in trading relationships, or, like liberal idealists
This interaction between the A not-so-obvious question needs to be settled first: "What is an Indian elite?" That political rulers should be classified as 'elite' is self-explanatory, since with them lay control of the political economy of India. However, I have also included merchants in this category, since a large part of the company's plans revolved around capturing trade routes or collaborating with Indian merchants to increase profitability. Moreover, most merchants and Kalyan Chaudhuri, Two conceptual puzzles need to be solved to achieve a clear understanding of these wide-ranging trends. Firstly, we need to comprehend how the priorities of the East India Company changed and the reasons behind these shifts. The supply-demand model of a market is useful in this regard. The fundamental incentive for freelance and company traders was put simply by P. J. Marshall: "They were in India to get rich." The vast and diverse riches of India were the raison d' However, as a result of these actions, the fundamental character of the company's role in India had also undergone major changes. With the establishment of political strongholds in India, the East India Company was transformed from "an organisation originally created to accumulate profits" to one that "drew its basic sustenance from land revenues". For example, the company had to maintain a permanent bureaucracy and a standing army, the latter being used for dual purposes. First, it was used to defend the company's investment interests, such as trade routes or agricultural and manufacturing hubs (such as Calcutta or Madras). Second, it was also used as a
Therefore, to restore Large plc's market share, Large plc should focus on managing the customers on the left section of the whale curve. For instance, managers should be prepared to offer discounts and special services to retain the loyalty of these customers if a competitor threatens. This prevents Large plc from losing market share to its competitors. Furthermore, its effectiveness is shown in the analysis of the Kanthal case study (2). It is contended that Kanthal's number of profitable customers increased from 150 to 170 after implementing this strategy: to give up share in low-profit customers, which turned out to be their two largest customers. Furthermore, the current ABC system should seek to extend its analysis from product lines to channels to locations, or from brands to customers and regions. With this, managers are able to see the profit made by location and customer, which provides insights into understanding the product market and customer segments. By being closely in touch with segments, the marketing department can respond quickly to even slight changes in what target customers want, and also to the kind of competitive advantage to seek. This gives Large plc a high potential to restore its market share through identifying which activities are adding value by customer and region. Furthermore, it is contended that ABC identifies to a certain extent the value-adding or non-value-adding activities. However, this is highly subjective and will depend upon the perspective of the person doing the classification. (i) The Balanced Scorecard approach emerges from the need to link both financial and non-financial measures of performance and identify key performance measures. Particularly with its more customer-focused approach, it is contended that the BSC provides a comprehensive framework for managing customer relationships (4). This is in line with Large plc's strategic vision. 
Using this method, managers are to identify the customer and market segments by developing performance measures that track Large plc's ability to create satisfied and loyal customers in each targeted segment. Performance measures typically include market share, customer retention and loyalty, and new customer acquisition. It is vital that Large plc is able to segment its market accordingly and understand its customers, to ensure that it is identifying and exploiting different market opportunities besides increasing its market share. (ii) In evaluating the potential to restore Large plc's market share, the BSC supports this by having performance measures relating to market share. For example, market share can be measured in terms of sales revenues, unit sales volumes or number of customers. By having this, managers can see whether the strategy adopted is achieving the expected results in targeted market segments or otherwise. In this instance, it is possible that Large plc has not recognised the strategic segment in which to market its product. Therefore, it is observed that overall market share falls because there is a possibility that Large plc's product appeals to a segment which has a relatively small market. Using the BSC", "label": 0 }, { "main_document": "fatigue by examining the fracture region. A fatigue fracture will have two distinct regions: one being smooth or burnished as a result of the rubbing of the bottom and top of the crack; the second is granular, due to the rapid failure of the material. A diagram of how a fracture may look can be seen in Figure 7.5, which shows how a defect on the surface of the component allows crack propagation, each clamshell indicating a load and unload cycle. The spacing and number of clamshell marks indicate the magnitude of stress that the component was under during operation. Figure 7.6 shows the failure surface of the plain beam due to fatigue failure. 
The picture shows that the fatigue originated from a defect in the surface of the beam, where a crack grew down through the beam until it finally fractured. The surface is smooth near the origin point, with a gradually coarsening fracture surface as the crack progressed. Although the image isn't particularly clear, there appears to be a mark on the surface which may have been the cause of the initiation of a crack, since this defect would be acting as a stress concentration. This report has investigated the effects of loading beams, both plain and with stress concentrators present, past their yield point. The effects of post-yield loading on fatigue life have been predicted using fatigue theory, with comparisons made with results obtained through practical testing. It has been shown how post-yield loading and unloading sets up a state of residual stress/strain in a component, which causes the component to become deformed but has also been shown to improve a component's fatigue life under certain loading conditions. The process of how fatigue is initiated has been discussed, common failures starting at a defect on the component surface, although in some cases due to a defect within the material structure. In both situations crack propagation is caused by a defect acting as a stress concentrator, the crack propagation rate being dependent on the tensile stresses the component is subjected to during operation. Fatigue failure only occurs in components subjected to tensile loads, thus by producing residual compressive stresses in a component through various processes it is possible to increase a component's fatigue life. The tensile loads that the component is subjected to must first overcome the residual compressive stresses. Fatigue failure can occur for a component operating well within its physical material limits; therefore a suitable method for predicting a component's fatigue properties is required. 
S-N curves derived from the testing of many specimens provide the most accurate way of predicting a material's fatigue properties, although equations derived from common S-N curves can be used when an S-N curve is unavailable for a given material. Predicted fatigue life using either method provides an accurate indication of a component's life, allowing estimation of the maximum allowable stresses for a component's required life cycle. The best method to guard against fatigue failure is to ensure that the component is subjected to only compressive stresses.", "label": 1 }, { "main_document": "To study the velocity profiles for both laminar and turbulent flows through a pipe and identify the characteristics of each flow. In order to measure the pressure drop between two points along the pipe later in the experiment, the Reynolds number is required in the relevant calculations. The properties of the working fluid must also be known as a pre-requisite for experimental analysis. The first step was therefore to establish values for the density and viscosity of the working fluid, which was oil in the current experiment. In order to measure the density of the fluid, a sample can be weighed using a density bottle whose volume is also noted. The procedure was to first measure the weight of the empty density bottle and then to measure the weight of the bottle filled with oil of volume 50 ml (i.e. the capacity of the bottle). The difference between the two values is therefore the mass of oil. 
Substituting these into the above equation therefore yields a value for the density of 866 kg m⁻³. In measuring the viscosity of the working fluid the following relationship is used: Where The procedure was to use a Ferranti viscometer to take measurements at different speed settings to enable a number of values to be obtained for Readings were recorded for the five speed settings and the corresponding The shear rates are obtained from given values for the corresponding speed settings: The values for viscosity can then be obtained. These are tabulated below: It can be observed from the table above that the readings for speed settings 1 and 2 are recorded as '100+'. This is due to the limited scale of the Ferranti viscometer used, so that at these speeds readings beyond 100 could not be recorded. The results for the viscosity therefore yield a mean value of To determine the experimental velocity profiles for the laminar and turbulent flows in the pipe, some data is required so that empirical results can be interpreted graphically for analysis (see results). The experimental procedure for obtaining this data is to use a pitot-static tube to measure the distances across the diameter of the pipe from a nominal point outside; the corresponding head differences are then noted for two points, S and T, on the pipe. The head differences are obtained by reading the central manometer pipes for S and T, the scale for which is in millimetres. The apparatus used also enabled the measurement of mass flow rates. In the current experiment the procedure was to note the time taken for a quantity of oil to fill a tank. Three measurements were taken for a mass of 50 lb so that a mean value for time (in seconds) could be calculated. This method was carried out for both laminar and turbulent flows. 
(see appendix 6.1 for calculations) The pressure drop between two points along the pipe can be measured experimentally and compared with theoretical values for the corresponding laminar and turbulent flows. Two points The experimental data was collected and tabulated for both the pressure and", "label": 0 }, { "main_document": "to group the purchasing activities so as to be most effective. One common approach is to do this by commodity or material groups, where each buyer deals with a particular range of items; for example, one buyer may be responsible for raw materials, another for mechanical components and another for electrical/electronic materials. Apart from the advantage of specialization in a particular range of goods, this helps to avoid duplication of research and negotiation effort at plant level. It should also facilitate data collection and communication inside the department and with other sections of the company as a whole. It can strengthen the buyer's negotiating position through consolidation of total requirements and can reduce time spent in negotiation. In addition, liaison with suppliers is often improved by this means. However, if this approach to division of work is followed, it is important to bear in mind that provision should be made whereby a colleague can take over responsibility for a particular group of materials in the absence of the buyer normally responsible. One method which is commonly used to deal with this matter is to 'pair' buyers. Not only can this help to overcome such temporary problems, but it can be a useful means of staff development. In the organization shown in figure 2, buyers 1 and 2 and buyers 3 and 4 might work together in this manner. In a larger organization (see figure 3), each section will tend to become self-sufficient in this way. Development of staff in such cases, however, may involve moving people between sections. 
Buyer specialization by commodity or material group may often be the best way to subdivide the work of a supplies department, but this is not always the case. In the construction industry, for instance, individual buyers or buying sections are often responsible for all purchases for particular contracts. Often these contracts amount to huge sums and the construction site may be many miles from where the responsible buyer is located. In such circumstances, a single contact facilitates liaison, even though there may be advantages in concentrating purchases for negotiation. A combination approach is frequently favoured, where buyers assigned to particular contracts place orders against contracts negotiated on a 'commodity' basis for the company as a whole. These contracts will typically be negotiated by members of staff who have been designated as major contract buyers. Usually this work is undertaken in addition to other functions. With regard to the allocation of purchasing tasks, responsibilities and authority, three different levels may be differentiated: The strategic level covers those purchase decisions that influence the market position of the company in the long run. These decisions primarily reside under the responsibility of top management. Examples of purchase decisions at this level are: The development and issuing of operational guidelines, procedures and task descriptions, which provide authority to the purchasing department. The development and implementation of auditing and review programs in order to monitor and improve purchasing operations and performance. Decisions to outsource activities, which have until now been executed by the company, to outside suppliers. 
Establishing long-term contracts and contacts with", "label": 0 }, { "main_document": "that is: then substitute (6) into (5): Since this model assumes that the government is pegging the exchange rate, therefore Denote this exchange rate as Therefore the monetary policy is constant if the government plans to peg the exchange rate at It is time to explore how foreign reserves fall under the conditions above. First break down the money supply as follows: in which D and R represent domestic credit and foreign reserves. Furthermore, according to assumption (2), the relation between the budget deficit and domestic credit and public bonds is: where G stands for government spending and T is the tax revenue. G and T are in real terms; therefore the increments of D and B are discounted by the price level P. As indicated in assumption (2), the borrowing capacity from the public has already reached its maximum level, so B is not growing any more and hence Now, because of Assume that the rate of domestic credit expansion is equal to Recall equation (8): if the government wants to peg the exchange rate at its long-run equilibrium level, Since M = D + R, it can be obtained that: Now substitute Based on this result, the country's reserves will be exhausted sooner or later and hence its currency will be forced to devalue. After the discussion in the foregoing sections, it is time to investigate the most important part of this model: the timing of the collapse of the regime. For simplicity, Flood and Garber (1984) solve this problem by introducing the concept of the \"shadow exchange rate\". Copeland (2000) defines this term as follows: Laurence S. 
Copeland, 2000, \"S\" is essentially equal to: where Kt is the so-called \"basic fundamental\", in which In this model, For equation (10), Now denote the timing of collapse as Under the condition of rational expectations, speculators have perfect foresight of the exchange rate in Also, because reserves are exhausted as soon as the collapse occurs, the money supply in the post-collapse period would consist only of domestic credit, which implies: where Also, recap the idea of equation (8) that: Since now Now substitute equations (13) and (12) into (11): Eventually, it is time to answer the crucial question of this model: the timing of collapse. According to Copeland (2000): \"the crisis will occur when the shadow exchange rate is the same as the fixed rate.\" As both of the variables are ready, the timing can be derived as follows: However, this is not the end of the story. A \"collapsing bubble\" term needs to be introduced into equation (15), and hence this equation is finalised: Why is this term necessary? Because speculators will not wait until the government has run out of reserves; instead, they tend to attack the financial market when reserves have fallen to a certain low point. This \"bubble\" term implies that the collapse always occurs before reserves are exhausted. In the next section of the essay, some empirical evidence is given. Two examples are discussed in this section. First comes the devaluation of the pound in 1967: On the night of 18 This decision followed weeks of increasingly feverish
Under Napoleon's headship there is no longer education for animals that are not pigs, and they become naïve The individuals lose their sense of self, as they are scared of rebelling against what is itself the Rebellion, and therefore they feel it must be the life that they rebelled in order to have. Even the limited language that they do have is used against them, Boxer's maxims of Yet his ultimate truth is perverted and oppressive, and not surprisingly approved by the pigs. The demonstrations and processions provide the animals with an opportunity to believe that they are free and can express themselves, yet this is strictly under the guidance of the pigs. The pigs also allow Moses, the raven, to tell the animals of his journeys to Sugarcandy Mountain, but again this false image of utopia allows them to believe in freedom and happiness and encourages them to work in the light of a better, even if intangible, future. Although Napoleon is not a specially gifted speaker, he has the young pig Squealer to talk for him. Squealer talks extremely manipulatively and powerfully. His first duty is to rationally explain the pigs' extra food as necessary to their learning and an essential factor in Jones not returning. Fear is an important part of Squealer's early propaganda, and it is so effective because the animals see language as transparent, innocently believing everything he says to be true. Perhaps Squealer's most powerful coup is his recreation of history to fit with Napoleon's present aims. Established or written history represents memory, which represents truth. So through Squealer's manipulation and mastery of language he has the ability to recreate the past, which leads to his control of the animals' memories and the party's unchallenged control of truth. 
Squealer is able to rewrite the history of events that the animals were even part of themselves, (54) As memory is based on truth, and truth can be known through language, the animals are convinced that Squealer's knowledge of language is so far superior to theirs that when he says Snowball was a traitor they believe it must be so. Such is the power of language that the animals' memories of the past can even be changed in the present. The animals have no proof of their convictions other than what they are told by Squealer. He refers to The written word bears extra power in a similar way to old Major's written commandments, and these are continually changed and adapted as the pigs become more and more human. The explanation for the changes to the commandments is given as the animals' poor memories, as their past values and beliefs fade away into oblivion. Language has power over memory and the animals simply (87) The animals become collectively reliant on the pigs and lose their sense of", "label": 1 }, { "main_document": "relationships they have to recognize them as such and have to be able to fulfill their contractual obligations\". In this light, institutional constraints are key to enabling trade and profitable economic exchange, as they mitigate the FPOE by providing \"rules\" to the \"game\". North, D.C. (1991), Different types of institutions - both governmental and private-order - have thus been formed throughout history to counter the FPOE. In Medieval Europe abuse of foreign merchants by local rulers was widespread - as noted by England's Edward I in 1283, as a result \"many merchants [were] put off coming to the land with their merchandise\" Institutions employing multilateral reputation mechanisms consequently evolved to constrain rulers' actions: a large subgroup of merchants would cease trade with a ruler if one of their members was abused. 
However, traders themselves did not always comply with the rules set by their organizations - in imposing an embargo on Norway in 1294, for instance, German towns had to post ships in the Danish straits so as to intercept any defecting German traders. Such costly means of enforcement were gradually countered by the establishment of merchant guilds, more complex institutions that heavily ostracized defecting traders as well as conditioning future trade with rulers on past protection. Inter-city organizations such as the German Hansa, or the subdivision of city administrations in Italian city-states, thus succeeded in mitigating the FPOE at both the ruler and inter-merchant level through their use of institutional constraints. As trade grew in scale, institutions had to be formed at the global level as well, to govern relations between merchants and overseas agents. Such institutional frameworks are found in 11 These traders again reduced the problem of trust by imposing constraints on each other through third-party enforcement (fear of collective punishment through the Maghribis' extended kinship networks ensured agents' honesty in trading, whilst the more individualistic Genoese system necessitated government enforcement and the creation of legal contracts to stabilise trade). In both long-distance and intra-European trade, therefore, institutions have been vital means of mitigating the FPOE, and of thus enabling capitalist gains through comparative advantage, division of labour, technological innovation, and specialization. North's insistence on the importance of institutions to economic development therefore seems well-justified. Greif, A. (2000), \"The Fundamental Problem of Exchange: a research agenda in historical institutional analysis\"; North, D.C. (1991). 
While such a focus explains the development of certain economies, however, what does it contribute to our understanding of According to Greif, \"once an institution has evolved in a society to mitigate FPOE, it can cause that society to evolve along a particular trajectory and in the long run explain distinct economic performances\" In this light, just as efficient institutions can trigger growth, inefficient ones can trap their users in detrimental economic habits; this is particularly dangerous because institutional change is a difficult and highly incremental process, due to the informal constraints - including deep-rooted cultural and", "label": 0 }, { "main_document": "forced to mix with other rejected peers who may mutually reinforce deviant or antisocial behaviour (Siegler et al, 2003). Undesirable peer status is also associated with poor academic performance and a higher dropout rate (Asher & Coie, 1990). Rejected children are more likely to repeat a year, be suspended or to play truant (Kupersmidt & Coie, 1990; cited by Siegler et al, 2003). Their grades have been found to worsen over time, and unpopular children who are aggressive are especially likely to be uninterested in school (Wentzel & Asher, 1995; cited by Siegler et al, 2003). Furthermore, socially withdrawn children tend to begin their careers later, are less successful and hold less stable jobs. Low work attendance, job satisfaction and productivity are also likely due to their social deficiencies. Indeed, rejected children experience more job dismissals due to negative behaviour than their more accepted peers (Janes et al, 1979; cited by Asher & Coie, 1990). It is suggested that unpopular children avoid going to school because they feel lonely and isolated due to peer rejection. 
School may be viewed as such an aversive experience that they are motivated to drop out (Kupersmidt, Coie & Dodge, 1990). The negative emotions associated with academic life may also prevent them from having the concentration and inspiration to seriously pursue scholastic excellence. This, in turn, affects their career prospects. Low peer social status in childhood is predictive of mental health problems in adolescence or adulthood (Kupersmidt, Coie & Dodge, 1990). 65% of psychotic servicemen had poor peer relations in childhood, whereas this was true for only 25% of non-psychotic servicemen (Roff, 1963; cited by Asher & Coie, 1990). In addition, individuals who later appeared on the psychiatric register received more negative peer nominations as children (Cowen et al, 1973; cited by Asher & Coie, 1990). Perhaps good peer relations help build self-esteem and provide the adaptive skills needed to prevent the emergence of psychopathology in vulnerable children. The pressure of isolation and rejection may be enough to unbalance a child predisposed to psychopathological behaviour (Garmezy, Masten, & Tellegen, 1984; cited by Asher & Coie, 1990). Other consequences of unpopularity in childhood include loneliness, depression and adjustment problems as adults. Aggressive children frequently become unhappy adults with conflict-filled relationships with their partners (Caspi et al, 1987). These children are also at risk of internalising problems, resulting in social anxiety and low self-worth as they grow older (Bowker et al, 1998; cited by Siegler et al, 2003). Furthermore, they report a lack of trust or loyalty. Finally, studies also show that rejected children are prone to delinquency and adult criminality. Amongst upper-class boys, unpopularity was found to be a predictor of delinquency (Roff et al, 1972). Additionally, military reports revealed that almost twice as many men with bad conduct were unpopular as children compared to men who had good conduct (Roff, 1961). 
It could be that aggressive, disruptive children are disliked by their peers and delinquency is merely a continuation of such behaviour rather than a result of poor peer acceptance. However, it is equally plausible that rejected", "label": 1 }, { "main_document": "The purpose of this assignment was to design and construct a Wheatstone Deflection Bridge and amplifier system in order to measure and analyse the change in output under loading. The first stage involved applying a force to the end of the cantilever beam by stacking washers onto the end. This enabled the sensitivity of the system to be calculated. The system was also analysed using a computer software package to record the output. It was possible to determine the resonant frequency of the system after it was disturbed by analysing a graph of output voltage against time. The system had a low pass filter added in order to reduce the noise that would otherwise have reduced the accuracy of all the readings. The investigation involved the use of a cantilever beam with strain gauges attached to the top and bottom at the same distance along the beam. The application of force at the end of the beam causes a variation in the resistance of the strain gauges. This variation can be measured by analysing the output voltage produced by the system, called a Wheatstone Deflection Bridge. The voltage must be amplified to a suitable level since the output of the bridge is in the order of millivolts. The Wheatstone Deflection Bridge set-up is shown in Figure 1. The signal must be amplified because the output is very small. The amplified signal can be displayed either on an oscilloscope or a digital voltmeter. The process is illustrated as a flow diagram in figure 2. The method section outlines how the system was used to measure the strain variation in the beam caused by loading the end of the beam with washers. 
As well as the analysis using the digital voltmeter, a computer package was used to record the change in output voltage with time. The software package is called LabVIEW. The bridge and amplifier system used in the first part of the investigation can be plugged into a computer and the system analysed by the program. The program allows the user to record the output voltage for a given length of time at any chosen sample rate. For example, 2000 samples collected at a sample rate of 1000 samples per second will cause the program to record the system for 2 seconds and collect enough samples to plot a very accurate graph of output voltage against time. The results can be used to calculate the resonant frequency of the system when disturbed and also the length of time it takes to stabilise. Figure 3 shows the Wheatstone bridge set-up with dimensions for the beam and the distance from the built-in end to the strain gauges. The beam is made of steel, which has a modulus of elasticity of 210 GPa. The strain is given by the following equation: where The dimensions were measured with a standard ruler. Using the dimensions from Figure 3 the equation becomes: This equation of The thickness of the beam is a more influential factor because the strain varies inversely with the square of this dimension. If we assume that
One of the most problematic issues in the world is undoubtedly the question of scarcity, that is, the allocation of scarce goods and services. Such a problem of scarcity is probably best solved by voluntary exchange between states. It is universally recognised that the ability to exchange or trade contributes greatly to greater economic efficiency and development. This is because trade enables states to exploit comparative advantage and the division of labour. Through trade, states would also benefit from the specialisation of goods and services as well as technological advancements. Hence, it would be foolish for merchants in one state not to trade with merchants in other states. However, a merchant first had to overcome the FPOE in order to enjoy the benefits of trade. In order to enter into a mutually beneficial exchange relationship, both sides must commit to abide by their contractual obligations. For instance, a lender will not lend without being sure that the borrower will not renege on the contract. A formal structure had to be formed so that merchants could trade freely without the worry of the other party reneging. Once a state could mitigate the FPOE, exchange of goods and services could take place and all those involved would benefit from it through the greater prosperity of the state. A merchant would not enter a mutually profitable exchange unless he is confident that the exchange can indeed make him better off, that is, unless he is convinced that the other party involved will act in a manner that will make him better off. The problem that arises here is that one of the parties could very easily renege and violate the contract. Quite basically, a necessary condition for exchange is that Such a setback is best explained using the theory of the Prisoner's Dilemma. The FPOE can be presented using the \"one-sided prisoner's dilemma\" (OSPD) or the \"Game of Trust\" as shown above. In this game, there are two individuals, Player I and Player II. 
Player I can either initiate exchange or not. When Player I decides not to initiate an exchange, both players gain ''. If Player I initiates exchange, Player II has the option either to exchange/cooperate by fulfilling his contractual obligations or not. If exchange is efficient, the parties will be much better off than would be the case if they did not exchange. Player I gains -W > 0 and Player II gets W>0. However, Player II can gain much more ( > W) by reneging, leaving Player I with <0, which would make him even worse off than he", "label": 0 }, { "main_document": "Perfection Hotels is a small UK-based hospitality company, currently operating 3 hotels in major UK cities: one each in London, Birmingham and Glasgow. All the hotels operate under the same brand, where they strive to provide \"the perfect hospitality experience\" to their guests. The hotel group is relatively new in the market, and has decided to grow and become a bigger player in the market through international expansion. However, due to the lack of international experience, the first country to expand into could not be very different from the UK. (More information about Perfection Hotels in Appendix 1 - Company Profile). Canada is the second largest country in the world, located north of the United States in North America. With a population of almost 33 million, the country has developed in parallel with the US both technologically and economically ( Canada became self-governing in 1867, but still has close ties to the United Kingdom ( These ties and the high level of development make the Canadian business environment quite similar to the UK environment, and therefore Canada was identified as an appropriate country to enter for Perfection Hotels' expansion plans. (More information about Canada in Appendix 1 - Country Profile). Perfection Hotels has been very successful in the luxury, 5-star market sector. 
This market is highly competitive, and the hotels are described in superlative terms and far exceed normal expectations in terms of design, level of luxury, service, elegance and uniqueness (Nebel Jackson and Haid (in Moore and Birtwistle, 2005) argue that luxury brands have a high status and possess a desirability that extends beyond their function. The target market for Perfection Hotels comprises both leisure and business travellers, but the company emphasizes the business market, both domestically and internationally. Expanding a hospitality operation internationally can be problematic, and in order to be effective and efficient in the task a company must respond to the opportunities, challenges, risks, and limitations posed by the macro business environment (Costa, in Wu Evans (2003, p.156) defines the macro environment as \"the broad environment outside of an organisation's industry and markets\", and Reich (1997) concludes that companies in general have very little control over it. The business environment in Canada is in many ways similar to that of the UK. Both countries are politically stable, and are ranked near the top in terms of level of democracy, absence of corruption, press freedom and civil/political rights ( They also have affluent, advanced industrial economies with a fairly high level of privatisation, and both Canada and the UK rank among the most developed countries in the world ( The structure of their economies is service-dominated, making them high-income countries where people enjoy very high standards of living ( A concern for the Canadian economy, however, is oil prices. Cold weather and large distances make the energy consumption in Canada double that of the G7 average. 
Thus, if oil prices remain high and/or increase in the future, people will be forced to spend more on energy, which in turn will affect GDP and the spending in other industries such", "label": 0 }, { "main_document": "The nature of the mind and its relation to the body (or matter in general) is a subject which bears witness to much debate. Since the writings of early philosophers such as Socrates and Plato it has been suggested that there are elements of the mind or soul which possess certain characteristics which are not shared with the extended world. This has led to much disagreement over the subject and given rise to many competing theories, each contending with its own problems. One such popular theory is that of Substance Dualism. The fundamental claim of the Dualist is that the mind and matter are two separate substances that are intimately related. This notion arises out of various claims about the different properties which can be attributed to each substance. For instance, it is commonly accepted that matter is extended. It occupies a place in space. Dualists claim that as the mind is not extended, and there is no place in space in which the mind can be located, it must be a different substance to that of matter. Dualists also point out differences in the epistemological nature of the mind compared to that of the material, as well as differentiating between the qualitative aspects of the mind and matter. Dualism has encountered various problems and criticisms in its development, which has led to many competing versions. This essay shall focus on one particular version: Cartesian Dualism, which is considered by many to be the most influential form of substance dualism. Rene Descartes' (1596-1650) dualism is most clearly argued for in his 'Meditations on First Philosophy'. Within the text he argues from several different initial premises in order to clearly illustrate his position. 
He distinguishes between what he holds as two different substances: the extended material world is made up of one substance, extended matter, and the mind is made up of a different substance, thought. Descartes goes on to argue that each of these substances mutually excludes the other. There is no object which can hold extension as an attribute whilst also holding thought as an attribute; therefore minds are necessarily distinct from bodies and other material objects. He begins his argument for dualism using what is commonly called his 'method of doubt', in which he calls into question the existence of everything which he has previously taken for granted. The result of this doubt leads him to dismiss the existence of his body and the external world using the idea of an evil deceiver. It appears to Descartes that there is only one thing to which he can attribute necessary existence; Descartes maintains that it is impossible to call into doubt the existence of his thought, and it must therefore exist necessarily. The existence of the body, and extended matter in general, can however be called into doubt. His conclusion is that, as it is logically possible for his mind to exist independently of the body, it follows that it is possible for the body to exist independently of the mind. Therefore the mind and the body are entirely distinct. This argument", "label": 1 }, { "main_document": "this with regard to employee participation. First of all, the success of the implementation of the new rights to information and consultation largely depends on the awareness employees have of their existence and the ways to set procedures in motion. If employees have no or limited information about their own rights and the key role they have in triggering them, this could prove to be a serious hurdle for them to enforce these rights (see also Hall and Terry 2004). 
In addition, even if employees are informed about their new rights, they might be sceptical about their real potential to introduce change. Storey (2005:4) advances this argument by illustrating: It can be concluded then that, although the regulations apparently introduce new rights for employees into the employment relationship, these rights may not initially have significant implications for workers, due to their lack of awareness of new statutory rights or scepticism about their enforcement. Being one of the sides to be consulted about the implementation of the ICE regulations, the CBI officially seems to support them, especially in the aspect that they allow a great scope of flexibility for employers. As the IRS Employment Review analysis confirms, 'CBI is pleased that the proposals encourage voluntary arrangements and that they do not allow 'small groups of employees to overturn successful consultation arrangements' (781:8). Still, there is some scepticism among employers as well. Storey (2005) notes that the reasons for this scepticism stem from various sources, including unwillingness to accept initiatives prompted by employees, the dubious rationale of investment in information and consultation mechanisms, and the desire to keep the prerogative of managing processes for managers. Still, the inherent scope of flexibility in the new legislation guarantees employers the right, as Hall (2005:125) points out, 'not [to] act unless 10% of their workforce triggers negotiations under the legislation'. Moreover, the absence of serious penalties for non-enforcement of the new rights on the side of employees (maximum fine of All this illustrates that, although employers may not feel the impact of the new provisions to a great extent or even at all, what is more important is that this would also lead to negligible consequences for employees and could deny the workforce the chance to balance the power within the employee-employer relationship. 
On the face of it, most major union bodies in the UK seem generally in favour of the new regulations as well. Their position can be summarized by TUC's General Secretary Brendan Barber, who claims that 'it's an opportunity for both employees and employers to improve the quality of working life and boost productivity' (quoted in IRS Employment Review 781:8). However, TUC's positive outlook could be largely attributed to the partner role it played in the formulation of proposals by the DTI, as arguably the attitude of trade unions towards the contents of the newly adopted regulations remains quite ambivalent. As Hall notes (2005:124) 'the fact that the Regulations establish information and consultation rights for employees, irrespective of union recognition, could present significant new opportunities (...) in terms of building influence in unorganised workplaces and aiding recruitment.' As far", "label": 0 }, { "main_document": "There are many factors that affect the geometries of transition metal complexes, including the relative energies of the metal's d orbitals, the number of d electrons the metal has, the oxidation state of the metal and the size of both the ligands and the metal. In this experiment, three d 1.2080g of cobalt chloride hexahydrate was dissolved in 140mL of ethanol, in a 250mL 2-necked round bottom flask, fitted with a nitrogen inlet. 4.092g of triphenylphosphine was added to the bright blue solution. The flask was flushed with nitrogen, and heated in an oil bath to 35 C. The solution was stirred magnetically, while a nitrogen atmosphere was maintained and the temperature remained between 30 and 40 C. A solution of 0.1881g of sodium borohydride in 10mL of ethanol was added to the round bottomed flask, causing the solution to gradually turn from dark turquoise through to brown/black in colour, and small bubbles of gas were evolved. 
The solution was stirred for a further 15 minutes after the addition, then the fine brown crystals were filtered off on a glass sinter with suction. The crystals were washed with three 30mL portions of ethanol, then two 20mL portions of water, followed by a further two 20mL portions of ethanol. The crystals were dried in a vacuum desiccator for an hour before the yield and melting point were determined. Figure 1 shows the IR spectrum of the product; the magnetic susceptibility of the complex is shown in table 1. 45mL of ethanol was placed in a 100mL round bottom flask, and 2.8178g of triphenylphosphine and 1.2076g of nickel (II) chloride hexahydrate were added to it. Two anti-bumping granules were also placed in the flask and the solution was refluxed in an oil bath for 30 minutes. The apparatus was then set up for downward distillation and 30mL of ethanol was distilled off. The black crystals were filtered off on a small sintered glass filter with suction and washed with 15mL of 95% ethanol, and 15mL of diethyl ether. Once the crystals were dry the melting point and magnetic properties were determined (table 1) and the IR spectrum of the compound was obtained (Figure 2). 1.4022 g of nickel (II) nitrate hexahydrate was dissolved in 16mL of 95% ethanol in a 100mL round bottomed flask. 0.8105g of finely ground sodium thiocyanate and two anti-bumping granules were added to this. The mixture was refluxed for 20 minutes in an oil bath. The mixture was then cooled on ice, and the sides of the vessel were scratched with a glass rod to encourage crystallisation of sodium nitrate. The crystals of sodium nitrate were filtered off through a sintered glass crucible. 2.7962g of triphenylphosphine was dissolved in 25mL of propan-2-ol in a round bottomed flask, by heating with two anti-bumping granules with a reflux condenser attached. 
The nickel thiocyanate filtrate from the previous part of this experiment was heated on a hot plate, and slowly added to the triphenylphosphine through the top of the condenser. This caused the colourless solution to turn a vibrant red colour. The flask was then removed", "label": 1 }, { "main_document": "Since World War II, organochlorine pesticides have made continuous major improvements to the quality of life experienced by millions of people worldwide. One of the first mass-produced, highly effective families of pesticides, organochlorines were an incredible chemical breakthrough - easy to manufacture, highly efficient and cheap. They are certainly one of the most historically significant. The persistence of organochlorines was initially thought of as a huge advantage of the family, often keeping an area free from pests for years; however, once the implications were realized the associated problems began to be appreciated. It was found that certain organochlorines were effectively inert in the environment, which led to huge bioaccumulation and the poisoning of thousands of birds, amongst other problems including mutagenic and teratogenic effects on people. This chronic toxicity becomes apparent only many years later, and so affected creatures cannot be rapidly diagnosed. In the search for their replacement, alternatives with much shorter degradation half-lives were found, one such replacement being the organophosphates. Organophosphates (OP's) are a group of chemicals developed for use as vector-control insecticides during the 1930s. Vector control techniques are employed for the containment of epidemiological outbreaks that are relayed by a carrier. For example, mosquitoes carry malaria and it is easier to eradicate the transmission vector than it is the disease. Generally speaking, this class of insecticides is considered safe due to the short half-lives, which limit the risk of bioaccumulation. 
Several rapid degradation mechanisms exist for organophosphates in nature, including biodegradation, basic chemical hydrolysis and photolysis, which operate to leave only harmless alcohols, phosphoric acid and derived ions. However, despite rapid degradation, the mode of action of these OP's is very dangerous and rapidly fatal to humans. This acute toxicity has resulted in the World Health Organisation (WHO) classifying the toxic effects of different organophosphates from class Ia (extremely hazardous) to class III (slightly hazardous), leading to their banning and strict control in the regulated western world. Structurally, organophosphates are the bis(alkoxy) phospho-esters of an organic functionality. The nature of this functionality actually plays no deciding role in the toxicity of the insecticide, but modifies the absorptivity, specificity and stability of the agent. They are subject to transesterification at phosphorus due to the strong dipole, the stability of the leaving groups and the strength of the new bond. This feature leads to the mode of action and the acute physiological danger associated with organophosphate poisoning. Identified in 1914 by the English neuroscientist Sir Henry Hallett Dale, acetylcholine was found to be an intrinsic neurotransmitter involved in the contraction of animal muscles. Acetylcholine works to transfer the nerve impulse across a synapse between neurons through diffusion to a receptor. The breakdown of acetylcholine is important because the nerve impulse, being a transitory event, must be quelled quickly - when this decomposition is slowed, the nerve junction stays in a continued state of excitation, causing constant muscle contraction. This breakdown occurs at the post-synaptic cleft, and is brought about by the enzyme acetylcholinesterase, which acts by hydrolysis to leave inactive metabolites. 
It is here where organophosphates present their", "label": 1 }, { "main_document": "are known as the normal modes, or harmonic frequencies, of the vibrating string. One consequence of this expression, and of the boundary conditions, is that the string can only support whole numbers of half wavelengths. Hence the value of n is also the number of half wavelengths present on the string at that harmonic frequency. The case n=1 is referred to as the fundamental frequency of the string. Since there is no energy transfer in a standing wave, the string must be supplied with energy by an external source in order for it to oscillate. This external source can be, for example, plucking of the string, or the use of an audio oscillator attached to a pickup to produce sound waves which cause the string to vibrate when incident upon it. The damping effect of air resistance causes the string to lose energy as time progresses, and so the oscillations of the string will die away with time. In order to maintain the oscillation of the string a continuous input of energy is required, and hence the second method mentioned here is more useful for an extended investigation into the harmonic frequencies of a vibrating string. The waves produced by the pickup will produce forced oscillations of the same frequency in the string when incident upon it. This driving frequency will not necessarily coincide with one of the natural harmonic frequencies of the string, and if this is the case then the amplitude of the forced vibrations of the string will be very small. As the driving frequency approaches one of the natural harmonic frequencies, the vibrations of the string will begin to increase in amplitude, reaching a maximum when the driving frequency equals one of the natural harmonic frequencies of the string. 
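These normal-mode frequencies can be illustrated numerically with the standard relation f_n = (n/2L)·sqrt(T/mu); the string's length, tension and mass per unit length below are invented values for illustration, not figures from the text:

```python
import math

# Harmonic (normal mode) frequencies of a stretched string:
# f_n = (n / 2L) * sqrt(T / mu), where n half-wavelengths fit on the string.
# The parameters below are invented for illustration.
L = 0.5      # length of the string in metres
T = 50.0     # tension in newtons
mu = 1.0e-3  # mass per unit length in kg/m

def harmonic(n):
    # n = 1 gives the fundamental; higher n give the overtones
    return (n / (2 * L)) * math.sqrt(T / mu)

fundamental = harmonic(1)
print(round(fundamental, 1))       # fundamental frequency with these values
print(harmonic(2) / fundamental)   # 2.0: harmonics are integer multiples
```

Because the amplitude of the driven string peaks whenever the driving frequency coincides with one of these f_n, sweeping the oscillator frequency and watching for that peak picks the harmonics out.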
This effect is known as resonance, and it is this effect that can be used to identify the harmonic frequencies of the wire, as the peak in the amplitude of the forced oscillations can be easily detected. In order for this method to work satisfactorily, both the pickup and the device used to measure the amplitude of the forced oscillations must be placed at antinodal positions along the wire. If the pickup is not placed in such a way then the standing wave will not be produced, or will have a smaller amplitude, as the energy input would be partly at a nodal position where it cannot be used to produce a wave. Placing the detecting device at an antinodal position means that it receives the most powerful signal from the standing wave, and hence will be able to detect any amplitude changes more easily. Substituting the definition for This can be used as the basis for several investigations into the properties of standing wave harmonics. It can be seen from this expression that if the values of successive harmonic frequencies are measured whilst the length of the string is kept constant, then a graph of f This allows the velocity of the waves on the wire to be calculated. It can also be seen that if the value of", "label": 1 }, { "main_document": "for younger workers continues to raise concerns. The 'Employment in Europe 2005' report states that employment levels for people in the age group of 15-24 have recently deteriorated, with the increases in employment rates of the 1990s being replaced by declines from 2002 onwards, a process more marked for young males than young females. At the same time the problem of inclusion and participation of young people in the labour market retains its significance and continues to pose questions. 
Crespo and Serrano (2001:1) draw attention to issues like 'lower social value placed on young people's work, the lower expectations of young people (which means that they are likelier to accept jobs with poorer conditions), and the potential danger of their work ethic suffering.' It seems that the task of ensuring adequate types and forms of work for younger employees, and thus guaranteeing their social inclusion, becomes a principal one. In this sense, the greatest challenge seems to stem from the apparent contradiction between some of the goals of the EES. On the one hand, the strategy is clearly geared towards combating (long-term) unemployment and ensuring full employment (including for young people), according to pre-defined targets. On the other hand, some of the quantified targets included in the 2003 EES Guidelines are aimed towards decreasing the numbers of early school-leavers and raising the numbers of people with completed secondary education. The relevant targets state that 'by 2010, at least 85 percent of 22 year olds in the European Union should have completed upper secondary education' and 'achieve by 2010 an EU average rate of no more than 10 percent early school leavers' (quoted in Watt 2004:128). Although these targets are clearly meant to improve young people's potential employability in the future, they may lead to more people continuing to higher levels of education and actually staying out of the labour market for much longer. It seems that striking the balance between achieving sufficient educational levels and adequate entry into the labour market is quite difficult, and is what currently prevents the EES measures supporting young workers from being fully effective. 
Crespo and Serrano see the activation-based measures of the EES as a possible solution to social inclusion issues, one more likely to be successful with young workers 'largely because it is easier to make them accept what other groups might consider to be a rather questionable form of intervention in view of its coercive and paternalistic nature' (2001:1). Another measure that is predicted to have a positive impact on youth participation in the labour market is one mentioned in the 'Employment in Europe 2004' report: the recently adopted European Youth Pact, which streamlines the measures supporting young employees with the more general targets of the EES. Still, as noted, measures aimed at improving the activity rates of young workers fall behind their goals, and efforts to coordinate education and training with career planning for junior workers should be strengthened. As the 'Employment in Europe 2005' report concludes, more action is needed to support the pursuit of the 'non-linear' career", "label": 0 }, { "main_document": "even in the presence of very acidic fruits. They have a favourable effect on the product, prevent the flow of aromatic compounds from the fruit to the syrup and moderate the acid taste better than does sucrose at a given sweetness level. A strong sucrose concentration in the syrup creates a higher gradient between the syrup and the fruit, and favours diffusion of fluids from the fruit to the syrup. During storage, the dry extractable matter diffuses from the syrup into the tissues, which results in an increased drained weight up to the establishment of the final equilibrium. Therefore the yield (drained weight) is dependent upon the packing medium composition, the particle size, and the shape of the fruit. Blanching is a treatment in hot water or steam, followed by rapid cooling, given to vegetables and some fruit. Blanching removes gases from within the tissue and softens the product. 
Blanching makes the product easier to fill into the can and obtain the correct weight. The removal of the gas also reduces the oxidation of the product, maintains vacuum in the can, and prevents corrosion. Blanching gives the product another washing treatment and inactivates enzymes which may cause deterioration of the food. Enzyme inactivation is not as important for canned foods as it is for frozen foods, as canned foods receive a far greater heat treatment during thermal processing of the can. Typical blanch times in near-boiling water are 60 to 90 secs for small objects, such as green peas and diced carrot, and up to 3 min for larger pieces. Water blanching equipment is of simple design, robust and the least expensive to buy. The water flows counter-current to the product flow and is continuously recycled. The water heated by the hot blanched product in the cooling section is cooled in a heat exchanger, which in turn is used to heat the water in the preheat section, providing economy in the use of water and energy. Steam blanching is quite common since almost all canneries have an adequate supply of low-pressure steam. The simplest design has a metal mesh conveyor that moves through a tunnel with steam jets located under the conveyor. To minimise the loss of steam the two ends are closed by curtains. This method is inexpensive and the equipment easy to manufacture, but it is prone to temperature variations due to the effect of air currents. Blanching under higher steam pressures increases the temperature, improves steam convection, increases the blanching rate, decreases steam loss, and produces a greater reduction in microbial contamination. Airlocks limit the loss of steam, and further heat loss by radiation is minimised by appropriate insulation. The primary role of the filling operation is to place a specified quantity (e.g. weight, number, or volume) of the product in the container. The quantity that can be added is primarily dictated by the size of the container. 
Precision and accuracy are dependent upon the type, state, shape, and size of the product. For solid products, the particle size has a direct influence on the fill precision (i.e.", "label": 0 }, { "main_document": "\"all evidence upon which the Crown proposes to rely in a summary trial\" The guidelines are not binding It has been observed that the level of cooperation between the CPS and the defence solicitors is minimal Para 57 AG's guidelines Attorney General's Office, Magistrate. Comment based on observations over a 9-year period However, the guidelines are supported by case law that actively encourages disclosure. Prior to the CPIA, it was held in In the case of This conclusion was reached on the application of [1991] 155 JP 569 Ibid per McCullough J [1999] 2 Cr. App. R. 276 Ibid per Collins J at p 282 Sufficient protection was guaranteed by the possibility of an adjournment to give the defendant an opportunity to meet the case p 52 Spencer J While it has been held that the rules do not compromise the defendant's right to a fair trial, it remains possible to argue that they taint the overall impression of fairness and justice. Following One Magistrate commented that this is frequently the case, and where proceedings are adjourned it is mainly because the CPS has not passed information to the defence. However, adjournments are considered an \"unsatisfactory solution\", disrupting the preferred method of continuous trial They can lead to delays of four to six weeks p 442 Redmayne M Magistrate Article 6(1) ECHR Note 9 supra Practice also suggests that there are issues surrounding the defendant not having enough evidence upon which to enter a plea. This was the issue in one case at Norwich Magistrates' Court, where the CCTV footage of the accused allegedly stealing items from a clothing store was not disclosed. As a result the case was adjourned for two weeks, with an order that the tape be disclosed to the defendant and the court within the following seven days. 
This case reflects the application of the decision in There has been much commentary on this area, and it is generally accepted that a plea can only be based on an informed choice, where the defence lawyer has received sufficient information to be able to properly advise the client [2000] WL 1480140 Ibid para 34 per Pill LJ p 142 Case Comment In addition to material concerning the prosecution case, the CPIA also regulates unused material, an issue which was \"hotly debated\" in the 1990s in the light of the aforementioned miscarriages of justice Any impression that it is difficult to reconcile with the adversarial nature of the criminal justice system is thus outweighed by the moral repulsion at convictions based on the suppression of evidence The spine of the argument that the CPIA protects the defendant's right to a fair trial is the fact that prosecution disclosure redresses the inequality of resources between the two parties, ensuring that the inability to fund and administer independent investigations does not prevent the defence from adducing evidence. This is supported by the case law of the European Court of Human Rights p 55 Roberts P & Zuckerman A p 18 McEwan J Henceforth ECtHR para 34, p 247 As such, the", "label": 1 }, { "main_document": "lower the starting level of GDP (the This property of the Solow model is derived directly from the assumption of diminishing returns to capital; economies that have less capital per worker tend to have higher rates of return and higher growth rates. The conditional convergence hypothesis says that per capita incomes of countries that are identical in their structural characteristics converge to one another independently of their initial conditions ( Supporting evidence is found in several articles It is worth mentioning, however, that this supporting evidence is to a large extent consistent with the club convergence hypothesis as well. 
The club convergence hypothesis requires that the initial conditions are similar too, whereas absolute convergence is the case where countries converge to one another independently of both initial conditions and structural characteristics. Barro and Sala-i-Martin studied the behaviour of US states since 1880, regions of Japan since 1930 and eight European countries since 1950 Regressions showed a significant negative correlation between growth rates and initial GDP. This result is consistent with the Solow model. The speed of convergence across the data sets is also surprisingly similar. The convergence coefficient is around 2-3 percent per year, which implies that it takes 25-35 years to eliminate one-half of an initial gap in per capita incomes. This evidence is consistent with the Solow model with a capital share ( It is important to keep in mind that the study is done on countries with relatively similar structural characteristics. One of the first studies on convergence, by Baumol, included a sample of 16 industrialised countries and found evidence of perfect convergence However, De Long revised the regression properties due to what he argued were bias and measurement errors, and found that there exists practically no convergence Many cross-sectional studies provide evidence of conditional convergence, but on closer examination one still finds a growing divergence between the rich and poor countries in the world. An example of this is provided by Kremer, who discovered absolute divergence in the majority of developing economies Romer also shows that the average growth rate of the richest countries by far exceeds that of the poorest ones Mankiw Barro 1991, Mankiw et al 1992, Sala-i-Martin 1996. Barro et al 1991. Barro et al 1991. Baumol 1986. De Long 1988. Kremer 1993. Romer 1986. Mankiw 2000. 
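The quoted half-life follows directly from the convergence coefficient: a gap decaying at rate beta per year is halved after ln(2)/beta years. A quick check of the 2-3 percent figures cited above:

```python
import math

# Half-life of an income gap closing at convergence rate beta per year:
# gap(t) = gap(0) * exp(-beta * t), so half the gap is gone at t = ln(2) / beta.
def half_life(beta):
    return math.log(2) / beta

print(round(half_life(0.02)))  # years at a 2% coefficient
print(round(half_life(0.03)))  # years at a 3% coefficient
```

This gives roughly 35 and 23 years respectively, broadly the 25-35-year range the studies report (the exact figure depends on the estimated coefficient).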
To test the empirics of economic growth theory, I have chosen - as several others When testing for the determinants of economic growth I have estimated a regression of the averaged growth rates for real GDP per capita ( I have chosen to estimate my regression using secondary school enrolment as a proxy for human capital investment, and since this is a flow variable I can use the average value over the chosen time period. In the literature there are examples of other proxies for human capital, such as literacy rates, school expenditure and so forth; however, for the chosen countries in our sample we could only find suitable data for secondary school enrolment. For example Barro 1991. The data in the large 54-country sample was retrieved from", "label": 0 }, { "main_document": "Hospitality can be divided into three main areas: Social, Private and Commercial, and in each area the host can have different reasons for being hospitable. The social and private sectors are generally seen as carrying out genuine acts of hospitality; however, commercial hospitality can be viewed as having ulterior motives. As Lashley writes: 'In some cases acts of hospitality can be seen as calculative e.g. the business lunch or office Christmas party are not primarily redistributive or undertaken for reasons which primarily value 'generosity and good behaviour as host' (Lashley, 1999:52). However all areas of hospitality have similar requirements. For example, as King stated: 'Successful hospitality in both the commercial and private spheres include knowledge of what would evoke pleasure in the guest' (King, 1995:229). Whether or not a commercial host offers genuine hospitality to guests, and the extent to which a commercial host needs to offer 'genuine' hospitality, is the question upon which I am going to base this essay. Commercial hospitality is a diverse and worldwide industry playing a key part in the economic and cultural development of a country. 
There are many ways of defining hospitality, but I think Lashley's definition is best: 'Hospitality can be defined as services providing food, and/or drink and/or accommodation in a variety of establishments and settings' (Lashley, 1999:49). It is important, however, to note that a big difference exists between social, private and commercial hospitality. For example, social and private hospitality are fundamentally supply led, meaning that the host provides guests with the food he/she chooses. Social and private hospitality is generally provided on a small scale, making it personal and unique. In complete contrast, commercial hospitality is generally large scale, making it much less personalised or unique. Nevertheless, it is important to remember that 'Commercial hospitality is not simply domestic hospitality on a large scale but it is business driven and should not make any excuses about its underlying business ethic' (Lashley and Morrison, 2000:191). Commercial hospitality is demand led, meaning that the experience provided is pre-planned with the customer. The fact that 'The commercial and market-driven relationship allows the customers a freedom of action an individual would not dream of demanding in a domestic setting' (Dr Johnson, 2000:52) is also an important point when defining the differences between private and commercial hospitality. Below is the Venn diagram devised by Conrad Lashley to show the relationship between social, private, and commercial hospitality in visual form: The diagram (Lashley 2000:50) shows that, despite differences, social, private and commercial hospitality are all connected in the sense that they must all meet similar criteria in order to function properly. An example of such a criterion is that, to manage any successful hospitality type, the host must make the guest feel comfortable and able to enjoy the experience. 
In order to find an answer to the title question, it is important to define 'genuine' hospitality, which is when the host provides for his/her guests out of real concern for their well-being and enjoyment rather than seeking anything in return. Many argue that", "label": 1 }, { "main_document": "on the relative position of the researcher and the researched in social formation, have opposed this view (Burgos, 1989 cited in Phoenix, 1994:50). They believe that simply being women discussing 'women's issues' in the context of a research interview is not sufficient for the establishment of rapport and the easy flow of an interview (Phoenix, 1994:50). Shostak had challenges in establishing friendships with some of the women. For instance, efforts at building a friendship with Hwantla never materialized, as stated by Shostak: 'yet most of the time she seemed reluctant to invest much of herself in our work, even less in her relationship with me' (1990: 35). The researcher, however, acknowledged that Hwantla was preoccupied with personal problems. Further to this, the researcher's account that women participated in the research because of monetary incentives and an opportunity for them to talk to someone means that the woman interviewer - woman interviewee situation does not always produce rapport through gender identification (Phoenix, 1994:55). The power positions between the interviewer and the interviewee are not fixed dichotomies; positions can shift over the course of the interview (Phoenix, 1994: 55). A good example is the relationship between Shostak and Nisa. During the interview process, Nisa felt she had the knowledge, as information was sought from her (Shostak, 1990: 40). In interpreting and producing data, the researcher has the power to determine what is relevant or not relevant (Phoenix, 1994:55). This is evident in the researcher's expression of concern over the accuracy of Nisa's accounts (Shostak, 1990: 350). 
Believing participants has been identified as a controversial aspect of feminist research. According to Reinharz, social interaction involves a certain amount of deception because science relies on uncertainties; this, however, does not imply that feminist researchers believe the women they interview all the time (1992:28). I will suggest that Shostak's action finds support in this position. A further example of power relations can be seen in the ability of the researcher to provide incentives for interviewees and other people in the community. This is in line with Golde's suggestion that fieldwork should encompass some form of reciprocity, meaning that researchers should be able to offer services or materials in exchange for the privilege of studying and disrupting other people's lives (1986:73). This phenomenon did not work out well for Shostak, as she complained that it was becoming impossible to sustain the villagers' demands. In addition, this had implications for life after the researcher leaves the community, as the villagers live a communal life, sharing resources and occasionally working in exchange for goods (1990:27). Although Golde's ideas are useful, they do not take into account the exploitative nature of the research process (1986:73). The relationship between the researcher and participants is based on engagement and attachment, where participants are at risk of manipulation and betrayal by the researcher. The only reward for the participant is the researcher's undivided attention during the interview (Stacey, 1991:114; Patai, 1991:142). In the case study, Shostak developed a friendship with Nisa, but in evaluating their friendship she felt detached (1990:357). 
This action provides evidence that", "label": 0 }, { "main_document": "UHT is a continuous process which involves rapid heating to high temperatures about 140 The UHT process is usually carried out above 135 It consists of 4 steps. The product is then placed in aseptic packaging that is commercially sterile and has a shelf life of approximately 6 months (The aseptic packaging is sterilized using 30-35% hydrogen peroxide at 70-80 Prior to milk treatment the tube is sterilized by pumping hot water at 130 Safety of the product is key. The Z value for Alternative processing conditions would be 131 141 Ideally the processor would want a product that does not cause a lot of fouling, and which has little sedimentation and no gelation of the product after processing. The first aim is a major problem because the high temperatures used often result in a great deal of fouling. Steam tables are important because they enable one to establish an equilibrium temperature-pressure at which the milk is heated to the necessary temperature without boiling (Milk boils at less than 100 To develop a working understanding of the Ultra high temperature treatment process by the indirect method. To produce a commercially sterile product with a shelf-life of 6 months. To consider the advantages and disadvantages of the UHT processing technique over in-container sterilisation. Please see the procedure detailed in the practical booklet. The raw milk must satisfy very exacting demands as regards its quality. Alcohol stability is an indirect method of testing for protein stability, since the product is exposed to high temperatures and must be capable of being stored for a comparatively long period of time. If it is unstable, it coagulates or curdles with very low concentrations of alcohol. From the lethality tables a temperature of 142 1. 
The very first milk to be processed tasted very creamy and smelled strongly of sulphur. After about 15 minutes, however, it had a slight toffee flavour and still smelled very strongly of sulphur. At a processing temperature of 120 This test is usually used to check for the molar concentration of solutes present in an aqueous liquid, or whether the milk has been 'watered down'. Milk that has been watered down has a lower value. This is because the presence of solutes depresses the freezing point. Since this was an indirect UHT treatment, there was little likelihood that any added water remained in the milk, as can be the case for direct UHT treatment, where steam is used to heat the milk and is later removed by flash cooling. During heating, if the components of the milk are well distributed, the flow through the tubes is better and may result in less fouling. The products processed at higher temperatures had a cooked flavour. This comes from the sulphydryl compounds in the milk. They are packaged in aseptic containers, which range from tetra packs lined with sterilized foil, plastic bottles or glass bottles (in cases where the milk is processed within the glass, which is its final packaging), cans and flexible pouches. Browning, oxidation of sulphydryl groups, loss of vitamin C and lipid oxidation. Fouling, need", "label": 1 }, { "main_document": "prices, organisation to provide convenient purchase and fulfill the transaction (Tapscott et al., 2000). 'Selection and convenience' is the main theme of aggregators. Travel aggregators package tourism products according to average travelers' needs to meet a large group of customers (Paraskevas, 2004). Travelocity is a web-based travel agency owned and operated by Sabre. It concentrates on selling flight tickets, hotels and car hire, and provides information on destinations and special offers (Forrester Research Inc, 2001). Integrator is adapted from the 'value chain' of Tapscott et al.'s (2000) typology. 
The first task of an integrator is to identify customers' needs, then design and build a product to meet specific customer requirements. Travel integrators design and customize holidays for individual customers, who are driven by value. Hotels.com is a subsidiary of IAC/InterActiveCorp and a leading provider of lodging worldwide. It offers a one-stop shopping source for travelers and reservations (Hotels.com). On the Internet, prospective consumers communicate via Web sites and convert interest into action and sales (Morgan et al., 2001); an effective Web site leads to the success of the business (Jeong, 2003). Besides this, the considerable costs in setup, advertising and maintenance, as well as the fierce Web competition, make Website evaluation necessary (Tierney, 2000). Moreover, analyzing the effectiveness of a site is critical in assessing and revising the strategy to overcome problems (Chaffey et al., 2000). There are a variety of ways to evaluate website effectiveness. For example, Bell & Tang (1998) set ten factors for evaluation - access to the Web-site, content, graphics, structure, user friendliness, navigation, usefulness, unique features, online transactions and site usage fee; Abels et al. (1999) develop user-based design criteria including six measured elements - use, content, structure, linkage, search and appearance, in order of importance; Wolfinbarger and Gilly (2003) extract four factors contributing to 'eTail quality', which are fulfillment/reliability, Website design, privacy/security and customer services; Chaffey et al. (2000) consider a '6C' strategy critical to Website effectiveness, which includes capture, content, community, commerce, customer orientation and credibility. In this case, the user-based design criteria (Appendix 2) are adopted for evaluation. To consider users in the web design process is to stand in the position of prospective customers; evaluating a Website from the consumer perspective enables organisations to approach consumers and satisfy them. 
Given that no method is perfect, some elements of other criteria will also be considered during the evaluation. Branding is the process of developing a recognized name for the goods and services of one seller (Buhalis, 2003). Generally speaking, a successful brand offers the buyer or user unique added values that match their needs and keep it competitive (de Chernatony & McDonald, 1992). It is especially important for services and intangible products. Similarly, Internet brand names, which are a substitute for physical facilities in trust building, matter even more. Furthermore, brand recognition is important in building trust (Turban et al., 2004). According to Lendrevie and Levy (1999), one of the four factors that enhance a Website's trust development is the 'predictability factor', which includes brand reputation and brand credibility. Normally, there are three ways to build a brand on the Internet (Chaffey et al., 2000). The most common one is to use the established brand,", "label": 0 }, { "main_document": "applied with regard not only to the economic, but also to the environmental objectives of the Treaty\" In It is not yet evident whether this choice has been consciously made. Michael Jacobs - \"The Green Economy\" (1991), quoted in Joanne Scott - \"EC Environmental Law\", page 11. Martin Wasmeier - \"The Integration of Environmental Protection as a General Rule for Interpreting Community Law\" (2001) 38 (1) CML Rev 159. Quoted in N. Notaro - \"The New Generation Case Law of Trade and Environment\" (2000) 25 (5) ELR 467. The role of the courts must not be underestimated regarding the rise of environmental law to a specific policy area. The ECJ's record is \"almost entirely positive\" For example, in The ECJ has been accused of excessive activism, especially in cases such as Here, a Danish rule that drinks should be sold in reusable bottles was challenged as contrary to Article 28. 
The Court held that the \"mandatory requirements\" relating to non-discriminatory rules outlined in This was extended in A ban on imported waste into the Walloon region of Belgium was justified because of the \"special character\" of waste. Moreover, all the decisions mentioned point towards a green outlook on the part of the European courts. J. H. Jans - \"The European Convention and the Future of European Environmental Law\", page 86. Many of the legal principles analysed so far mean nothing for the environment unless they are implemented. The implementation process has improved due to the creation of the \"New Legal Order\" in This is enhanced by the concept of direct effect that emerged in Environmental provisions can be relied upon by individuals if they are unambiguous and confer individual rights. Directives are only vertically directly effective, but fortunately, the definition of an emanation of the state has been widely construed. Nevertheless, direct effect is not universal and it has been criticised that the ToA \"failed to establish a citizen's right...to a healthy environment\". Directives are \"flexible means of ensuring harmonised standards\". Hence, they are often vague, such as the Bathing Water Directive. Other directives can be unimplemented altogether, including the Drinking Water Directive. Another difficulty centres on enforcement. The disparities between member states' procedures support the conclusion that \"poor enforcement is the black hole in the EU policy process\". N. Notaro - \"The New Generation Case Law of Trade and Environment\" (2000) 25 (5) ELR 467, page 468. Wolf & Stanley - \"Environmental Law\", page 75. Council Directive 76/160/EEC. Case C-337/89 Grant, Matthews & Newell - \"The Effectiveness of European Union Environmental Policy\", page 203. The accession in 2004 of ten new member states creates significant challenges regarding harmonised environmental protection. 
The Commission announced that it \"is of the view that it is one of the main tasks of the candidate countries to implement EC environmental legislation\". This could prove troublesome, given the emergent nature of industry in certain Central and Eastern European countries. Moreover, \"their challenge is to ensure that they do not repeat the two decades of environmental neglect that occurred in Western Europe\". Jos It is vital that this does not", "label": 1 }, { "main_document": "impact of iron deficiency on children's school performance in India and Indonesia, where there were significant positive effects, and in Thailand, where the results were contradictory. Jamison (1986) found a positive correlation between height for age and school performance in China. Similarly, Moock and Leslie (1986) have found height for age to be a robust indicator of school enrolment. Analyses by Colclough (1982), Psacharopoulos (1994), World Bank (1993), and the United Nations Development Programme (1990) are based on studies with a wide range of results, some of which indicate little or no effects. Not only do developing countries have lower levels of health, but the nature and incidence of diseases are different and cause greater damage, as has been found by the World Bank (1993); Dean Jamison et al. (1993); and Pinstrup Anderson et al. (1993). The plethora of work undertaken in the area of health and economic development speaks to its critical importance in the world today. However, the advance has been recent; until the last quarter of the century, it was the returns to education on which a much greater amount of work was done under human capital. Nonetheless, the fact remains that a healthy population is a prerequisite for successful economic development, and the interrelationships therein are increasingly gaining special interest. 
Following the examination of prior empirical work in the previous section, both institutional and fundamental variables are incorporated in the model to test first the determinants of health, including income, education and other factors, and then briefly to discuss its causal relation with education and economic growth by using it as an explanatory variable. Cross-country regression models for the world are constructed and the estimation method applied is Ordinary Least Squares (OLS). As there are several equations estimated with different sets of independent variables, the basic multiple regression model used could be represented by the form: where y is the dependent variable. The definitions of variables used in the analysis are provided in table 1. The popular indices used for measuring health under the nutrition-based efficiency wage theory are per capita caloric intake, body mass index and so on. However, for cross-country analysis, two kinds of data are frequently used: life expectancy and mortality rates. Life expectancy has wider coverage (adult mortality rates give the probability of dying between the ages of 15 and 60) and thus is more appropriate and shall be used in this analysis (as a dependent variable for the first set of regressions and then as an explanatory variable for the second and third regressions). To study the causal relation, the average GDP per capita growth rate (in the case of economic growth) and the average literacy rate (in the case of education) are used as dependent variables. The data has primarily been obtained from the World Development Indicators, the World Bank database. A few of the variables for which data was missing (HIV prevalence rates, immunisation rates, rates for access to improved water and sanitation, and literacy rates) were obtained from the United Nations Statistics Division (UNSD); World Health Organisation (WHO); and the United Nations Children's Fund (UNICEF) databases. 
All data was", "label": 0 }, { "main_document": "The idea of behaviourism has been around for a long time but was made famous by the work of Ivan Pavlov. He conditioned the behaviour of dogs to salivate at the sound of a bell by repeatedly associating the ring of a bell with presentation of food. B. F. Skinner extended this idea whilst working with pigeons to show that animals could be trained to perform various acts in order to get food, as long as the association was made between food and the desired response. Skinner always recommended positive reinforcement such as rewards over negative reinforcement or punishment, (Eysenck, 2002). Such ideas have gradually leaked into the classroom, to increase desired behaviour from children, and even into the National Health Service, to attempt to 'cure' psychological disorders. One major criticism of the behaviouristic approach addresses the claim that behaviour can be explained without reference to mental activity. Human beings have a sense of feeling and of free will; behaviourism disregards this and sees the personality as a set of outward reactions rather than a distinct entity. This means that psychological disorders are not examined for their cause or brain dysfunction but simply observed. As a treatment in the practice of mental health this is often dismissed by cognitive scientists developing intricate internal information processing models and by medical experts who view the brain in terms of chemical reactions, (Comer, 1992). Despite this opposition though, behavioural therapy is currently used for the treatment of many psychological disorders and can have excellent results. It is perhaps fair to say that the success of such therapy will depend mostly on the type of disorder and its origin, if known, as there are a number of disorders that are considered too inbuilt to be changed merely through training. 
Certain kinds of psychological disorder can be reasonably explained by the behaviourist model, and these tend to be phobias and anxiety disorders rather than personality disorders. Phobias are often explained as the result of classical conditioning, much like Pavlov's dogs. A person learns to associate a traumatic reaction (e.g. fear, shock) with a particular occurrence even though the level of fear is not in proportion to the possible danger. For example, contact with a very aggressive dog as a child may cause someone to avoid situations with dogs in future, so the fear is never unlearned. However, this can be effectively treated with behavioural therapy. One form is systematic desensitisation, where a patient learns to associate a relaxed, happy state with the idea of the phobic situation and learns to gradually accept it through the opposite of the original conditioning. This treatment is a highly appropriate one because the cause of the phobia is itself behavioural; the treatment works like the cause in reverse. Wolpe's (1969) original method of constructing a hierarchy of fears, training in relaxation and then graded exposure was updated by Shapiro (1989), who replaced muscle relaxation with induced eye movements. This proved very popular in the treatment of traumatic memories, with over 50% moving from clinical to", "label": 1 }, { "main_document": "The forecast of the realistic view of final demand for Wednesday 25 The aim is to maximise room revenue and room occupancy on Thursday, predicted at 55%, and Friday, at 45%. Conference bookings would already have been taken, so the planned market mix is 115, as 115 rooms are already booked. As Revenue Manager I would not look for demand from this segment. Independent business travellers have the advantage of the highest room revenue but, on the other hand, stay for only 1 or 2 nights. There is demand for 130 but I have chosen a planned market mix of 111. 
It is important not to let too many clients down, because relationships will suffer and customer loyalty will be affected. There is higher demand than rooms left to sell, so I know that they will definitely be filled. Rate B is preferred, but rate C will encourage a longer stay. Rate D is for loyal customers, so that they do not turn to competitors. Independent leisure travellers bring lower room revenue, but the hotel can encourage a longer length of stay to its own advantage in order to maximise capacity and revenue on Thursday and Friday, bearing in mind that leisure guests are likely to spend money in other departments such as the restaurant, shop, bar and room service because they are relaxing, have a carefree attitude and want to enjoy themselves. Reservation clerks will not be able to take bookings for 1-night stays, but rate H is offered for 2-night stays and a further discounted rate I for 3-night stays. Because demand decreases at weekends, the leisure market is targeted using special offers. I have chosen a market mix of 74 and there is demand for 100, so again I know that rooms will definitely be filled. The 25 Write an essay that identifies, explains and evaluates two important issues reported in the academic literature concerning the effective application of revenue (or yield) management in hotels in terms of their relevance to the management of this hotel. Revenue Management is a method which can help a firm to sell the right inventory unit to the right type of customer, at the right time and for the right price (Ingold 2000). This definition has been developed by Jauncey, Mitchell and Slamet (1995) as an integrated, continuous and systematic approach to maximising room revenue through the manipulation of room rates in response to forecasted patterns of demand. 
As Reservations manager of the Edinburgh hotel I have made planned marketing mix decisions based on the 'forecasted patterns' of the final demand situation for Wednesday 25 It is an underlying assumption that staff strive to meet the needs and expectations of customers and hopefully exceed them, but revenue management is objective and puts the customer second to the hotel (Ingold 2000). As Reservations manager I made decisions not according to what would satisfy the customer but according to how I could gain maximum room revenue for that particular day. For example, regarding the business traveller, there is forecasted demand of 130 but I have", "label": 1 }, { "main_document": "are arrived) and provides an expert chargeback handling process (UPS, n.d.). Fourthly, other marketing channels may be used by businesses to spread information. As one of the SMEs, Send-Me Services needs to reinforce its online brand image -- online communities and e-mail are probably the most cost-effective solution. Communication to customers can be placed in intermediary sites (such as the e-publication (Times) or other portals (Google or RRS)) to promote the service offers, via a banner linking to the company's site. Those media agencies tend to offer all the skills in content development, advertisement design and online promotion. This form of online advertising -- banner advertising -- is a small banner with keywords and graphics that pops up on a web page, and is normally charged by spot lease, click-through or ad impression rate (cost per mille (CPM)). On the other side, the Send-Me Services website can also earn revenue by carrying banner advertisements for other companies. Co-branding is a free way to promote through affiliated banner advertisements, such as banner exchanges and allied sites (Chaffey 2000, p249-51). Banners are a valuable method for Send-Me Services in the e-marketing process, but a problem is that many people simply disregard them. 
Special promotions can be sent to customers directly by opt-in e-mail or instant message on a monthly or periodic basis, without sending extra booklets to customers (Wilson, 2000). It is more of a push strategy, supported by customized databases, but it will cause antipathy if the company does not regulate it suitably. Nevertheless, large savings in time and expenditure (including the expense of material and human resources) are possible by removing the paper-based work from the process. Moreover, the second e-enabled aspect of the sell-side, customer relationship management, will also affect the performance of the business's sell-side. The marketplace performance includes the online and offline communication that carries the customer-company connection. The main prospect of a cost-effective e-business is to integrate those support services - technology-enabled selling (TES) (to attract customers to visit the web site to review the service and specification), e-business, call centers, and data warehousing and data-mining - to afford seamless customer service (Norris et al., 2000, p98). Technology-enabled selling (TES) focuses on customer self-service (helping customers arrange an online order, pay online and check the status of the order online). A customer-facing system can be used to improve customer service; techniques such as user guides, price comparison engines, chat platforms (forums), language selection and document management systems can assist visitors to analyse standard and technical specifications. In particular, the forum can be used to build customers' knowledge of and confidence in the services from others' experience, and it can also encourage the company to provide better service satisfaction for customers. 
Moreover, the usage of phone call centers is also a main supporting service; it can be enabled by computer-telephone integration (CTI) or Teleweb integration technology, which can be provided by eFusion, Ericsson and Lightbridge. Data-mining and data warehousing will define and capitalize on customers' historical buying patterns (customer-demonstrated preferences captured in customers' click patterns) to support decision making for driven selling, based on the", "label": 0 }, { "main_document": "non-overlapping concepts in order to support the view that a trustee's breach of her duty of care is essentially the same as tortious negligence, and that liability of trustees for breach of their duty of care is based on the principle of fault. Furthermore, we found considerable similarities between common law damages and equitable compensation, and that the compensatory goals of the two remedies are essentially the same. Bearing in mind the wide range of similarities between tortious negligence and trustees' breach of their duty of care, we argued that the same sort of analysis should be applied in determining appropriate limitations on the two remedies. We also proposed a plausible model for contributory fault in the context of liability of trustees for breach of their duty of care. Until an appropriate set of facts comes before the House of Lords, the principles that should be applied in determining apportionment of fault will remain unclear. However, in light of the above analysis, it is safe to say that there is room for the introduction of contributory negligence on the part of claimants into the realm of liability of trustees for breach of their duty of care.", "label": 0 }, { "main_document": "lack of protection from dismissal, which will be analysed respectively in detail in the first part of this essay. The next section will focus on the effectiveness of secondary action. 
Some other issues which deserve notice will be mentioned in the following part, though they will not be discussed in detail. The conclusion will provide a brief summary of, and comment on, the matters raised in this essay. The right to strike, a fundamental industrial action, is a civil liberty and human right. Nevertheless, 'There has never been a positive right to strike in Britain' (McIlroy, 1999: 523). According to the observations of different supervisory bodies, Britain is in breach of the International Covenant on Economic, Social and Cultural Rights (ICESCR), International Labour Organization (ILO) Conventions, the European Social Charter and International Human Rights Treaties in terms of the protection of the right to strike. Under Britain's law, anyone who calls for or organises a strike or other industrial action would be liable to legal proceedings unless they were given certain protection, which is provided as 'statutory immunity'. However, this protection is hard to obtain due to a number of strict regulations on the definition and procedure of taking industrial action. In order to be protected by the statutory immunity, the industrial action must first meet the requirement of being a 'trade dispute'. The law provides a detailed explanation of what constitutes a 'trade dispute'. In general, two main conditions must be satisfied. For one thing, the dispute must be only between workers and their own employer. This means it is impossible for unions to take effective action against the 'real employer' who has the power to make decisions that are capable of satisfactorily resolving the core issues of the dispute; action can only be taken against the subsidiary companies or associated companies who are 'technically the employer of the workers concerned' (ILO Committee of Experts, 1989, cited in IER, 2004: 15). This also restricts sympathy action taken to support a dispute by those who are not directly involved but are in pursuit of their social and economic interests. 
For another, the dispute must be 'wholly or mainly about employment related matters such as their pay and conditions, jobs, allocation of work, discipline, negotiating machinery or trade union membership' (DTI, 2005: 14). According to the ILO Committee of Experts, formerly the dispute was only required to have a sufficient connection with the specified matters. The current provision appears to deny protection to disputes where unions and their members have mixed motives, for example political or social objectives. The Committee also considered that it would often be very difficult for unions to determine in advance whether the dispute was merely related to the limited purposes, as the situation might change during the course of the conduct (IER, 2004). The narrowed definition of 'trade dispute' imposes excessive limitations upon the exercise of the right to strike. Thus it has been regularly expressed by the supervisory bodies that the definition should be broadened so as to enable workers to take industrial action to 'promote and protect their social and economic interests', and to lawfully take action 'against the de facto", "label": 0 }, { "main_document": "is Italian pizza style (Bold, 2002). Pizza Express was founded in 1948 and the first store opened at Wardour Street in London. It is a UK-based company and has approximately 300 stores with more than 6000 employees at present (Pizza Express Website). It is one of the most successful private restaurant companies in the UK. There are some examples of standardisation in Pizza Express. Firstly, it is evidence of standardisation that all stores have exactly the same food menu and beverages, with equal prices and equivalent portion sizes. Customers are able to order meals simply from the standardised menu, and their choices are limited to the menu. The food tastes the same in each Pizza Express store. Also, pizza restaurants such as Pizza Express have standardised pizza ingredients, which makes things clear for chefs when they make pizza. 
Moreover, the interior design is similar at each store, and the concept might be unified for an adult market. Additionally, the table layout at Pizza Express is unchanged. What is more, the uniform is regulated: for instance, the servers wear black tops printed with the Pizza Express logo, and trousers. In contrast, it might be customisation that the store design varies. Although one store has a bar lounge on the ground floor with the dining area and kitchen on the first floor, another store has a huge oven surrounded by clear glass on the ground floor, so that customers are able to see how the pizza is baked. Another fact is that Pizza Express has its own jazz club with a restaurant in London's Soho, so customers may enjoy having pizza while listening to jazz music (Tyler, 2001). Furthermore, when consumers order pizza or pasta, they can choose to add seasonings such as cheese and peppers. Therefore, the company combines both standardisation and customisation. As mentioned in the paragraph above, standardisation and customisation both address customers' needs and the management of service quality. The criteria for evaluating standardisation are based on quality. If the focus is on quality, the important point is how quality is standardised. Quality has intangible and tangible aspects: the intangible might be the delivery of services, and the tangible the goods provided. Service can be defined as the combination of outcomes and experiences delivered to and received by a customer, and it is considered difficult to measure the quality of services. The key, then, is that "total quality is about optimizing every process and output so that it fulfils the task for which it was designed, by identifying and achieving outcomes in a way that requires least cost and produces greatest value" (Johns, 1996). In addition, Brotherton (2006) points out that the benefits of standardisation will be cost savings in service delivery, planning and supply.
All things considered, standardisation is an operating system which can be used not only to deliver consistent quality to customers but also to provide products. Standardisation has an impact on service quality and might support staff in the service encounter and in service delivery (Sandoff, 2005). Thus, it is agreed that standardisation enables the delivery of equal quality for
Taylor initially developed his studies to prevent workers' control of output, or 'soldiering', claiming benefits for the firm in the separation of conception and execution, hence emphasizing the need for managers. Some argue, however, that the efficiency gains and cost reductions Scientific Management brought about were only made possible by the large body of immigrants arriving in America at the time. These immigrants provided cheap labour, and the simplification of tasks allowed them to work without possessing great skill or knowledge. One may even view Taylorism as part of a wider efficiency movement, linking it to the development of household appliances and the efficiency gains in household chores. Evidence of the increasing attention paid to management can be encountered even in more subtle contexts; for example, the New York library did not have one single item on management in 1881, but by 1990 it carried more than 200 (Shenhav: 1999), exposing the budding interest in the subject. Similarly, professionalizing courses for managers began to develop, also demonstrating society's increasing interest in management and its study. A new type of organization, adapting to the needs of firms and managers, began to develop, much more bureaucratic in structure. Weber viewed this bureaucratization of work as unavoidable in the growth of industrial societies, with the mounting presence of rules, hierarchy and centralisation within an organization, and emphasized the rational virtues of this new structure. He attempted to rationalize the social environment in a manner similar to technology's rationalizing influence on the physical environment, and through bureaucratization confirmed the need for a managerial class. Fayol, a less well-known contemporary, signalled the development of management theories and studies in Europe.
He viewed management as a set of five activities. Fayol's principles also required a bureaucratic
Following Keeton (op cit., p.9), there are two types of Credit Rationing: Type Hereafter, CR will refer to the type The first theoretical model to include the Imperfect Information assumption in explaining CR was Jaffee and Russell (1976). The presence of Moral Hazard behaviour by a group of borrowers who are indistinguishable In this situation it is possible to obtain an interest rate maximising the bank's profits below the level at which the loanable funds market clears. On the other hand, Stiglitz and Weiss (1981) emphasised the Adverse Selection problem. In such a case, banks know that a positive relationship exists between the interest rate they charge on loans and the riskiness of each project. As the risk of the borrower pool has a negative effect on banks' expected profits, and banks are not able to distinguish the risks of individual projects, they may decide not to increase the interest rate above a certain level. Again, it is possible to obtain an interest rate which maximises banks' expected profits below the market-clearing level. In the following subsections I will address in depth both the Jaffee and Russell (op cit.) and the Stiglitz and Weiss (op cit.) models, finishing the section with a discussion of the criticism of the explanation of CR via the Imperfect Information assumption. In Jaffee and Russell (1976) there are two types of agents: honest and dishonest. The assumption of how each of the two agent types behaves is central to the explanation of the Moral Hazard hypothesis. While the honest agents
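The non-monotonic relationship between the loan rate and the bank's expected profit that drives the Stiglitz and Weiss argument can be illustrated with a short, self-contained sketch. This is an illustrative aid only: the two borrower types, their success probabilities and payoffs are assumptions chosen for exposition, not figures from the paper.

```python
# Toy illustration of the Stiglitz-Weiss (1981) adverse-selection mechanism.
# Two observationally identical borrower types borrow one unit each; limited
# liability means the bank is repaid (1 + r) only if the project succeeds.
# All numbers below are assumptions chosen for illustration.

SAFE = {"p": 0.9, "payoff": 1.5}   # high success probability, modest payoff
RISKY = {"p": 0.5, "payoff": 2.4}  # low success probability, high payoff

def participates(borrower, r):
    """A borrower applies only if the project's expected profit net of
    repayment is non-negative: p * (payoff - (1 + r)) >= 0."""
    return borrower["payoff"] >= 1.0 + r

def bank_expected_return(r):
    """Expected gross repayment per unit lent, averaged over the pool of
    borrowers who still apply at rate r (the bank cannot tell them apart)."""
    pool = [b for b in (SAFE, RISKY) if participates(b, r)]
    if not pool:
        return 0.0
    return sum(b["p"] * (1.0 + r) for b in pool) / len(pool)

if __name__ == "__main__":
    for r in (0.30, 0.50, 0.60):
        print(f"r = {r:.2f}: expected return = {bank_expected_return(r):.3f}")
```

Under these assumed numbers, raising the rate past the point at which safe borrowers drop out (r = 0.5 here) worsens the pool and lowers the bank's expected return, so the profit-maximising rate sits below the market-clearing one and the bank rations credit rather than raising the rate further.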
Firstly, this essay will focus on the 'responses' offered by communitarianism, realism and cosmopolitanism to the dilemma of moral pluralism and universalism. The discussion will be constructed around issues such as the self, society and morality. Is morality socially constructed? Do individuals need the freedom to form, revise or change their conceptions of the good life? Is there an inescapable starting point or meta-narrative in evaluating the 'morality' of morals? On the one hand, universalism and cosmopolitanism have been accused of giving insufficient consideration to moral pluralism, neglecting local or national identity for the sake of a supposedly transcendent moral universal truth or ideal that may actually reflect just the European tradition. On the other hand, communitarianism has been accused of over-emphasising the moral priority of the community, arguing for an embedded self, which may be a constraint upon human freedom. This essay will argue that there is a need to blur this duality, as an emphasis on one extreme is misleading and potentially dangerous. Moral spaces need permanent negotiation. This essay will provide a critical interrogation of the nature and content of the debates about moral pluralism and universalism within IR theory. What is moral pluralism? Is the defence of a pluralist international society a way to preserve a static approach to international relations? Are there any universal moral values? This essay will give a critical account of the responses offered by IR theory to these types of question, responses that have usually revolved around dichotomies such as realism versus idealism or communitarianism versus liberalism and cosmopolitanism.
On the one hand, universalism and cosmopolitanism have been accused of giving insufficient consideration to moral pluralism and diversity, neglecting local or national identity for the sake of a supposedly transcendent moral universal truth or ideal that may actually reflect the Western European historical tradition. On the other hand, realism has been criticised for overlooking the interdependent fate of human beings and their equal worth as moral subjects. Similarly, communitarianism has been accused of over-emphasising the moral priority of the community over the individual, arguing for an embedded self, which is considered by liberals to be a constraint upon human freedom. This essay will attempt to show the overlapping nature of communitarianism and cosmopolitanism, indicating that an emphasis on just one of the two is actually misleading. The main focus will be placed on the moral issues advanced by the two main strands of argumentation. The essay will show how the IR responses have tried to prove the prevalence of one perspective, generally neglecting the degree of overlap between the two allegedly opposing standpoints. It will argue that if reconciliation between the two perspectives is sought, then the first step is to admit the relevant truths in both, since an either/or
operation for cure can be performed on about 70% of patients: 10% of lesions are not resectable at operation and 20% of patients have liver or other distant metastases
2. operative mortality rate is 2-6%
3. 5 year survival rate for curative resection is 55%
Complications such as obstruction or perforation adversely affect survival. Patients should be followed for recurrent, metastatic, or metachronous lesions. Faecal occult blood should be tested every 6-12 months and colonoscopy performed 1 year after operation. Surgical resection for recurrent lesions should be considered. Once colorectal cancer has metastasised, the average survival duration without chemotherapy is 3-9 months (1). Systemic chemotherapy is rarely curative in patients with metastatic colorectal cancer except, sometimes, where metastatic disease is confined to the liver and potentially resectable after chemotherapy. Important issues raised in this case include the Colorectal Screening Programme, the issue of stigma with regard to stomas, and the importance of taking thorough histories with regard to symptoms such as rectal bleeding, even when the presenting patient may not feel unwell. The Colorectal Screening Programme is currently only set up for those patients who are at particularly high risk of colorectal carcinoma (such as those with familial polyposis). With the incidence of colorectal carcinoma increasing, it is important to look at the public health issue of introducing a screening programme for more individuals. The drawback of such a screening programme, however, is the difficulty of engaging individuals in sending stool samples to be investigated. The stigma of undergoing such major surgery and being left with a stoma bag may also need to be addressed. The importance of diagnosing a rectal carcinoma in its early stages has been highlighted to me in this case. Correct cancer diagnosis, staging and grading are important in order to be able to make the most appropriate treatment choice.
The importance of giving the patient time to recover from major surgery and adjust to a new lifestyle following that surgery has been illustrated.", "label": 1 }, { "main_document": "There is no doubt that climate changes all over the world and this is not a scenario but the reality. The term 'climate change' sometimes is referring to all form of climatic inconsistency, but as Earth 's climate is not always the same, the term is best used to pinpoint significant change from one climatic condition to another. Although 'climate change' has become synonymous with 'global warming' scientists use the term in a wider sense including also natural changes in climate. (Global Change Research Centre National University of Taiwan) Comparing the last decade with previous it is easy to infer that climate has changed. In Europe, mean annual temperature has been increased by 0.8 During the twentieth century, precipitation has also been increased over Northern Europe by 10-40% (The Europe Acacia project, 2000). In the UK, the decade 1985- 1994 was warmer about 0.2 As a result the warmer months and seasons experienced in the UK especially the last year is a strong evidence of climate change. Finally the global atmospheric CO (Review of the Potential Effects of Climate Change in the United Kingdom, 1996) According to scientists the UK climate will become warmer. It is estimated that by 2050s, the annual temperature in the south east of the country will be 2 By the 2080s temperatures may increase more than 3 In addition, high temperatures during summer will become more frequent. By contrast, cold winters will become rare. It is also estimated that winters will become wetter and summer drier in the UK. By 2080s winter precipitation will increase by 20%. By contrast, summer in central and South UK will be drier, with 18% less rainfall than now. In addition, sea level will increase in the UK about 5cm per decade especially in south and east. 
(UKCIP, 2003) Climatic factors play an important role in the UK and make a great contribution to year-to-year production. Changes that may occur in the intensity and distribution of precipitation, combined with changes in CO2 (Review of the Potential Effects of Climate Change in the United Kingdom, 1996) There is no doubt that CO2 Increasing the level of CO2 (DEFRA, 2003) For example, although temperatures have little impact on lettuce yield, it has been found that an increase in CO2 In cauliflower the increase of CO2 Higher concentrations of CO2 In onions it has been found that crop dry weight was increased by 32%-44% as a result of CO2 In addition, onion yield was increased due to a higher level of CO2 (Hadley et al., 1997) Another example of the beneficial effect of CO2 Elevated CO2 All these findings indicate that important benefits for UK growers may arise in the future due to the increase of CO2 It is known that CO2 In higher concentrations plants use less water but use it more efficiently, and are more able to resist water stress. In consequence, growers will have more water-resistant plants, which is beneficial for horticultural production. (Smithsonian Environmental Research Centre, 1999) Apart from cultivated plants, weeds are also influenced by CO2 The rate
The distance from the nozzle at which the vortex ring decayed was recorded in two different ways. If the vortex ring was visible on the television screen at its point of decay, then this point could be compared to the initial calibration and the distance noted. However, if the ring was not in the field of view of the cameras, the distance was recorded by comparison with a distance marker attached to the tank. The television method is more accurate and depends much less on personal judgement, yet it was not often possible. The second method was still considered accurate to roughly +/- 1 cm, allowing for parallax errors and personal judgement. This method is still seen as suitable, given the impracticality of alternatives such as using more than two cameras in order to cover a larger recording range. In order to run an experiment, appropriate parameters for piston velocity and piston stroke length needed to be programmed into Motion Planner. It then needed to be checked that the piston was in its starting position, as shown by the top television screen or by a particular command in the program. It also needed to be checked that the water inside the tank was steady, so as not to interfere with the movement of the generated ring. The record button on the VCR could then be pressed, followed by an appropriate number of bursts of dye using the dye timer. Finally, the 'go' command could be executed in the program in order to move the piston and hence generate a vortex ring. Once the vortex ring was generated, its movements and actions could be noted, and at the point at which it was destroyed the VCR recording could be stopped; this allows the noted destroy time to be recorded. The destroy distance would then be noted, or calculated later from the video recording. This process was then repeated until all the appropriate data had been acquired.
It was found that analysis of the video to find ejection and decay times could be done between sets of data, in the time it takes for the tank water to settle. The initial concept for illustrating the importance of a nearby free surface was the use of a Perspex tube to simulate a smaller tank. However, this was relatively unsuccessful in achieving what was
Marxists are fond of dialectics, and the establishment of a strong working-class identity with a left-wing ideology in Chile has the hallmarks of a dialectical development, with the 'two parallel traditions of almost uninterrupted parliamentarism and of violence, repression and militant mass struggle' While on the one hand, the bitter struggle of Chilean workers against the establishment with the miners' rebellion, the repression of Ib De Vilder, Tomic in Zammit, The Chilean experience then, with its parallel tradition, is the formative factor in explaining the strength and the trajectory of the working-class left. The experience had not been good for the poorer people of this relatively resource-rich country, and the usual suspects in all Latin American countries reared their heads: dependence, inflation and repatriation of profits overseas, a scourge that led to a typical 'enemies at home and abroad' rhetoric from leaders such as Allende, who was not straying from the general current of opinion when he diagnosed the problems as being because 'Chile is a capitalist country, dependent upon imperialism and dominated by sectors of the bourgeoisie allied to foreign capital' The material hardships and perception of inequity and exploitation of the poor Chileans fostered resentment, but the outraged determination necessary for revolutionary action was kindled by the enforced impotence of workers and labour in the face of successive demonstrations and strikes that were 'cruelly repressed' Allende, Tomic in Zammit, Taking a specific focus on the Yarur cotton mill, significant as the first overthrowing of factory rule in an acceleration of revolution after Allende was elected, the evidence of Chilean factors in radicalising the labourers is apparent. As with workers all over
It might be a good strategy to organise political support and mobilisation to go beyond the categories of heterosexuality/homosexuality, and to create a society where being a homosexual or a heterosexual makes no difference to one's rights and dignity. The gay and lesbian rights movement, apart from fighting against heterosexism, should also strive for the broader goal of establishing a just social order free from racism, economic deprivation and patriarchy. Thus, efforts should be made to create a world where marital status is no more important than one's driving licence, and where the distinction between homosexuality and heterosexuality matters no more than the colour of one's eyes, so that the former need no longer resort to 'pride marches' to proclaim its identity.
A, B, D and E showed a very good fit; however, C did not. A possible reason is that the polymer chains stopped propagating before 28 hours, as indicated by the low final % conversion. Early termination could be due to combination of two radical chains or to disproportionation. Another possible reason is low initiator efficiency, also caused by combination of two initiator radicals or by disproportionation. This is reflected by the high PDi of about 2. Similarly, a PDi of about 2 can be observed for polymerisation D at 28 hours. However, this is not due to early termination but to a low [M The % conversion was so high that the growing polymer chains lacked monomer radicals with which to propagate, and were therefore forced to terminate via combination. C showed a higher rate than D, which suggests that MMA might give a higher rate of polymerisation than BzMA. Exceptions were the 28 hour samples of C and D, which all gave a PDi of around 1.1, indicating a narrow molecular weight distribution with this living radical polymerisation, i.e. the polymer chains are of very similar length. A possible explanation for this is that BzMA is more bulky and therefore reacts more slowly, hence the rate of propagation is lower. This was also supported by the % conversion of BzMA relative to MMA at 4 hours: the % conversion ratio was 100% MMA : 0% BzMA. This is only an estimate, as the block copolymer would consist of a block of just MMA monomers, then a mixture of MMA and BzMA, followed by just BzMA monomers. The length of each block varies according to the reactivity of the monomer. The pattern of the mixed block is hard to predict: it could be random or could be defined. The exact length of each block is not known; however, it can be predicted.
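The kinetic bookkeeping behind these comparisons (conversion, theoretical Mn, and the first-order ln plot) can be sketched in a few lines of Python. This is an illustrative aid I have added: the formulae are the standard living-polymerisation relations, while the [M]0/[I]0 ratio and the sampled monomer concentrations are assumptions, not data from this experiment.

```python
import math

# Standard living radical polymerisation bookkeeping (illustrative only;
# the ratio and concentrations below are assumed, not experimental values).
M0_OVER_I0 = 100          # assumed initial [M]0/[I]0 ratio
M_MONOMER = 100.12        # molar mass of MMA in g/mol

def conversion(M, M0):
    """Fractional conversion computed from free monomer concentrations."""
    return 1.0 - M / M0

def theoretical_Mn(conv, ratio=M0_OVER_I0, m=M_MONOMER):
    """Theoretical number-average molar mass,
    Mn = ([M]0/[I]0) * conversion * M_monomer (end-group mass neglected)."""
    return ratio * conv * m

def first_order_ordinate(M, M0):
    """ln([M]0/[M]); a linear plot of this against time indicates a constant
    radical concentration, i.e. living behaviour with negligible termination."""
    return math.log(M0 / M)

if __name__ == "__main__":
    M0 = 1.0
    for M in (0.8, 0.5, 0.2):   # assumed sampled free-monomer concentrations
        conv = conversion(M, M0)
        print(f"conversion {conv:.0%}: Mn ~ {theoretical_Mn(conv):.0f} g/mol, "
              f"ln([M]0/[M]) = {first_order_ordinate(M, M0):.3f}")
```

The linearity of ln([M]0/[M]) with time is what the kinetic plots in the text are testing; a PDi near 1.1, as for the well-behaved samples, is consistent with chains close to the uniform degree of polymerisation predicted this way.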
If BzMA was added at an", "label": 0 }, { "main_document": "permitted (which would mean destroying an organism that has the potential to develop into a human being), then it may open doors to more destructive acts (Fischbach and Fischbach, 2004). To date, the use of stem cell derived from created embryos as well as embryos stored frozen in IVF clinics is forbidden by article 18 of the European convention on human rights and biomedicine (Wert and Mummery, 2003; Fischbach and Fischbach, 2004). Nevertheless, the ban of the derivation of stem cell from embryos through governmental sponsorship (public law 107-78, section 513a) has lead to the research being sponsored through private and corporate funding (Gearhart, 1998). Stem cells, especially pluripotent stem cells are of great interest in the research field. These cells although very different from one another and derived from different origins are connected by their amazing properties and medical potential. Table 1 summarizes the different and important aspects reviewed. The importance and great potential of pluripotent stem cells has been recognized amongst the different communities (scientific, clergy, governmental) but have also raised concerns about their sources and use. If the creation of embryos for the derivation of pluripotent stem cells is reconsidered and allowed, what criteria will be used for cell-based therapies? Could the use of embryo be just allowed for therapeutic cloning? Adult stem cells might be the answer as there are less ethical issues surrounding them. But biologists are still trying to find ways to grow them in cell culture and manipulate them to generate specific cell types.", "label": 0 }, { "main_document": "their properties or selectivity, due to their diversity. 
A model protein for TRP channels is the vanilloid receptor 1 (TRPV1), as it is the only one so far that could be isolated and characterised on the basis of its ability to bind a ligand, in this case capsaicin, the molecule responsible for the hot taste of spicy food. Among many other nerve cells, TRPV1 is expressed in nociceptors of the skin, which appear to increase their cytosolic concentration of Ca2+ One is obviously capsaicin, which activates TRPV1 at a concentration as low as 1 µM, as well as substances with a pH below 6 and temperatures above 40-42 °C. It is not certain how exactly touch in the form of pressure or stretching can activate touch-sensitive receptors. However, mutations of On the other hand, activation by temperature might be attributed to conformational changes of the channel, since essentially all proteins are temperature sensitive. But there are several TRP channels with unusually high temperature sensitivity, which are found preferentially in pain- and temperature-sensing neurons of the skin. Besides TRPV1, TRPV2 is thought to be susceptible to noxious heat above 50 °C. There is little selectivity for ions, but especially Ca2+ Another sense that relies on mechanical transduction is hearing. The mechanisms involved in the detection of sound waves, however, are far better understood than the detection of other mechanical stimuli. And in contrast to the several different touch- or temperature-sensing cells with their vast number of receptors, only one type of specialised epidermal cell exists in the cochlea of the human ear, namely the hair cell. This is not to say that there are no other cells involved, but hair cells are the primary sensory cells, with organelles that enable purely mechanical transduction. These organelles are stereocilia, tiny cylindrical, actin-filled rods of different lengths that emerge from the upper cellular surface in a hexagonal array.
A sound wave entering the cochlea sets the stereocilia in motion, causing them to slide along one another and exert pressure on tip links, which are fine filaments connecting the stereocilia. Tip links are thought to be directly connected to Ca2+ channels. Influx of Ca2+ changes the membrane potential. Changes in potential release a neurotransmitter from the basolateral surface of the cell to synapses connecting to the auditory nerve. The resulting postsynaptic signal leads to a nerve impulse, which is then transmitted to the brain. Rapid return to the resting potential is possible through K+ specific channels, which allow the ions to leave the cell. Hair cells that are tuned to higher frequencies express channels with smaller relaxation time constants than cells tuned to lower frequencies; moreover, the number of K+ channels in a cell increases with the preferentially detected frequency of this particular cell.4, 8 Aside from being remarkably temporally accurate and sensitive due to the lack of slow chemical processes and employing a direct mechanical approach for transduction, amplification of sound waves is another outstanding feature of auditory transduction. The proposed, and as yet most likely, mechanism is thought to involve the hair bundle organelle. Unlike amplification of signals", "label": 0 }, { "main_document": "visualise the key sites of interest in planning. It has been identified that current land cover and land uses (woodland or grassland and agricultural land) would be the key elements for further evaluation. Evaluation criteria may include the vicinity of the existing forests or woodlands in order to enhance the wildlife value by extending the existing habitat, the importance of current land uses to local communities, or preferences of local residents on the structure and size of buffer zones. There is much potential for the produced maps to be applied to similar problems. 
In relation to the quality of visualisation of information required, this outcome has successfully represented the specific sites on which management planning and further evaluation should be targeted. This report has demonstrated the potential and limitations of a GIS approach for a specific problem associated with land use planning in a real-world situation.", "label": 0 }, { "main_document": "infringements because they are not substantial enough to reach the "unfair" point covered by art.82 EC Treaty, even though the price increase can cause harm to consumers. The difference with leveraging is that if it has an exclusionary effect, it will certainly fall under the prohibition. Additionally, leveraging is much more likely to be brought to the attention of rivals or competition authorities, who notice changes in the market rapidly. Based on: Sven B Volcker; "Leveraging as a theory of competitive harm in EU merger control", Common Market Law Review. New York: June 2003. Vol.40, Iss. 3; pg. 581. Leveraging practice usually tends to lower rivals' profits by affecting their business In the case of Guinness Grand Metropolitan Similar concerns were discussed in the GE/Honeywell case Thus, it may be difficult to observe how it would convince a sufficient number of customers to purchase the tied good and, as a result, achieve the expected foreclosure, unless the merged entity reduces the price of the tying and possibly the tied product. For example, through diverting demand to the combined entity - Sven B Volcker; "Leveraging as a theory of competitive harm in EU merger control", Common Market Law Review. New York: June 2003. Vol.40, Iss. 3; pg. 581, para 2.2 Guinness Metropolitan IV/M.938, [1998] OJ L228/24 So called portfolio effect; ibid, para 40 GE/Honeywell case T-209 and 210/01, ibid, para 353 Sven B Volcker; "Leveraging as a theory of competitive harm in EU merger control", Common Market Law Review. New York: June 2003. Vol.40, Iss. 3; pg. 
581. It is a very interesting issue why practices such as leveraging occur more often in the EC than in, for example, the US market. American competition authorities are less worried about the effect of conglomerate mergers. Secondly, it is observed that companies operating on the common market are afraid of legal constraints imposed by art. 82 EC Treaty. Thus it is a questionable issue for the European Competition Authorities whether the present approach is reasonable and creates the desired effect on the market. Maybe the Commission needs to provide a "time-frame" within which the effect of a conglomerate merger should be considered, as it has indicated in horizontal cases, where "countervailing factors" arising more than two years after the entities have merged may be recognised as too speculative. Therefore, the Commission's "predictive" powers would be limited by the available evidence, thus all "harming" speculations might not be taken into account. It is hoped that the situation will be clarified in the near future and that the Commission's approach will follow the US one. There is a need for setting up a threshold against which merging companies would be able to examine their conduct in advance in order to be aware of problematic issues raising competition concerns in conglomerate cases. In Siemens/Drägerwerk, the Commission's decision of 30 April 2003, which emphasizes the need to prove that the potential harm, based on the facts of the case, is likely to result from the merger; supra note 4, page 281", "label": 0 }, { "main_document": "The aim of this experiment is to determine the composition of a commercially available alcoholic beverage (Strongbow) by analysis using the technique of gas chromatography. The basis of chromatography lies in a separative process between two phases, these being the stationary phase and the mobile phase. 
The mobile phase and injected sample flow across the stationary phase, and each component of the sample will be distributed between the two in differing amounts due to their relative affinity for each phase. The distribution ratio is determined by the physical and chemical characteristics of the sample; however, it is also affected by operating conditions such as temperature and pressure. Temperature must be carefully selected as a compromise between elution time and resolution quality. A high temperature is needed to cause vaporization of the volatile liquid; however, elution time is then very fast and resolution is therefore poor. Lower temperatures, however, cause spreading of the peaks and therefore sensitivity is reduced. Ideally a chromatogram will show a series of separate peaks, each of which corresponds to a component of the injected sample. The sequence of elution corresponds to the retention time of each component upon the stationary phase of the GC column. The area of each peak is related to the concentration of each component and therefore gas chromatography can be used as a quantitative method. During the specific process of gas chromatography, a sample is injected into a stream of gas, which is programmed at a specific temperature, usually 50 °C. The column temperature can then either be kept constant (isothermal) or the temperature can be programmed to progressively increase to improve the separation. The components of the sample will then distribute themselves according to their relative affinities for the stationary phase, and are then recorded and quantified by measuring a physical property. The most common detector to find in a GC system is a flame ionisation detector. 
To begin, a calibration mixture was prepared in triplicate of the following composition: 1000 µL of 25 mg% propan-1-ol internal standard solution and 400 µL of 80 mg% ethanol standard solution; this was taken by micro syringe into clean dry clip-top vials, which were immediately capped to prevent loss of ethanol. In an identical manner, a second calibration mixture was prepared in triplicate; this was of the following composition: 1000 µL of 25 mg% propan-1-ol and 400 µL of 200 mg% ethanol solution. Vials were then capped, again to prevent loss of ethanol. A sample of the alcoholic beverage (Strongbow) was then prepared. To do this 20 mL of the alcoholic beverage was measured and placed into a volumetric flask; this was then degassed using an ultra-sonic bath (for 15 minutes). From this degassed sample a solution was prepared in triplicate by pouring the solution into a beaker and taking 2 mL of the degassed drink by micro syringe, placing this into a 50 mL volumetric flask and diluting to volume with distilled water (dilution factor of 1:25). The samples for analysis were then prepared by taking 1000 µL of 25 mg% propan-1-ol internal standard solution and 400 µL of the diluted drink (also from a beaker) by micro syringe.", "label": 1 }, { "main_document": "Having a BEng or MEng from an accredited university course does not mean entering the stage of being a professional engineer, simply because there are so many more qualities that a person needs in order to become a professional other than passing exams and tests. But there is no doubt that a degree would give knowledge and experience which would help and enable the individual to become a professional in the future. An example would be African tribal builders who build and design shelters for their villagers, practising the profession of "engineer" within their village community and benefiting "human kind" on a smaller scale. Performing work commonly known as "what engineers do" does not mean being professional. 
What do engineers do? The answer is hard to define, since what is commonly known as "what engineers do" is based on the viewpoint of societies and therefore might be biased in one direction, and it is hard to judge whether "what engineers do" is right or wrong. An example would be Nazi engineers. In Nazi Germany, engineers who built and designed warfare machines used for killing innocent people were considered professional engineers by their society. However, within other societies (e.g. the Jewish community) these so-called "professional engineers" became nothing more than a group of mass murderers, never mind "engineers" or the professionalism that they proclaimed. Being a chartered engineer does not necessarily mean being a professional engineer, even though most societies in the world equate "chartered" with professional. This is because professional codes are set by different institutions, and therefore they are likely to differ between societies, and might be biased. Therefore it is hard to judge whether the codes are morally correct or wrong. However, in order to become chartered, the engineer would already have gained many abilities and qualities which have approval from an institution through qualifications. Also, there are professional codes which the engineer would have to follow in order to become chartered. Therefore there is no doubt that becoming chartered is a big step towards being professional, proving the ability to perform and behave morally towards the subject of engineering, with the codes there to guide engineers towards being as professional as they can be. Acting in a morally responsible way as a practising engineer could mean being professional. This is because acting morally means acting in a way that takes into consideration, and would be beneficial to, the environment, mankind, associates, family and the engineer himself. 
However, acting "morally responsible" is hard to define, since the meaning of "morality" can differ between perspectives. It could easily be affected by, for example, religion, government, family, culture, etc. Therefore there are still doubts on whether acting morally responsibly means being professional or not. An example would be the engineers who built nerve-gas bombs for Aum Shinrikyo in the Tokyo gas attack of March 20, 1995. From their perspective, it was morally correct to kill others in the name of what they called "helping others to heaven". After all, I believe that in order to become truly professional, the engineer", "label": 0 }, { "main_document": "most of them are not true. Germans for example wear leather trousers at the Oktoberfest, but not every day as some people believe we do. Living with, travelling and talking to people from different cultures gave me a better understanding and increased my sensitivity. As I am aware of cultural diversity through daily experience, I can handle many issues easily. However, I often catch myself trying to impose my own cultural behaviour and thinking on others and expecting them to react similarly to how I would. This is especially true in work situations. A common stereotype about Germans is their tendency towards discipline and over-organisation (Hall & Hall, 1990). I believe that there is some truth in this stereotype. At work I try to organise everything precisely, which I then expect from others, too. This is where I am sometimes confronted with cultural problems, as other people might tackle problems differently, but also reach the same goal. My understanding is still based on my learning in my home culture, although being in the UK for over two years has widened my point of view. Experience in or knowledge of a culture is not equivalent to intercultural competence (Brislin, 1986). The author, however, states that cross-cultural training can assist in overcoming the barriers to building competence. 
For successful and smooth intercultural communication, I believe that it is very important to be able to listen and to observe. Once you are aware of and sensitive to differences, you are able to manage cultural diversity. One example of the complexity of communication across cultures is described by Hall (1981). The author identified two communication styles: low-context communication and high-context communication. In low-context cultures, such as Germany, communication is very direct and verbal. In high-context cultures, such as the UK or Japan, gestures are important and people have to read between the lines in order to gain a full understanding. Difficulties I came across were often based on understanding the 'hidden' meanings, which then led to misunderstandings. Due to my experience in the boarding school as a student and later as an employee, I had many opportunities to talk to people. Working as a social organiser included communicating with international students on a daily basis. Organising events, asking students for feedback, working in teams and sometimes advising students about different issues continuously improved my skills. However, I still observe, especially in group work situations, that I impose my own culture on other group members. Continuous development is necessary, and especially in a cross-cultural context, training can improve sensitivity to cultural differences (Brislin & Cushner, 1996). Working under pressure in cross-cultural teams is difficult for me due to my own personal values and beliefs. As explained earlier, I tend to impose my working style on people without considering their needs and beliefs. Furthermore, I feel I have to improve my understanding of the 'hidden' information in gestures, facial expressions and tone. In this incident (see Appendix A) it is not only the clash of different cultures but also of different generations, which all have different values, traditions and beliefs. 
Knowing", "label": 0 }, { "main_document": "is therefore hard to compare one entity to another J. R. Dyson, 'Accounting for Non-Accounting Students', Sixth Edition, Prentice Hall, 2004, p254 Another profitability ratio like the return on shareholders' funds that can be considered to evaluate the performance of Renold Plc is the net profit or profit margin ratio. This ratio shows the amount of profit for each pound of turnover that the entity generates. As with the return on shareholders' funds, the net profit ratio for Renold Plc (Graph 4) shows that the company had a non-profitable year during the last financial year. The group profit and loss account within the annual financial report for 2005 of Renold Plc states that the company made a loss before tax of Within the narrative section of the annual financial report various reasons are explained for the performance of the company. Robert Davies, Chief Executive, explains that commodity prices, in particular the price of steel, along with exchange rate movements, have caused significant increases in input costs within the entity. ' As the majority of operations of Renold Plc require steel for the manufacture of various components, especially within the power transmission - gears and chains division, the increases in steel prices globally affected the company greatly. Graph 5 illustrates the large increase in global steel prices during the first half of the year. Roger Leverton, Chairman, explains that the increase in steel prices had a major effect in the second half of the year; there was an average increase in costs of approximately 40% within the Group. ' The increase in steel prices has meant that the profit for the company over the past financial year has been negative. Although the company has increased sales over the last financial year, it has found it hard to recover the costs, particularly from original equipment manufacturers. 
Looking ahead from the end of the last financial year, 31 The slight decrease in the price of steel, along with other operating cost reductions that are being implemented by Renold Plc, aims to provide a profit before taxation for the next financial year. It cannot be relied upon that steel prices will decrease any further, or that they will not increase within the next financial year, so various other methods of operating cost reduction will be needed to recover the increase in the cost of steel. According to the Annual Financial Report 2005 the company is outsourcing production to lower cost countries and implementing lean manufacturing techniques within its divisions. Another large impact on Renold Plc over the last financial year was the weakness of the United States Dollar against the Pound. Graph 6 shows the weakness of the Dollar for the last three months of the last financial year up to January 2006. Robert Davies, Chief Executive, explained that the strength of the Euro had "aggravated" problems that the chain division had faced, as the majority of the production for the division was in Europe whereas the sales growth had derived from dollar-based economies. This is extremely beneficial for Renold Plc as the exchange rate will not", "label": 1 }, { "main_document": "companies. It is usually applied to every factor of supply chain management. Compared to other business trading networks, it is affordable and easy to establish and maintain. Many kinds of operations can use it, such as order details from suppliers and customers, and payments to suppliers and from customers. E-business: This technology is based on the internet. The club can use it as a tool for purchasing because of its features - wide reach and abundant resources. At the same time, it saves cost. Management Information System (MIS): It concerns the flow of information collected, exchanged, operated and used for management. Planning and control activities always use this system. 
Examples include demand forecasting, inventory management, order processing and quality management. In this case, the club adopts an ERP system for the whole organization. Automatic Identification Technology (Auto ID): the club considers using Radio Frequency Identification (RFID) for product control during the operations of manufacturing, distribution, storage and sales. The strong point of this technology is that it helps the company check exactly where every product is in the supply chain. Cash flow is vital for every company. The club had 700,000 pounds to start the business. They can forecast the cash flow by evaluating costs and earnings. At the beginning, the club collects money from investors. After successful operation and development to a larger scale, they can enter the stock market and collect money from stockholders. The business plan of Fantastic Fitness Club considered the basic elements of overall operations, but it is still too general and needs more research or study to complete it. The purpose of this plan is to apply theoretical knowledge and tools to a practical business environment. In practice, the company needs to consider things more carefully in the real world.", "label": 0 }, { "main_document": "a development of new business processes and practices stimulated by internal market dynamics. All these effects were in line with the removal of the four movement barriers. For example, removal of technical barriers would allow the existence of a single set of technical specifications. This in turn would reduce the cost of manufacturing, research and development, and marketing. Reducing the cost of intra-state trade was a clear aim of the single market rationale. The costs of 'non-Europe', i.e. not integrating a Single Market, can be read as an alternative aim for the creation of a Single Market. The Cecchini Report (1988) was crucial in identifying the possible benefits of a Single Market. 
Cecchini estimated that the economic benefits of a single market were between 5.8% and 6.4% of total EC GDP. Cecchini also argued that there were significant benefits from improved government budgets as a result of "second round macroeconomic effects". It is important to appreciate the implications of not creating a single market in order to realise the reasons for its establishment. From McDonald & Dearden (1994) p.25, based on the Cecchini Report. Potential benefits are some of the aims of the rationale for the Single Market. There are impacts on the macroeconomic level; elimination of customs delays and costs, increased competition in public markets and liberalisation of financial markets. Estimations on completing the internal market were that GDP would grow 4.5% in the medium term. Consumers were also set to benefit from lower prices, greater choice of goods and services, and availability of work within the EU. Specifically, consumer prices were estimated to fall 6.1% in the medium term. Adapted from various pages in Emerson. Non-economic drivers of a Single Market are important, such as provision of public goods (defence, law and R&D), elimination of negative externalities (through environmental policy) and greater social and political integration. The Single Market was also seen as a tool for combating racism, bigotry and enhancing social understanding between member states. One aim was also to combat the ageing populations faced by the most developed countries, as workers from countries with younger demographics would be able to deploy their skills in other states. Leading from this was a mechanism to allow those with skills and experience to move freely. EU single market creation also aimed to break down language, societal and cultural barriers that existed amongst European nations. Results of Single Market creation should be considered when assessing how successful the aims and rationale have been. 
European Commission estimates suggest 2.5 million jobs have been created and 800 billion euros of extra wealth created since the inception of the Single Market in 1993. However, estimates by PwC claim that less than half of all companies think the Single Market has had a positive impact on them and 90% of businesses think that significant barriers remain. For example, trans-frontier labour movement is negligible within the EU, with only a few specific groups seeming willing or desperate enough to move. In conclusion, I think the rationale for the creation of the EU single market was the removal", "label": 1 }, { "main_document": "the cylinder every stroke. Not all of this indicated power is available as useful work from the engine ('brake power'), as some is lost in friction driving things like pumps, valves, etc. The graphs shown in figures 1 to 6 were generated using the raw data obtained on both the lab sheets and Microsoft Excel spreadsheets in the attached Appendices. graph (fig.1) shows that for both fuels, as the load increased, mechanical efficiency decreased. At a higher load than tested the engine would therefore have stalled due to lack of power. The Biodiesel appears to have a higher mechanical efficiency than Diesel. graph (fig.2) shows that for Diesel fuel the thermal efficiency rose as the load increased from 0 to around 15N. From 15N the thermal efficiency rose only slightly to a peak at around 36 newtons load. The engine therefore would have been at its most thermally efficient at this load and temperature. From 36 to 50 newtons load the thermal efficiency decreased, as more of the energy in the fuel was lost as heat as the engine became too hot. Biodiesel followed a similar trend to Diesel but burnt at a higher temperature at every load and so was overall less thermally efficient. Biodiesel appeared most thermally efficient at 25 newtons load, with thermal efficiency tailing off rapidly above this load. 
graph (fig.3) shows that for both fuels, as the load increased to approx 7 newtons, the brake thermal efficiency increased at similar rates. Above 7 newtons load, however, both fuels continued to increase brake thermal efficiency but at a lesser rate, with Diesel performing better overall compared to Biodiesel. Both fuels seemed to give maximum brake thermal efficiencies at around 360-380 N m-2 of Brake Mean Effective Pressure, above which their brake thermal efficiencies either dropped significantly (Biodiesel) or remained constant (Diesel). (fig.4) shows that when the engine burnt Diesel fuel on average 1.624 kW of power was lost due to internal friction. Biodiesel fared better in that only 1.221 kW of power on average was lost this way. Biodiesel having a better natural lubricating property than Diesel could explain the lower friction. The Air/Fuel ratio for both fuels decreased as the load increased. Diesel ranged from 50:1 at 0 load to 15:1 at 50N load. Biodiesel ranged from 43:1 to 11:1 at 47N load. This lower Air/Fuel ratio could be explained by the presence of excess Oxygen on the Biodiesel molecules. More Oxygen means the fuel requires less air with which to burn. (2) rates for Biodiesel were around 10% higher than Diesel, possibly due to the fact that Vegetable oils have a lower cetane number (40) compared to Diesel (45-50) and an overall lower energy density of 37-40 MJ kg-1 compared to 45.9 MJ kg-1 for Diesel. (2) (Fig.5) shows that when the engine was burning Diesel at loads between 0 and 15N a greater amount of power in kW was being lost through heat out of the exhaust system than the engine was producing in kW! 
At loads above 15N the engine only just managed to produce slightly more power (max", "label": 1 }, { "main_document": "distribution channels, advertising media and consumer purchasing habits (Appendix 1), having increased online bookings (Hitwise, 2007), and led to a shift in hotel strategies towards direct reservation systems (Hospitality eBusiness Strategies, 2004). Further, hi-tech in-room features are also a prime focus of hospitality technology development (Hotel Online, 2005) within the phenomenon of modern interior design, aimed at satisfying and impressing customers with the latest developments (Hotel News Resources, 2005). With regards to room infrastructure, bathroom maintenance and innovation require special attention and action from hoteliers in the U.S. and Canada, as it is an area from which customer dissatisfaction mostly stems (Appendix 1). Canada is an important oil exporter (Appendix 1), and natural gas is significantly used in several industries and sectors, including hotels (EIA, 2004). The significantly high energy consumption in the region is attributed to geographical vastness, cold climate, high living standards, and the low cost and high availability of energy (EIA, 2004). Consequently, Canada is determined to improve its environmental position and thus renewable energy sources are fast gaining pace, currently above the average rate of high-income countries (World Bank, 2006) (Appendix 1). It is highly likely that both governments will soon take a stronger position on climate change (The White House, 2007; EIA, 2004) with significant policies, rules and regulations on waste, pollution and energy, and therefore the hotel sector will be forced to change its ways and adopt more environmentally-friendly approaches to its operations. The sector is also particularly sensitive to natural disasters and human conflicts. 
However, through advanced economies of scale and the experience of 9/11, the North American industry has shown its rapid response to crises and the ability to recover speedily, which, reinforced by local communities' constant eagerness and planning in crisis management (Ernst & Young, 2007), reassures developers, despite increasing threats of disasters, of the sector's efficient coping with volatile conditions. There is a medium threat and evidence of entrants in the hotel sector in North America (Rosszell, 2006; Beasley, 2007). This comes as a response to increasing demand (Appendix 1) and still reasonable construction costs (Rosszell, 2006), though the latter is only evidenced in Canada, therefore attracting previously regional companies. Groups such as Taj Hotels, Shangri-La and Jumeirah have already started their expansion in the west, and brands undergoing fast expansion and consolidation in Europe, Africa and the Middle East, such as Rotana, Oberoi and Rezidor, may be just a step away from expanding further into other markets including North America, though this may be hindered by Canada's strict policies on foreign investment. Although a major increase in hotel room supply may suggest negative effects on RevPAR, this is counterbalanced in Canada by the significantly rising demand (Rosszell, 2006), which in turn will further attract not only new companies to open sites in the country but also existing brands to enter the market with innovative hotel concepts (Appendix 2). Buyers have medium to high bargaining power in the hotel sector in North America. On one hand, ruinous price competition ultimately leads to low or even non-existent margins, thus consumers no longer expect price", "label": 0 }, { "main_document": "addition, however, one could also envisage less rigidly demarcated production activities, as the following example demonstrates. 
Although Souvatzi (2000: 108-110, 121) argues that area X8 at Dimini shows features of functional differentiation (concentration of incised pottery, tools and shell debris), she interprets this area as a multi-functional and flexible space (for making pottery, tools and shell ornaments) rather than a workshop for one type of artefact. Furthermore, it has been argued that craft specialisation and exchange were not cumulative or teleological processes and therefore these may not be placed within the context of social evolution progressing towards complexity. While the increased visibility of prestige items in the LN and FN and the possibility of attached specialists (i.e., craftspeople producing exclusively for an outsider authority) could relate to the appearance of some kind of hierarchies (Kotsakis, 1996: 169; Perlès). In other words, the LN and FN hierarchies, if there were any at all, may have been the result of wider social and cultural currents to which craft specialisation and exchange adhered and had to adapt. For most of the Neolithic, one would be inclined to agree largely with Perlès. This not only signifies that the production of lithics, pottery and prestige items were part-time activities, but could also imply that there was a wider range of specialist roles (e.g., farmers, pastoralists, fishermen, priests). Moreover, the fact that these "heterarchical" and complementary roles (Nikolaidou, 2003b: 500) were organised and sustained in the Neolithic lies at the centre of the complex social dynamics of Neolithic communities. Similarly, Souvatzi (2000: 164-165, 221), with reference to LN Dimini, emphasises that the complex social relations pertaining to the production and circulation of various artefacts do not constitute evidence for social inequality but could inform of social integration and cohesion. 
Under this light, if craft specialisation and exchange played an important role in holding Neolithic societies together, then the observed stability and longevity of the Neolithic village (Demoule and Perlès) becomes easier to understand. There would then be no need to follow Perlès. Thus, the existing evidence in the archaeological record, including even the various ambiguities associated with it (such as the questionable visibility of lithics production centres), when appreciated more holistically in the context of a 'successful' mode of Neolithic village life, constitutes, in my opinion at least, sufficiently good evidence for considerable specialisation in the production and exchange of artefacts in the Neolithic Aegean.
They also reduced the salt in regular fries and encouraged children to choose the fruit bag and fruit drink options in Happy Meals. Unilever-owned Breyers has introduced its first light ice cream products for use as part of a low-carbohydrate diet. The company claims that the new products have half the fat and 40% fewer calories than regular ice cream. What's more, Sainsbury's has announced it will introduce its own coloured logos to signify healthier options from January, whereas Pizza Hut has introduced a low-fat version of its original pizza. All this shows that food retailers are now trying to improve, and that the food industry is turning towards low-fat, low-carbohydrate but more nutritious products. Furthermore, obesity is not exclusive to industrialised countries, and rates among those at the lowest income levels have increased the most. Overweight has replaced malnutrition as the most prevalent nutritional problem for the poor. At this point, it is also not the fault of the food industry. All in all, obesity is considered to be a disorder of industrialised countries. The food industry can be blamed for the increase in obesity in the UK, but its dramatic growth is probably also caused by changes in eating habits, decreased physical activity, and economic and social factors. In that case, the government and the whole of society should be more concerned about it. In fact, the important thing is that it is everybody's own choice whether to eat junk food or not. In addition, the conditions associated with obesity include heart disease, diabetes, cancer, osteoarthritis, gallstones, lipid disorders, high blood pressure, respiratory problems, depression and social discrimination. Luckily, there are some resources and foresight to design urban and suburban environments that encourage active lifestyles, such as sidewalks and walking paths.
Other areas that require more study are the impacts of genetics and psychological factors on the development of overweight and obesity.
Society's suppression of women has given Nora no chance to develop naturally, and this comes at the cost of finding her own personality (Pg.205). The play is full of images that suggest Torvald's obsession with superficial attractiveness, and Nora is introduced to the audience as a material possession, just another aesthetic creation of Torvald. In the original stage description the beautiful decorations and 'well-bound books' set up Torvald's aesthetic ideals before Nora enters carrying 'a number of parcels' (Pg.1), another symbol of materialism. The festive setting helps Ibsen to emphasise Nora's predicament. Her entrance is coupled with the entrance of the Christmas tree, and this dual entrance suggests a link between Nora and the tree; they both represent an interference with nature. The tree has been taken out of its natural environment, placed inside and then covered up. This alteration from its natural state to an artificial creation symbolises Nora's transformation under Torvald at the loss of her natural personality. Nora's realisation of her determined situation can be mapped throughout the play through her relationship with its festive backdrop. In Act I Nora's jolly demeanour blends in with the festal setting, but as the play develops her growing despair contrasts dramatically with it. By Act
Figure 1 illustrates this behaviour: as expected, the two curves diverge as the range size increases (from range 1-800 onwards). We have empirically tested two Binary Search algorithms and showed that, as expected from complexity theory, The result illustrates the idea that seemingly minor differences among algorithms can yield large differences in performance, ranging from the worst to the optimal solution. The results from This was expected, as the split points differ from run to run. While the first guess in A possible criticism of the experiment is that the targets were not kept constant across algorithms (see Appendix for a sample output). What if the targets for Because the 100 targets in each experimental batch were chosen randomly, that possibility seems unlikely. Conversely, if targets were equal across conditions, then a similar point could be raised: that the targets could be easier for either condition. Once again, randomisation would render that possibility unlikely. Therefore, both designs are equivalent.
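The linear-versus-logarithmic contrast described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the code actually used in the experiment: one guesser advances one candidate at a time, so its guess count grows linearly with the range size, while the other halves the interval at every guess, so its count grows logarithmically. The function names and the 1-800 range are illustrative choices.

```python
def linear_guesses(target: int, low: int, high: int) -> int:
    """Guess candidates one by one; guesses grow linearly with range size."""
    guesses = 0
    for candidate in range(low, high + 1):
        guesses += 1
        if candidate == target:
            return guesses
    raise ValueError("target outside range")


def binary_guesses(target: int, low: int, high: int) -> int:
    """Halve the interval at each guess; guesses grow logarithmically."""
    guesses = 0
    while low <= high:
        mid = (low + high) // 2
        guesses += 1
        if mid == target:
            return guesses
        if mid < target:
            low = mid + 1
        else:
            high = mid - 1
    raise ValueError("target outside range")
```

On the range 1-800 the worst case for the linear strategy is 800 guesses, while the halving strategy never needs more than 10, which matches the divergence of the two curves.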
Management contracts are long term and involve relatively low financial risk. They offer a further option to enter a market abroad without a very high equity investment on the part of the managing company (Contractor and Kundu 1998b, Kotler 1997, Lewis and Chambers 2000). The country risk can be evaluated as low. France, as one of the founding members of the EU, offers a continuous and stable political environment. Its level of economic development is high. However, regarding the growth of GDP it needs to be said that there has been only a slight rise in recent years. Nevertheless, the French market is strong and grew steadily from the early 1990s to the beginning of the 2000s (see Appendix 2). So both external forces provide good basic conditions for a company to enter the country. Furthermore, it needs to be mentioned that France can be seen as a country easy to enter for a UK business because of the agreement of The foreign business presence in France is high. It is among the countries with the highest FDI inflow worldwide. Regarding the hotel sector, southern France has gained increasing popularity among investors. The improved infrastructure concerning the transportation network and business facilities is mentioned as one of the major reasons for this (Bock and Foster 2004). This may signify a certain openness towards foreign enterprises and support the idea of expanding in France. In terms of cultural distance, France may also be an attractive country for a UK hotel firm to invest in, as there are several similarities between the two states which should facilitate the management of a hotel in France. In addition, both countries have experienced similar shifts in social, demographic and technological matters, which has resulted in changed consumer behaviour. Thus, a British organisation is familiar with this development and can act appropriately in order to take advantage of it.
Moreover, it has to be pointed out that there is a high level of education, which might be interesting in terms of recruiting staff as the company can rely on highly qualified national personnel familiar with country-specific practices. It may also keep costs lower, since the number of required home workers can
54, Kassin and Gudjonsson. Paragraph 11.5 provides that "no interviewer shall indicate, except to answer a direct question, what action will be taken by the police if the person being questioned answers questions, makes a statement or refuses to do either". This controls interrogation tactics, prohibiting the use of inducements or promises to elicit a confession. However, proscription does not altogether eliminate the practice. In (1991) 95 Cr. App. R 384; cited in Zander, Michael, 'The Police and Criminal Evidence Act 1984'. Code C does not preclude all tactics conducive to false confessions. Some tactics described by Dean adhere to PACE yet increase pressure on the defendant, which is inconsistent with confessions being entirely voluntary. The 'five Ws' make the interviewee "feel obliged to give an explanation". Requiring suspects to give an exhaustive account before confronting them with a bundle of inconsistencies, refuting their alibis and attacking their memory carries strong assertions of guilt which contribute to coerced-compliance. PACE does not adequately address the full extent of the research on psychological vulnerabilities and the effect of different interrogation techniques. Interview Question 12 (Appendix) p. 53, Kassin and Gudjonsson. Section 36 provides for the presence of a custody officer at designated police stations. A fundamental role is to ensure compliance with PACE.
This includes asking the detainee if they would like legal advice or to inform someone of their detention, and determining whether an appropriate adult or medical treatment is necessary. The custody officer is entrusted with the task of determining whether there is sufficient evidence for the suspect to be charged with the offence and detained. Additional evidence must corroborate any confession, representing a further obstacle against reliance on false confessions. He must also ensure that all
At low speeds the optimization discovers the classic inverted-pendulum walk, at high speeds it discovers a bouncing run, even without springs, and at intermediate speeds it finds a new pendular-running gait that includes walking and running as extreme cases. "One way of characterizing gaits is by the motions of the body. In these terms, walking seems well caricatured (Fig. 1) by the hip joint going from one circular arc to the next with push-off and heel-strike impulses in between." This analysis will examine assumptions that can be used to model the bipedal gait, analysis of the bipedal model, and how the model may be generalised to consider the quadrupedal gait. The assumptions are as follows. People have compact bodies; legs are massless. Gait choice is based on energy optimisation, with energy cost measured by muscle work. Treat the body as a point mass. Assume no dependence on elastic storage; assume no springs (tendons). The stance phase is when at most one foot is in contact with the ground at a time; the flight phase is when neither leg touches the ground. The left and right legs have identical force and length. A single step is defined by one stance phase and one flight phase. Thus running would consist of a short stance phase, whilst walking would have a flight phase of zero duration. A gait is characterized by the position and velocity of the body at the start of a stance phase relative to the stance foot, the step period, and F(t), the force along the leg. Resolving forces vertically and horizontally with the stance foot at the origin, Newton's law gives, for stance (i.e. when at least one foot is in contact with the ground), m(d²x/dt²) = F(t)x/l and m(d²y/dt²) = F(t)y/l - mg, where l = √(x² + y²) is the leg length. The initial conditions are the position and velocity of the body at the start of the stance phase. Now mechanical work is given by W = ∫[F(dl/dt)]⁺ dt, where [ ]⁺ is non-zero only for positive values, i.e. [P]⁺ = max(P, 0). This problem cannot be solved by elementary methods since F(t) is a free function of time. The resulting
This could potentially take many hundreds of years and could therefore create the false impression that humans were not the cause, as they co-existed for so long. For example, the Hawaiian Crow is only now close to extinction, around 1700 years after its discovery (Diamond 1989, p169). Another way of looking at faunal extinctions when related to humans is the anthropogenic 'overkill' model (Martin 1984, p357). This basically means human destruction of the native fauna by any of the previously discussed means, such as habitat destruction or hunting. It can occur gradually over thousands of years or even in a few hundred years or less (Martin 1984, p.357). Putting the time differences of extinctions aside, studies have shown that roughly 35% of terrestrial mammals on islands become extinct after the arrival of humans (excluding flying mammals), and this is a minimum estimate (Alcover 1998, p914). In addition, very few mega-faunal extinctions of the late Pleistocene seem to have occurred prior to human arrival (Martin 1984, p359). It is also interesting to look at the differences between the effects hunter-gatherer groups have on island fauna compared to agriculturalists, as hunter-gatherers would presumably present a case for a direct extinction, with the agriculturalists being an indirect cause. I will look at this by using examples of groups from two different islands with different subsistence strategies. When looking at island faunas we must remember that there is a very high possibility that their habitats would have been devoid of any other predators. They would have been extremely naïve. They would also be without the genetic need for the fast reproduction rates that prey animals usually have, which would make the extinction process that much quicker (Martin 1984). The island fauna that I will look at in this essay are island-adapted species.
On Cyprus I will look at the extinction of Phanourios minutus (pygmy hippopotamus) and Elephas minutus (pygmy elephant) and on Mallorca I will look at Myotragus balearicus (from here on referred to as Phanourios, Elephas and Myotragus). It often occurs that animals which manage the difficult task of successfully colonizing an island setting become either dwarfed or much larger. This is due to selective processes which adapt the species to life in an island setting of impoverished resources (Lax and Strasser 1992, p206). On the island of Cyprus in the eastern Mediterranean, recent excavations at a site called Akrotiri Aetokremnos have uncovered levels which are claimed to show human material culture in the same levels as disarticulated Phanourios and Elephas bones. Radiocarbon dates have placed the site at around 10,000BP which, if accurate, makes this the oldest site on Cyprus by 1500 years (Simmons 1991, p857). The claims for contemporary human culture and Phanourios bones are extremely interesting given that prior to this discovery it was thought that they were long extinct by 10,000BP (Simmons 1999, p153). The stratigraphy shows level 2 to be a cultural level with stone artifacts and 1% Phanourios bones - the 1% representing 3000 bones. Level 4 includes
This range coincides with the value given by another study, performed by members of the University of Calgary, which indicated the pressure required to pierce human skin as 3.183 MPa. By combining this puncture stress with a finite element model of the human skin, a computational skin puncture test can be performed, indicating whether microneedles will puncture the skin once this stress value is reached. J. Baxter, S. Mitragotri, "Jet-induced skin puncture and its impact on needle-free jet injections: Experimental studies and a predictive model". P. Aggarwal, C.R. Johnson, "Geometrical effects in mechanical characterising of microneedle for biomedical applications". To conclude, there has been a large amount of research performed on the subject of microneedles, and a large number of designs have been successfully fabricated. In nearly all of the previous studies, silicon micromachining and deep reactive ion etching techniques have been employed to produce the arrays. Although successful, these methods require multi-step processing and clean-room facilities, which makes the process expensive and time-intensive. The machining techniques used give very little control over the design and geometry of the microneedles. The geometric shape of a microneedle will play a significant part in determining its overall strength and performance, and the choice of needle shape is restricted by these processes. Silicon and metal microneedles have been shown to provide high-strength structures, but issues with biocompatibility tend to suggest that polymer microneedles may provide a cleaner and safer alternative. Hadgraft, J., Guy, R.H., Eds. The microneedle arrays were designed using the SolidWorks CAD package. A total of six designs were modelled, and are shown below in figure 2. The designs shown in figure 2 were chosen to offer a range of structures, to enable an optimal design to be established.
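As a rough illustration of how the quoted puncture stress can feed a design calculation, the sketch below converts the 3.183 MPa threshold into the axial force needed at the needle tip, assuming the stress acts over a flat circular contact at the tip. The 5 µm tip radius is a hypothetical value chosen for illustration; a finite element model, as described in the text, would replace this hand calculation for real geometries.

```python
import math


def puncture_force(sigma_pa: float, tip_radius_m: float) -> float:
    """Axial force at which the mean tip stress reaches the puncture stress.

    Assumes a flat circular contact of the given radius (a simplification;
    real microneedle tips are conical or bevelled).
    """
    tip_area = math.pi * tip_radius_m ** 2
    return sigma_pa * tip_area


# 3.183 MPa puncture stress (value quoted in the text);
# 5 um tip radius is a hypothetical figure for illustration only.
force = puncture_force(3.183e6, 5e-6)
```

Under these assumptions the required force comes out on the order of a quarter of a millinewton, which gives a feel for the loads a single needle tip must transmit without fracturing.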
These designs were modelled with a variety of geometry, namely: The typical range of microneedle specifications is shown below in table 4, along with some values taken from previous research. The chosen microneedle geometry coincides with the specifications from previous studies. There are a number of design parameters which can be controlled in order to influence the failure force, and hence the performance of the microneedle array. The tip radius and bore diameter were altered using the SolidWorks CAD package, as these parameters were believed to have the greatest influence on the fracture force. This assumption is consistent with previous studies on needle mechanics. Separate SolidWorks files were generated for each of the above designs, each with varying tip radius and bore diameter. Using the integrated Finite Element package within SolidWorks (COSMOSworks), these array
The findings show that cash flows map into returns with a statistically significantly higher coefficient than accruals after adjustment for future returns, while the two coefficients are statistically indistinguishable from each other when stock price is not adjusted. The topic of stock options is another frequently tested one. Carol A. Marquardt conducts an empirical analysis on Valuation models and their applicability have undergone several analyses. J. Francis, P. Olsson and D. R. Oswald in their study on ' This superiority of the AE value is mainly driven by the sufficiency of book value of equity as a measure of intrinsic value and the predictability of abnormal earnings. Accounting policy and financial standards must evolve. As the economy progresses and the financial environment is revolutionised, there is a need for greater regulation, a better conceptual framework and global accounting standards that integrate differences between economies, allowing more transparency, reliability and non-manipulation of accounting statements. Stephen Penman, in his article (2003) on the quality of financial statements, puts forth certain changes that need to be implemented. An analysis of net revenue, a reconciliation of gross revenue to net revenue and a breakdown of booked and deferred revenue is needed. Operating income, assets and liabilities should be clearly distinguished from those of a financial nature. Transitory items need to be clearly displayed on the income statement. Consolidation prevents proper understanding of how assets and liabilities are structured; this calls for more transparency and disaggregation. Clarity about accruals needs to be established, to give readers a better sense of 'hard' and 'soft' numbers and the likelihood that earnings will be sustained. Russell J. Lundholm in the article, ' This will identify firms that have abused their reporting discretion.
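The mechanics of a value-relevance regression of the kind discussed above can be illustrated with simulated data. The sketch below is purely illustrative: the coefficients (0.8 on cash flows, 0.3 on accruals), the sample size and the noise level are arbitrary assumptions, not figures from the studies cited; it simply shows how the two slopes are estimated and compared by ordinary least squares.

```python
import numpy as np

# Synthetic illustration only: the data are simulated, not drawn from the
# studies discussed.  Returns are generated with a larger loading on cash
# flows (0.8) than on accruals (0.3); both coefficients are arbitrary.
rng = np.random.default_rng(0)
n = 500
cash_flows = rng.normal(size=n)
accruals = rng.normal(size=n)
returns = 0.8 * cash_flows + 0.3 * accruals + rng.normal(scale=0.5, size=n)

# Ordinary least squares: regress returns on a constant, cash flows, accruals.
X = np.column_stack([np.ones(n), cash_flows, accruals])
coef, _, _, _ = np.linalg.lstsq(X, returns, rcond=None)
intercept, b_cash, b_accruals = coef
```

A larger recovered slope on cash flows than on accruals in such a regression would mirror the pattern the authors report after adjusting for future returns.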
With a system to report on the Other issues are: accounting applicability in the dotcom business, even though it is before the 'emerging issues task force' of the FASB, needs action taken. FASB should look into the explosive issues of accounting for pension costs, for post-retirement costs such as health and insurance plans, interperiod income tax allocation and translation of foreign currency financial statements. Apart from new standards, a supplemental quality analysis is required. In summary, the traditional way
It is likely to be described as a dangerous and miserable city due to the decline of the shipping industry on which Liverpool used to depend heavily. However, Liverpool has enough resources to sell the city with a positive image, so it just has to find an effective way to take advantage of them. The Beatles and the Albert Dock are fantastic and great opportunities. Here, there are new projects to create water parks instead of green public parks, followed by floating restaurants, cafes and hotels. In addition, Liverpool has taken other actions to be friendly to visitors by placing signs and maps on the street which show the direction to major attractions. It is also certain that, in order to achieve this, local communities give considerable support to the city, creating a pleasant atmosphere. For tourism to be sustainable, Liverpool should have long-term development plans. According to Liverpool Vision (2004), there is a project for Liverpool to have a large-scale conference facility. This is a great opportunity to present a new image of Liverpool as a popular conference destination, and it could help make Liverpool's tourism sustainable. In other words, this scheme ensures that Liverpool can generate further benefits in the long term, even after the European Capital of Culture year in 2008. As the UK's next European Capital of Culture, Liverpool will see its profile rise. It intends to boost the tourism industry by increasing the number of tourists and adding the value of tourism to Liverpool. Since Liverpool was once one of Britain's poorest cities, re-imaging is the first thing to implement. In undertaking the regeneration project, understanding the stakeholders within the tourism industry in Liverpool is a crucial factor that needs to be considered. The authorities have to bring all the stakeholders together and make them cooperate with each other so as to make the industry succeed.
Subsequently, they have to plan the redevelopment strategically and thoughtfully in each sector of the tourism industry in Liverpool and respond to problems immediately. With all this put into action, the tourism industry in Liverpool should have a great opportunity in the European Capital of Culture 2008.
Here I will draw a short outline of how community has been conceptualized and discussed, which shows that the elusiveness of the concept reflects a century-long intellectual struggle between the nostalgic calling for The sociological study of community can be traced back to the late nineteenth century, when many sociologists used the concept within dichotomies between pre-industrial and industrial, or rural and urban, societies. For many sociologists then, communities were associated with all the assumed good characteristics of rural, pre-industrial societies, and discussed as a contrastive alternative in their critique of alienating modern societies. Hence, the rediscovery of "traditional community" in the 19th century can be seen as a reaction to the anxiety about the social disintegration brought by industrialization and modernization. Abercrombie, Nal. (2000) Penguin: London, p. 64. The anxiety about the plausible decline and disintegration of traditional values and social orders has never ceased. It is often argued that urbanization, the penetration of technology and market rationality, the rise of individualism and many other social shifts in modern Western societies have created generations of lonely, atomized citizens, and caused the "loss of community, identity, and morality" (Nisbet, 1953), "the disappearing social capital" (Putnam, 2000), and the "corrosion of individual character" (Sennett, 1998). One intuitive response is to call for Sampson (1999), in reviewing the community issue, noted that "as we approach a new century and reflect on the wrenching social changes that have shaped our recent past, calls for a return to community values are everywhere" (p.241). On the other hand, some respond to these challenges by For instance, despite how community had been associated with the rural/urban dichotomies, Robert Park and his colleagues at the University of Chicago started to use the term in studying urban life from the 1920s.
This conceptual shift is best reflected by Webber's 'community without propinquity'", "label": 0 }, { "main_document": "wary all the time of new concepts and new philosophies that will compromise sovereignty in the name of humanitarian intervention, in the name of globalisation which is another form of trying to interfere in the domestic affairs of another country.\" Therefore much of the suspicion that is generated from the East will have to be answered by a fair and detailed framework. The ICISS Report provides a good basis for that since there is much scope for improvement in relation to the four 'responsibilities': The Responsibility to Protect, The Responsibility to Prevent, The Responsibility to React, The Responsibility to Rebuild. The last 'responsibility' draws attention to the fact that humanitarian intervention is a continuous process and the intervener cannot claim to have finished the task simply by halting the gross violations. Indeed their task extends towards \"[building] a durable peace, and promoting good governance and sustainable development.\" So is there a future for humanitarian intervention? This is succinctly answered by the ICISS Report; \"If we believe that all human beings are equally entitled to be protected from acts that shock the conscience of us all, then we must match rhetoric with reality, principle practice. We cannot be content with reports and declarations. We must be prepared to act. We won't be able to live with ourselves if we do not.\" International Commission on Intervention & State Sovereignty, 'The Responsibility to Protect: Report of the International Commission on Intervention and State Sovereignty', Koskenniemi, M., 'What is International Law for?' in Evans, Malcolm (ed.), Brunne, Jutta & Toope, Stephen J. 'The Use of Force: International Law after Iraq', Henkin, Louis, 'Kosovo and the Law of Humanitarian Intervention', Acharya, Amitav, 'Redefining the Dilemmas of Humanitarian Intervention', Op. cit. note 13, p.39. 
Ibid., p.75. The interventions in Kosovo and Iraq paint a bleak image of humanitarian intervention. This is further reinforced by the weakness of the UN Security Council, in terms of the veto power, and the interpretation, or rather misconstruction, of the provisions of the UN Charter. Arguments propounded by prominent jurists based on customary law seem tenuous at best because of the inability to prove the existence of But in light of all these failings the case for humanitarian intervention is not weakened, as the ICISS report shows. The fact that consensus among the international community has been difficult to attain has not led to the demise of a right of humanitarian intervention. Instead, strong emerging views calling for a carefully planned framework for this right of humanitarian intervention have continuously arisen. It therefore seems likely that this debate will persist until we find ways in which the law can be interpreted creatively to match rhetoric with reality and principle with practice. We have a 'responsibility to protect'.
Many commentators have pointed out that organizational change is a strategic imperative, which means that major or radical changes to organizational structure and culture, job content and the careers of the workforce are required in order to deal with predictable or unpredictable pressures arising in the wider social, economic, political and technological environment (Buchanan, 2004). So when Kolind (CEO of Oticon) realized that cost-cutting and productivity-enhancing strategies would not be enough for Oticon to win against other strong competitors, he established the famous "spaghetti organization", which could also be called a project-based organization or dis-organization: knowledge-based, networked, with no formal hierarchical structure, no traditional management positions, and no formal programme for grooming high-fliers. Through the 1980s, Oticon had continued to use a functional structure. At that time, the organization had all the strengths and weaknesses of traditional hierarchical organizations, including formal processes, a conservative culture, employee loyalty and conflict-avoiding behaviours. Although by then, with its cost-cutting strategy, the company had achieved the position of world hearing-aid market leader, it still faced increasing sales, financial and organizational difficulties. In 1990-1991 Oticon underwent extensive organizational changes which, among other things, introduced a project-oriented organization structure, an open office with mobile workstations, and a new paperless information system (Larsen, 2002). This de-layered project organization can be analyzed in six aspects: Reduced number of hierarchical levels: In the functional organization period, Oticon had a managing director, middle managers for every department, and other staff in different departments with different responsibilities and subordinates; the job structure was very complex. In the "spaghetti organization", the situation changed to a large extent.
There are only three levels in the company: project sponsors (the former management team); project leaders; and project co-workers. Middle managers disappeared, becoming senior specialists who provide professional expertise in functional areas. Improved communication: In the past, each project was divided into several parts and distributed at the outset; each person was charged only with a specific task, and there was little communication across departments. Only the managing director sponsored and adjusted the project process. But in the dis-organization, staff with different tasks work together, learning to communicate with each other and to respect each other's work, which makes projects time-efficient and of high quality. It is worth mentioning that in Oticon, traditional paper-based communication has been replaced by electronic scanners and other information technology systems. The whole organization now focuses on face-to-face communication, such as oral discussion and negotiation, to ensure projects are completed before the deadline. Shortens
Target 1: Ensure that the leadership clearly understands the importance of change, and obtain its commitment. Target 2: Ensure that every worker clearly understands the importance of change. Target 3: Each machine-shop worker must reach a level of skill and knowledge that meets the requirements of their tasks. Once the education project is finished, the transformation project will be implemented. This is the main project and aims to achieve an efficient and effective change from a process layout to a cell layout. Target 1: Build a work cell as a test. Target 2: Transform the whole plant into a cell organization completely. The six sigma project aims to continually improve process quality and reduce defects, and can be applied once the transformation project is finished. Target 1: Set up and train the project team. Target 2: Reduce defects to the four-sigma level, which means 6210 defects per million units. The implementation plan will start in Dec. 2005 and complete in Dec. 2008. A simple timescale shows the schedule of each project.
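The four-sigma figure of 6210 defects per million can be checked against the usual Six Sigma convention, which applies a 1.5σ long-term process shift so that the defect rate at k sigma is the normal tail probability beyond (k − 1.5)σ. A minimal sketch, assuming that convention (the function name is ours, not from the plan):

```python
from statistics import NormalDist

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities at a given sigma level,
    assuming the conventional 1.5-sigma long-term process shift."""
    return 1_000_000 * NormalDist().cdf(shift - sigma_level)

# Four sigma gives roughly 6210 defects per million units, matching
# the target quoted above; six sigma gives the familiar 3.4 per million.
print(round(dpmo(4)))  # → 6210
```

The same function confirms why the follow-on six-sigma ambition is so demanding: moving from four to six sigma cuts defects by three orders of magnitude.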
In many countries, such as Sweden, Norway, Denmark and the United Kingdom, gays and lesbians have been given the right to form civil unions and domestic partnerships with all the rights and privileges allowed to married couples, including the right to inheritance, pension benefits etc., while only Belgium, the Netherlands and Canada have legalised same-sex marriage in recent years after long struggles. The campaign for non-discrimination and equal rights for gays and lesbians shot into the public light after the infamous Stonewall riots in New York in June 1969, when the police raided a gay bar, sparking days of rioting. From the late nineteenth century to the mid-20 Hence the Stonewall riots marked a watershed in the history of the gay rights movement, since it was the first time gays and lesbians came out openly in public to express their sexualities and to fight against discriminatory legal and social practises. According to Jeffrey Weeks, "Since the attitudes toward homosexual behaviour are culturally specific, the concept of homosexuality should be historicized". Similarly, Jonathan Katz notes, "The term 'homosexual' was invented only in 1869 while 'heterosexual' can be traced back to 1892", since in England, until 1885, the only law dealing directly with homosexuality was that on buggery, i.e. sodomy, which failed to distinguish between man and man, man and animal, man and woman etc. Jeffrey Weeks, "Discourse, Desire and Sexual Deviance: Some Problems in a History of Homosexuality" in Richard Parker and Peter Aggleton (eds.); Jonathan Katz, The Invention of Heterosexuality (1995) Dutton Group, at pp. 10-12; Jeffrey Weeks, "Discourse, Desire and Sexual Deviance: Some Problems in a History of Homosexuality" in Richard Parker and Peter Aggleton (eds.). The distinction between 'heterosexual' and 'homosexual' behaviour was a 19 Michel Foucault: "What psychiatry will call homosexuality is a specific creation of the Age of Reason."
Further, in "At the juncture of the body and the population, sex became a crucial target of a power organised around the management of life rather than the menace of death". Didier Eribon, "Michel Foucault's Histories of Sexuality" (2001) 7:1; Ibid at p.45. It was only in the 1970s and 1980s, with the rise of radical left-feminist movements along with the student uprisings all over the world, that gay and lesbian rights became an important issue of political mobilisation. However, most social movements
The errors in are more likely to be of the order This would bring the value for Air using f The values calculated using the value of f It was mentioned earlier in Section 3 that there were experimental problems in determining the value f It is likely that there is a large error in this value. This may be due to the piston not being placed correctly in the tube, which would mean it was subjected to friction from the side of the tube, changing the resonant frequency. Under these circumstances air would also be able to pass from one side of the piston to the other, thus not satisfying the conditions for the experimental theory to be applied. In this case the force due to the displacement of the gas would be significantly reduced. A possible explanation for the discrepancy in the values of for Nitrogen and Argon is therefore that the calculations for these gases are based on the value f It has already been suggested that there is a large error in this value. However, in an ideal situation the resonant frequency of the actual gas (Argon or Nitrogen), with no bungs in the tube, would be used, as opposed to that of Air. This is one of the limitations of this experimental setup; it is impossible to find the resonant frequency of the piston for Argon and Nitrogen when a bung is removed from the tube. To analyze the discrepancy in f This would be applicable since these values use the value of f This ratio gives altered values of = 1.339 for Nitrogen and = 1.663 for Argon. These values are much closer to those predicted by theory and those taken as
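For comparison, the theoretical values that the corrected results of 1.339 and 1.663 approach can be reproduced from kinetic theory. Assuming the quantity under discussion is the ratio of specific heats and, as in the earlier analysis, that the diatomic gases Air and Nitrogen have n = 5 active degrees of freedom while monatomic Argon has n = 3, the prediction is (n + 2)/n — a sketch, not part of the original report:

```python
# Theoretical ratio of specific heats gamma = (n + 2) / n for a gas with
# n active degrees of freedom (standard kinetic-theory result; the values
# n = 5 for diatomic N2/air and n = 3 for monatomic Ar are assumed here).
def gamma(n: int) -> float:
    return (n + 2) / n

print(f"Air/Nitrogen (n=5): gamma = {gamma(5):.3f}")  # 1.400
print(f"Argon        (n=3): gamma = {gamma(3):.3f}")  # 1.667
```

The corrected experimental values of 1.339 for Nitrogen and 1.663 for Argon sit within a few percent of these predictions, consistent with the claim above.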
Pneumatic actuators are relatively inexpensive, with low installation costs. They are also safer to use than electrical actuators, since electrical components heat up easily, causing further damage. Pneumatic actuators can perform linear as well as rotary motion easily, and hence are suitable for complex robotic designs, while electric actuators are basically all rotary and complicated mechanisms are needed to convert rotation into other forms of motion. Pneumatic actuators also offer high speed, while electric actuators have poor torque-speed characteristics at low speed. Pneumatic actuators are also suitable for lab work. Pneumatic circuits are very easy to fault-find and can be operated easily, unlike electric ones. On the other hand, electricity is easily routed to the actuators, as cables are simpler than pipes. Leakage from pipelines carrying high-pressure air can be a health and safety concern. Electric actuators are fast and give accurate results, while pneumatic actuators have low accuracy and control. Pneumatic actuators cause noise pollution, and it is difficult to control their speed compared to electric actuators. Also, electricity is much more readily available than compressed air in some industries. For some applications electric actuators are preferred to pneumatic ones, for instance where there are large distances between actuated valves; in this case electric cabling is often less expensive than installing a compressed air system. Thus pneumatic and electric actuators have their own advantages and, if used together, can help in making robotic mechanisms. The trainer "Pneumate" uses air due to its ease of use and safety. Another application where air can be preferable to electrical solenoids is where flammable materials are handled, as solenoids and other electronic components can heat up or produce sparks, which can lead to fire. When using pneumatic systems, care must be taken over the type of air that is used.
Moisture can cause rusting and scaling in pipelines, producing particles that are deposited in downstream components. This dirt, alone or in combination with water, oil and other contaminants, can cause valves to stick, instruments to clog and air-driven tools to malfunction. Aside from these obvious problems, moisture also reduces the power and efficiency of air motors and tools. Oil, dust, dirt and water, alone or in combination, are the enemies that attack any compressed air system, plugging the orifices of sensitive pneumatic instruments, wearing out seals, eroding system components, reducing the efficiency of air-operated tools, damaging finished products and otherwise contributing to product rejects, lost production hours and rising maintenance costs. Although the best defence against oil and dirt is effective filtration, this fact is often overlooked until problems arise. The degree of drying required will be determined by an analysis of the application and environmental conditions. Various applications require different levels of air purity.
Evidence for the type of cargo has also been suggested through pictorial depictions on a tombstone found in Mainz. Barrels, presumably for wine from the Germanic regions, and sacks for grain have been found in the iconography (Ellmers 1978: 12). Compared to barrels and sacks, however, it is mostly amphorae that survive in wrecks, and so the evidence could be considered biased (Parker 1980: 50). The quantity of goods that could be transported is another obvious question. There has been a range in the size of ships either found wrecked or in historic descriptions. The majority, it is estimated, could carry between one hundred and one hundred and fifty tons. The largest, the Isis, is said to have held up to one thousand two hundred tons (Greene 1992: 24-25); it has been termed a "super-freighter" (White 1984: 212). These capacity estimates are made possible through improvements in underwater archaeology (White 1984: 145). It is not only capacity that can be estimated through wreckages: ancient trade routes can also be established. Sea trade appears to have flourished under the reign of Augustus as demand for foreign goods grew and land transportation was expensive and dangerous (Parker 1980: 54). The study and charting of the dates of wrecks can help establish how and when trade grew in size and destination (Parker 1980: 50). Many wrecks lie within the Mediterranean (Greene 1992: 18), with many on traditional Eastern routes (Throckmorton 1972: 75). There are many well-known case studies that illustrate all of the aforementioned aspects. The shipwreck at Madrague de Giens in Southern France was excavated by Tchernia. Much of the cargo had survived, and the bulk was found to be Dressel 1B amphorae (Greene 1992: 26), known for carrying Italian wine (Potter 1987: 155). There were also fine and coarse ceramics (Greene 1992: 26) as well as grapes (Potter 1987: 157). The vessel and cargo remained largely intact, therefore estimates of capacity were possible (Greene 1992: 25).
A trade route was established as many of the wine vessels bore the stamp P. VEVEIVS PAPVS. His kiln site is known to have been south of Rome in Terracina (Parker 1980: 53-54). A nearby port is therefore the most likely place that the ship last docked before setting out for", "label": 1 }, { "main_document": "contribute a high proportion of finished lambs to the sector Lowland sheep farmers tend to use crossbreed ewes and terminal sires to produce lamb for slaughtering. At Sheepdorve the early lambing flock are usually 'a hardy cross of Shetland and Poll Dorset ewes, naturally lamb at the beginning of the year' These breeds are used because the natural lambing means that no artificial stimulation is required to encourage the ewes into season. This therefore is an important aspect in terms of animal welfare. Shetland ewes tend to be fairly small and therefore the cross breeding helps to keep the sheep size relatively small. This therefore allows for a higher stocking density. The farm soil is relatively thin and not ideally suited to organic production. However, this breed of sheep is suited to the thin soil and also due to the nature of the breed they are better adapted to looking after themselves. Crossing Shetlands with Poll Dorsets is a good combination because Poll Dorset 'ewes are of medium size and are naturally prolific Sheepdrove also cross the Shetland breed with Texels and Hampshire Downs. The farm is currently phasing out the Hampshire Down Sires, as this breed is not ideally suited to the soil type at the farm. The Texels are a good breed for long keeping lambs, and so this breed is often used in the later lambing flock. Sheepdorve uses a variety of different breeds of sheep because different breeds finish at different periods throughout the year. Sheepdrove therefore use a variety of sires so that the lambs are finishing at different periods. 
This is therefore an important aspect for Sheepdrove, as it is aiming to finish lambs throughout the year and needs to be able to kill 50 lambs per week. The indoor lambing flock at Sheepdrove is housed 6 weeks prior to the start date for lambing. This allows the ewes to adjust to the housing and the feeding techniques in the housing system. The early housing means that the ewes are not stressed later in pregnancy, which is when they need the highest energy intake. Housing later in the pregnancy can often lead to pregnancy toxaemia. Sheepdrove only houses the early lambing flock, because of the disease risk associated with the system, and it is usually the older stock that comes inside. The indoor flock is fed for 1 month before being housed. The ewes are fed oats, and feeding a month before housing allows them to get used to the oats. Last year Sheepdrove fed a mixture of beans and barley, but this feed was making the lambs too big and therefore the ewes encountered greater problems during birth. Later in the pregnancy, and also during the early lactation period, the ewes require a high protein content in their feed. This is because 'the requirements of the ewes and lambs are high'. The ewes are also fed grass silage as a bulk feed. Grass silage should be 'wilted to a dry matter of 25-28%'. This is because the intake by the ewes
Guirdham (1990) defines the gaining of skills and attributes needed by managers as a combination of increased knowledge, which will result in more control over our behaviour with others. Consequently, higher levels of confidence will be attained, resulting in more eagerness to test new forms of interpersonal relations. Skills will be the outcome of such enthusiasm and, if correctly pondered and reflected upon, will contribute to higher understanding. Needless to say, determination will be needed all along the way to stay motivated and complete the cycle, resulting in continuous improvement and enhancement of the skills and attributes obtained. In order to meet customer demands, it is essential for the hotel to define what it is the customer wants and expects to receive (Kotler et al, 2003). Customer expectations vary depending on the market segment and on the origin of the guests. Hotels should therefore identify their guests' expectations and anticipate them when delivering the service. The Meridien Dona Filipa, Portugal, attracted many different segments, the most common being couples who were very loyal to the hotel, young families with small children, teenagers on summer vacation, and many golfers; needless to say, each had different expectations, and it was during the summer, when all these segments came together, that it became complicated to satisfy everyone. What the hotel tried to do was to meet the most apparent expectations and then try to satisfy guests as their requests came up. Cleanliness was extremely important, hence the lobby and entrance were spotless; the facilities provided were always in excellent condition, and if not, their maintenance would become a priority. As referred to above, the hotel had many loyal customers who were retired couples who would come every year to spend a large amount of time at the hotel.
The relationship between staff and these guests was very personal, I remember that when they arrived, they greeted staff just as friends, and the front-office manager would always spend time talking to them and sometimes even make herself responsible for their check-in. Furthermore, the general manager had only been at the hotel for a few months and he demanded to be called whenever these guests arrived. It was also interesting to notice that when something went wrong, these loyal guests would accept it better and this is confirmed by Johnston & Clarke (2001) when stating that the degree to which guests are tolerant varies with their level of commitment to the hotel and service. Still with regards", "label": 0 }, { "main_document": "their rights, the implication being that it is largely these three elements that keep women in subordinate positions. Merry describes FGM (or female genital cutting as she names it) as \"the poster child for this understanding of culture\" See Han (2002), and Nussbaum (2002), 120-121. Art. 2, CEDAW. Merry (2006), 13. Ibid. Ibid, 27-28. Postcolonial feminism adds an additional dimension to the controversy surrounding FGM by raising questions about the process of legalization of human rights, the assumptions made about gender and culture and the tendency to universalize women's experiences and universalize solutions. Toubia has stated how FGM is portrayed \"as irrefutable evidence of the barbarism and vulgarity of underdeveloped countries\" Moreover, Mugo argues that FGM has been uprooted from its context with the effect of reducing African women's struggle to effectively one issue. Kapur (2006), 103. Toubia, as qtd in Abusharaf (2001), 101. Mugo (1996-1997), 479. The debate about FGM's harmfulness is further complicated when reading narratives from women who support the practice. 
Obiora points out that FGM can be considered a "form of self-assertion, or even the ultimate expression of their personhood." Some women also argue that by decreasing a woman's sexual needs she will actually become more powerful and less at the mercy of her husband. Indeed, researchers on the IRRRAG project admitted the presence of an "unavoidable tension" when doing research on the cross-cultural meanings of reproductive rights, asking themselves, "what if women's voices tell us things we would rather not hear, or simply cannot hear - because they express values and priorities that are different from those we espouse?" Obiora (1996-1997), 329. Seif El Dawla (1999), 85 and Abusharaf (2001), 130. Abusharaf (2001), 122-123; 128. Petchesky (1998), 20-21. Meyers, as qtd in Abusharaf (2001), 1. Trueblood (1999-2000), 453. See also Tahzib-Lie (2000), who points out that this 'informed choice' endangers not only the participant's health but that of others too, and hence constitutes a public health threat that should be curtailed by the state, 976-977. The above debates are unlikely to subside but will continue to provoke discussion among human rights scholars and activists. What is significant for this essay is to contemplate how these debates feed into the discussions that take place in UN bodies when states present their reports and experts question state representatives, provide feedback and comment, and ask for further elaboration. How much leniency, if any, do they give to arguments of 'culture'?
There are several UN conventions. State reporting mechanisms are one of the means used to monitor state compliance with treaty obligations; they also allow experts sitting on the various treaty body committees to engage in a dialogue with states to determine what progress has or has not been made, what measures have been taken to implement the treaties, and what obstacles exist and their possible remedies, in addition to allowing the adoption of concluding observations and comments on state reports. This section will look at lists of issues, replies to issues and concluding observations of the Committee on Civil and Political Rights (CCPR), the Committee on Economic, Social and Cultural Rights (CESCR), the
Pricing is on a cost-per-click basis, so we only pay when visitors click on our ad, and it's easy to control costs. E-mail lists can be promoted through an affiliate programme. One can be easily set up at zero cost (see We then pay our affiliates a small commission per subscriber, or credit them with advertising impressions in our newsletter (see appendix xiii). Research shows that most traffic-driving initiatives generate less than a 5 percent conversion rate (see appendix xiv). The key to a profitable return on investment rests on the ability to gather accurate information about site visitors and get permission to contact them. By contacting our existing database. Our e-mailing campaigns will start with our Finance Asia and Finance China databases. We will also contact the more general advertisers on our site, such as HSBC. We would encourage contacts to forward our e-mail to a friend who might be interested in the new content. The most popular house lists generally receive open rates of less than 40 percent. The average click-through rate hovers between 6 percent and 10 percent (see appendix xv). Regardless of verification, a certain percentage of prospects will still bounce, and we should anticipate an average 10 percent bounce rate. By using premium content to obtain reader e-mail addresses. Premium content is limited-distribution, high-value information sent to prospects in exchange for their e-mail addresses and permission to contact them again in the future. Premium content includes articles that focus on particular issues or in-depth treatment of specialised topics. It can also consist of commentary on current economic and social trends. By encouraging sign-up from visitors to the site. We need to give our e-mail recipients a reason to visit our site, and eventually to link to or advertise with us. We should arouse their curiosity by offering them a valuable information premium they can download when they visit. 
To build up our lists, visitors will be invited to fill out a", "label": 1 }, { "main_document": "This report will analyse and evaluate the operation with emphasis placed on Service Quality. Recommendations for improvements and a critique of techniques used will be included as appropriate. 'The Oriental Star' is an 'all you can eat' buffet restaurant that offers more than 40 Cantonese and Indian dishes. It is independently owned and managed by Mark Harwood, who describes the service: \"We offer an excellent service of authentic Chinese food in a buffet style.\" The restaurant offers a lunchtime and evening buffet (Monday - Saturday) and an all-day buffet on Sunday. Mark Harwood, December 2005 The Service Concept is a core task in managing service operations. Edvardsson (1998) suggests the service concept is a combination of customer outcome, customer process and prerequisites for the service. This is information regarding five aspects of the organisation. The Oriental Star service concept is shown in figure 1 below. Johnston, R., Clark, G. (2005), Edvardsson, B. (1998), Service Quality Improvement, Johnston, R., Clark, G. (2005), page 40 Three methods were used to collect primary data. Further details are in the appendices. Company information was also obtained from restaurant visits and personal experiences. A questionnaire was delivered to the manager of the restaurant to capture organisational information (e.g. turnover, most popular dish) and their perception of the quality of the service delivered. The survey was conducted using a combination of open-ended questions and Likert-scale items for more quantitative analysis. Twenty-five customers were asked their opinion on seven aspects of the restaurant using a scale ranging from very poor to very good. They were also asked how important they thought the factors were towards the service quality offered by Oriental Star. 
Information was also collected about how many times they visited the restaurant, whether they had made a complaint, and some basic demographic details. For the avoidance of doubt, service quality was defined for participants as \"the overall quality of the service offered by Oriental Star\". An STA combines numerous factors to provide a \"powerful tool to assess and improve the customers experience of a service process\". It combines five key stages to assess the customers' experience as they flow through the service process. For example, a mystery shopper may walk through the process and assess each transaction, describing it as delighting (+), satisfactory (0) or unsatisfactory (-). A brief reasoning as to why this assessment is reached is provided on the STA. Five customers were asked to fill out the forms, and the findings and comments will be included in the analysis. The STA form used can be seen in appendix 4. Johnston, R., Clark, G. (2005), page 202 In most businesses, but especially in service industries, the customer is king. Customers are \"the final judge as to how well the quality of the service matches up to requirements, and, by their continued support, determine its long-term success\". The business should first aim to meet customer expectations, and eventually to exceed them. Along the way, careful management of expectations is required to ensure that the gap between expectations and delivery is minimised.
The markets for these products include those of us who prefer to bake our own fresh bread for quality reasons but have neither the time nor the inclination to knead it ourselves; those of us who consume a lot of bread, for whom a bread-maker would provide economical and readily available bread; customers who like to bake a variety of breads; and perhaps even gadget enthusiasts. At the top end there are the Panasonic, Kenwood and Large Breville, all of which are priced at over At the bottom end there are the Cookworks, Morphy Richards and the Small Breville, which are priced at under There is therefore a clear inconsistency in prices. This is perhaps due to the different objectives of the manufacturers: for the inexpensive group of bread-makers the manufacturer aims to maximise sales, hence the low price, whereas for those bread-makers at the top end the aim is perhaps to retain the manufacturer's reputation for quality associated with the brand name (i.e. Panasonic, Kenwood). There are possibly two markets that may suit the different groups of bread-makers in this comparison. For the inexpensive appliances the suitable market might be the \"gadget\" enthusiasts, those customers who do not necessarily require a bread-maker for practical reasons but more for collection purposes. This is perhaps the reason why one of the cheap machines, the Morphy Richards, has an appealing aesthetic design. There are also many products on the market today that serve well only as gifts, for instance a husband's gift to his wife. Perhaps, then, the inexpensive bread-makers are also aimed at such markets. On the other hand, the more expensive bread-makers would suit customers who require their bread-makers for more practical purposes. 
Therefore functional performance and special features such as the Although the quality of bread produced by bread-makers is generally satisfactory, there are three disadvantages relating to performance and reliability which are common to a number of the bread-makers. The first of these is the problem of unmixed ingredients, especially at the corners, since the mixing paddle does not reach all the way around the base of the pan, which results in poorly mixed ingredients. Secondly, although the resulting bread was savoury in general, it was observed that on some machines the final texture looked wet and underdone, most notably on the Kenwood rapid process. This is perhaps because these machines rush the process of bread-making. Thirdly, on most machines the positioning of the bread-maker's mixing paddle beneath the ingredients/dough means that once the bread is made and removed, a hole is left in the middle of the loaf. This causes an obvious problem with the shape of some slices when the bread is cut. Finally, with almost all the bread-makers that were evaluated, more so the Kenwood (rapid) and Cookworks, the resulting loaf was darker
I found the experience challenging because Death and Dying was taught in five hours over two weeks, and I felt that it should have been accorded more time and importance. I feel insufficiently prepared to communicate with the life-threatened or their families and have tried to overcome this inadequacy by exploring the teaching of thanatology to medical students. Sprang, G. and McNeil, J. (1995). The many faces of bereavement: the nature of natural, traumatic, and stigmatized grief. New York: Brunner/Mazel Inc. The limited emphasis placed on thanatology at medical schools in Britain has been interpreted by many theorists as symptomatic of the wider population's aversion to discussing matters relating to death. Death has, over the past 100 years, become increasingly unmentionable in western societies. In Muslim communities in the United Kingdom the extended family provides a great deal of support for the bereaved. Because of the physical proximity of relatives and the custom of talking through the experience, a feeling of loneliness and isolation is less common. Aries, P. (1983) London: Penguin. Kamerman, J. (1988) Death in the midst of life: social and cultural influences on death, grief and mourning. Englewood Cliffs, NJ: Prentice Hall. Gorer, G. (1965) London: Cresset Press. Scambler, G. Dying, death and bereavement; in Sociology as applied to medicine; edited by Scambler. 4th ed. Chapter 8. UK: Harcourt Publishers, (1999). Bartrop, RW et al (1977) Depressed lymphocyte function after bereavement. Schleifer, SG et al (1985). Depression and immunity: lymphocyte function in ambulatory depressed patients, hospitalised schizophrenic patients and patients hospitalised for herniorrhaphy. Qureshi, M J H (1995). Muslim customs surrounding death. Ironically for the teaching of thanatology at medical school, the invisibility of death in the twentieth and twenty-first centuries is substantially due to the fact that since the 1930s and 1940s death has become increasingly hospitalised. 
In 1990, 23% of deaths occurred in people's own homes and 72% in institutions: 54% in hospitals, 4% in hospices and 14% in nursing or residential homes. It is perceived as the ultimate failure. Often, junior colleagues come to the practice of medicine without having been exposed to death in their own personal lives, and this makes the subject even more foreign and difficult. In addition, they must take into account the views of the patient and the patient's family in often highly charged situations. This leads to the provision
These great victories gave many Greeks a sense of national pride and seemed to show the superiority of the Greek armies. The victories also helped to form a feeling of superiority that affected the Greeks politically and ideologically. The Persian invasion was key in bringing the Greek city states together and created a national identity which had never existed before; as Norman Davies writes, \"the Persian wars gave a permanent sense of identity to the Greeks who escaped Persian domination.\" ( This was solidified by the formation of the Delian League, a pan-Hellenic alliance that was led by Athens and was formed to help defeat the future threat of Persian invasion. All cities in Greece were forced to join this coalition and were expected to give money or ships to aid the war effort. The Delian League unified Greece and created a true sense of a Greek nation. However, the Athenians started to use the Delian League to their own advantage and spent much of the money intended for defeating the Persians on elaborate building projects within Athens. The xenophobia of the Greeks was in some respects necessary. The idea of a national identity meant that, if Greece was attacked, the city states were more likely to unite to fight a common enemy, since the threat of Persian invasion remained throughout the 5th century. However, towards the end of the 5th century the Peloponnesian War, between Sparta and Athens, began in 431 B.C. and ended with the Spartan victory in 404 B.C. Despite the Greece-wide hatred of non-Greeks, the Spartans accepted the aid of the Persians in the war, which would have been considered by many as impossible. As can be seen, the idea of Greek and non-Greek stemmed from the Persian invasion of Greece. Although
Svevo was a businessman who wanted to improve his English, and Joyce was his teacher. In this way began a lifelong relationship, based on mutual respect and sympathy, cut short only by Svevo's premature death in a car accident. On one hand, this reciprocal friendship inspired Joyce, who based one of the characters of Ulysses on his friend; on the other hand, thanks to Joyce, Svevo's novels were successfully published in France before they were in Italy. Svevo has been considered the writer who introduced Freudian psychoanalytic theory into Italian literature, and his work has often been studied in the light of Joyce's model. My intention is first to define the time and the place where this friendship was born, and secondly to consider in what way Joyce helped Svevo. James Joyce and Italo Svevo Italo Svevo's real name was Ettore Schmitz. Svevo was a businessman, manager of a company owned by his wife's family that produced anti-corrosive paint for ships, with factories in Trieste, Murano, and London. From 1901, Svevo travelled regularly to England for business, but also for pleasure. It was his wife Livia who suggested he take private English lessons from Professor Joyce, an Irish teacher well known as \"Professor In Trieste, an Italian free port city at that time under Austrian control, Joyce was not only a teacher but also a journalist for the local newspaper; there he wrote several of his novels for \" There was a difference of about twenty years between the two men, but the friendship that began was not only personal but also artistic. Between 1905 and 1915, the only artistic contact the two writers had was each other's company. Joyce gave Svevo not only English lessons and an introduction to contemporary English literature, but also deep and unconditional encouragement to keep writing, which his older friend highly appreciated. 
Svevo and Joyce shared the idea that a writer needs to be satisfied by his work itself, not by its success or its audience. Nevertheless, both fought hard to be appreciated by the public. \"What is surprising is that Svevo never realized that literary 'success' is produced in literary circles; that a writer had to go to Rome or Florence to seek success by cultivating celebrated or influential men of letters.\" At the same time, his nature demanded that he always be approved of and encouraged by others, and this is why, after the ruinous reviews of his second novel, he decided to give up his literary aspirations. For Svevo, Joyce was a sort of alter ego, somebody who had the courage to do what he could not. Their friendship was a sort of father-son relationship, where the part of the father was played by Joyce. Gatt-Rutter, 1988:114 Only after many years did Svevo let Joyce read his two novels; Svevo was so amazed that that day he could not leave Joyce, and so
For the purpose of this essay the definitions used by Dill and Oxenbridge in their ACAS research paper (2003:12) will be taken into consideration: These definitions would suggest a form of continuum which sees the provision of information as the first and most basic level of employer-employee communication, followed by involvement in consultation, where management still has full prerogative over decision-making, finally reaching negotiation, which allows employees' agendas to be introduced. Sisson (2002:16) supports the idea of a kind of continuum but reinforces that Having clarified terms and established the importance and relevance of the debate around the new regulations on information and consultation rights for employees, this essay will attempt to discuss the form of the new legal rights, provide a summary of the perspectives of all the actors affected by them - employers, employees and trade unions - and assess the likely significance and impact of the new legislation. By looking into the possibilities incorporated in them, it will try to demonstrate that the effect of the new rules, although eagerly anticipated and extolled by official institutions, may not yet be that quick or dramatic for employment relations in the UK. Although the regulations on information and consultation rights are to be transposed by means of the EU Directive into all EU member states, and the process is currently at various stages of progress, this text will concentrate only on the effect they have in the UK, due to limitations of space and relevance to the topic of the essay. 
The Information and Consultation of Employees (ICE) Regulations entered into force in the UK on 6 April 2005, transposing the European Information and Consultation Directive in a framework agreed jointly by the Department of Trade and Industry (DTI), the Trades Union Congress (TUC) and the Confederation of British Industry (CBI), this multi-sided approach representing, as Hall has rightly pointed out, 'a notable departure in the UK context' (2005:104). Here several features of the regulations' form will be commented upon. The implementation of the ICE Regulations in the UK has been divided into three stages - they apply to undertakings with at least 150 employees from April 2005, to those with at least 100 -
(ii) That the relationships between the variables are linear. (iii) That the errors e are independent of one another. (iv) That the errors e have constant variance. These assumptions will be examined and analysed in the case of the cystic fibrosis data. While the first two assumptions can be analysed in light of what we have already seen of the data, the last two will be examined, with the help of residual plots, after the model has been formulated. A multiple regression taking account of all eleven variables in the data set produced the following results, along with the standard errors of the regression coefficients and their corresponding significance levels. At first sight it appears as though this is a good model for PEmax due to the relatively high R square value. However, regression using all of the available variables is not a good choice of model. Almost none of the variables have coefficients significantly distinguishable from zero, and the R square value has been inflated by the effects of multicollinearity. As has already been seen, some of the variables correlate extremely highly with other variables in the data set - notably age, height, weight and BMI. Since there are strong linear relationships between these variables, it becomes extremely difficult to get regression coefficient estimates for these variables, and using several collinear variables in a regression is inadvisable [5, pg. 107]. Multicollinearity can be examined by regressing each of the independent variables on the other independent variables it is suspected of being collinear with. Doing this with age, height, weight, BMI and BMP produces R square values over 0.9 in all cases, with some being extremely close to 1. This seems to indicate that each of these variables can be described by a linear combination of the others and makes it inadvisable to include several of them in a multiple regression of PEmax. 
The strong collinearity makes sense,", "label": 1 }, { "main_document": "nurses to be assertive to deliver patient care has never been as great (Benton, 1999), especially as personal effectiveness largely relies on the ability to assert ourselves (Thompson, 2002). However, it is acknowledged that \"Self-assertion can be a problem for nurses in both their professional and their personal lives.\" (Niven & Robinson, 1994 p31). Glen and Parker (2003) state that assertiveness is founded on the belief that each person is worthy of respect and has a right to ask and to be heard. Mentors need to create a relationship that encourages these beliefs so that students feel confident enough to be assertive. It is proposed that the three components of assertion are: 'being specific' - stating a point succinctly; 'sticking to it' - being persistent; and 'fielding responses' - acknowledging others' views (Kagan Thompson (2002) states that assertiveness involves finding a positive balance between the interests of all so as not to compromise anyone's rights and needs; it also creates an atmosphere of openness in which constructive change occurs. Furthermore, it requires us to be aware of, and open about, our thoughts and feelings even though this may be challenging and make us feel vulnerable. \"Assertiveness is about having confidence in yourself, a positive attitude about yourself and towards others, and it is about behaving towards others in a way which is direct and honest.\" (Hartley, 1993 p210 cited in: Thompson, 2002 p42). A person's assertiveness is rooted in their personal philosophy, as it comes from the value a person places in themselves and their rights, and \"is something that we are all capable of, to a certain extent at least.\" (Thompson, 2002 p45). 
In a study aiming to provide insight into the concerns and expectations of newly qualified nurses, Evans (2001) concluded that nurses recognise the importance confidence plays in assertiveness and express concerns about assertiveness being misinterpreted or causing offence; a concern I share. Niven and Robinson (1994) and Benton (1999) suggest that to be seen as assertive rather than aggressive, others must believe their needs and rights are being respected, so self-awareness is required. Assertion is not about getting your own way but about freedom to express your own needs and to stand up for yourself while respecting others (Benton, 1999). Being perceived as pushy or overbearing alienates others, while being non-assertive allows others to take advantage of us; thus assertiveness involves a constructive balance between the extremes of being submissive and aggressive (Thompson, 2002). Non-assertiveness can be attributed to low self-esteem, and a lack of assertion can in turn lower self-esteem further (Niven & Robinson, 1994; Thompson, 2002). Thompson (2002) contends that by failing to assert yourself you fail to express your own needs, allow others to have their way at your expense, and often say 'yes' when you mean 'no'. Nurses often have difficulty saying no, as it usually feels uncaring to withhold agreement to take on more work to help someone else under pressure; however, saying no is an element of assertiveness, and \"It is not always in the best interests of the practitioner, or the person asking, to agree to
After establishing the unfeasibility of the concept in the realist paradigm, I move on to examine whether Booth's application of Nye's idea of \"process-utopias\" (Booth 1991, p. 536) could be used to catalyse the paradigm-shift required to make 'utopian realism' a feasible IR concept. Finally, I conclude that, since no paradigm-shift is possible, the concept of 'utopian realism' remains unfeasible and must therefore be abandoned. However, I propose that though Booth's concept of 'utopian realism' cannot be accepted as a utopian corrective to the sterility of realism, his application of the idea of \"process-utopias\" could be used to this end. Man has always attempted to negotiate between the ideal and the actual. Ken Booth's 'utopian realism' (Booth 1991) is yet another manifestation of this attempt. He defines 'utopian realism' as \"an attitude of mind\" that bases itself upon \"normative\" as well as \"empirical\" theories (Booth 1991, p.534). The reason for the failure of 'utopian realism', I argue, is its simultaneous acknowledgement and denial of the characteristics of the anarchic international system it needs to survive in. It is this self-contradictory nature of 'utopian realism' that undermines its potential to become a feasible IR concept. To elucidate my point of critique I shall examine four main propositions made by Booth's 'utopian realism'. First, 'utopian realism' proposes that if \"security in anarchy\" is being considered, \"emancipation\" should gain precedence over \"the traditional realist themes of power and order\" (Booth 1991, p.539). Booth argues that this is because the distribution of power is very unequal in the international system, and so a focus on power would imply the absolute power of one state and the impotence and insecurity of others. 
Thus, 'utopian realism' argues that if \"security in anarchy\" is the agenda, the guiding principle cannot be the unjust one of \"power and order\" but the more egalitarian principle of \"emancipation\". However, in the conclusion to his essay 'Security in Anarchy: Utopian Realism in Theory and Practice', Booth also acknowledges that politics shall always translate into power politics (Booth 1991, p.545). If this is so, how can the preoccupation with \"power and order\" be replaced with the utopian concern for \"emancipation\"? In other words, when the realist paradigm prevails, why would realist tenets be sacrificed for utopian objectives? Booth's argument provides no answer to this question, and I argue that the very assumption on which the first proposition of 'utopian realism' rests is too naïve. To further elucidate my argument I juxtapose this utopian realist proposition with a model for collaboration between \"power\" (realism) and \"morality\" (utopianism) that E. H. Carr proposes in The latter's proposition presumes the existence of a hegemonic power, desirous of maintaining its hegemony over others in the international system.
(1995) 'Structure and Agency' in Marsh, D. and Stoker, G. (eds), \"X by his power over Y, successfully achieved an intended result, r; he did so by making Y do b, which Y would not have done but for X's wishing him to do so; moreover, although Y was reluctant, X had a way of overcoming this.\" Benn, S. (1967) 'Power' in Merriam, C. E. (1964) London p. 21 Having shown that a simplified statement like Dahl's, which attempts to describe the continuous and indefinite essence of power, is not of particular use to the political analyst, I will now attempt to identify and describe what could be the essence of political power in our time. Here I will concentrate on the essence, or more significantly, the most important form of political power evident in the political system in which we live. It is broadly accepted that political power in a democracy takes the form of an authority's ability to govern the state's citizens: a 'majority' (usually as little as 30% of the population) of whom will have voted for the existing government to be in office. As Barbara Goodwin describes: \"a politician, office-holder or other politically active individual...(has power to)...cause them (citizens) to do what he or she wants.\" The essence of political power could thus be described as the ability first to gain enough votes and secondly to manipulate and act within the parameters of the existing constitution (in the broadest sense). However, the passing of laws is usually a non-event as far as the general public is concerned. If they know about them at all, the extent of their involvement in this act of 'power' may be a discussion at the pub or by the water dispenser at work. Indeed, the majority of the population may be against the passing of a law, and yet it will pass and very few people will break it.
It therefore seems that in the democracy in which we live, a politician's task to get B to do something he or she would not otherwise do is an easy one, so easy that it cannot be considered the essence of political power. Goodwin, B (1997) The success of this type of political power clearly relies on the", "label": 1 }, { "main_document": "of analyzing social relations and consider how different socio-political formations interacted with each other, which have been dealt with by Preucel and Hodder (1996) in They refer to Renfrew and his \"peer polity interaction\", which states, that relations between societies (competition, \"symbolic entertainment\" and increased trade) form social structures, as for example, occurred in Greece where the city-states evolved as a result of interaction with each other rather than with Egypt (as the cultural evolutionary model states). The prestige goods theory says that social status depends on the access to prestige goods, therefore an increased demand and trace relations between core and peripheries. World system theory of Wallerstein is based on the interaction between societies from \"the First and Third worlds such that development in one area causes underdevelopment in another\" (Preucel and Hodder 1996: 103). These theories indeed offer an alternative approach, but is also has many limitations. The \"peer polity interaction\" seems to undermine the economic motivations of people, focusing mainly on political aspects even when explaining the increase in trade. Also prestige goods theory stresses mainly the importance of luxuries as the main driving force in the development of state economies, diminishing the significance of food products that were also part or economy. Finally the world systems theory is limited to the modern world, which archaeologists tend to forget about and impose it in archaeological data from pre-capitalist societies. 
Because of so many limitations, it is impossible for an archaeologist to commit himself to one particular theory. It is crucial to take into account all three of them while analyzing social relations, because each contributes to the debate from a different angle. Another discussion emerges about the relation between the different types of exchanged commodities and the societies that perform the exchange. Preucel and Hodder make an in-depth analysis of the nature of the commodities themselves. They present the economic view, according to which \"commodities [are] things which circulate throughout an economic system and can be exchanged for other things, usually money\" (Preucel and Hodder 1996: 106). As an example, O'Shea (1981) presents the relation between the production of commodities and the emergence of social ranking in the Aegean, where the commodities are produced in a process of social storage as food surpluses, which are then exchanged for tokens of value. The social-reproduction view uses the Marxist perspective of commodities being a product of labor within a particular mode of production, which is then passed to another social group in the form of exchange (Preucel and Hodder 1996). The politics-of-value approach considers the interrelation between the movement of goods within and between spheres of circulation and the impact which this movement has on the societies involved in the exchange. Commodities and their meanings instill a certain symbolism in an individual or in a group; therefore it is not possible to ascribe only one meaning or one value to an object. We need to bear in mind the route it had to go through, its biography. According to Kopytoff (1986: 68), \"a commodity is a thing that has use value and
But the principles underlying both systems are the same' Although the contributory negligence statute In particular, a function of contributory negligence in tort law is to act as a mechanism to effect a just allocation of responsibility. It is submitted that this function is directly comparable to the one it might serve in the law of trusts. There is already support for apportionment in cases of merely careless breach in the area of bare trusts. However, it is also submitted that our new meaning of equitable compensation dispels these concerns, such that there is room for the introduction of contributory negligence into liability for breach of the trustee's duty of care in trusts generally. See S Elliott, 'Remoteness Criteria in Equity' (2002) 65 MLR 588. [1996] 1 AC 421 (HL). Law Reform (Contributory Negligence) Act 1945. See G Watt, 'Contributory Fault and Breach of Trust' (Winter 2005) OUCLJ 5(2), 205-224. Tort-like considerations of contributory negligence should be introduced to trustee liability for breach of their duty of care. A plausible mechanism would be to introduce a rebuttable presumption against apportionment of liability for the careless trustee. Contributory negligence is an objective concept that depends on what the reasonable person would have done in the victim's position. However, tort law allows leniency in applying the test of reasonableness; some people may take less care of their own interests than it requires people to take of the interests of others. In other words, the defence of contributory fault would still lean towards favouring the beneficiary over the trustee. Only if the trustee could establish gross negligence on the part of the beneficiary that contributed significantly to her own loss would the trustee be entitled to apportionment.
The availability of contributory fault would ease the ethical concern of trustees being liable for losses not attributable to their own carelessness, while maintaining consistency with the fiduciary or moral duties the trustee may be in breach of by virtue of the breach in question. With this form of contributory fault, there should at least be room for the introduction of contributory negligence on the part of claimants into the realm of trustees' liability for breach of their duty of care in this sense. This is adapted from Watt's argument for introducing apportionment in the case of bare trusts, see G Watt, 'Contributory Fault and Breach of Trust' (Winter 2005) OUCLJ 5(2), 205-224, at 214. P Cane, We highlighted the differences between a trustee's fiduciary duty and a trustee's duty of care and argued that the arguments against introducing contributory negligence into breach of fiduciary duty do not apply to our issue. We have also taken note of the similar positions occupied by a trustee in breach of her duty of care and by a tortfeasor, as compared with breach of fiduciary duty. We went on to assume that fiduciary duty and a trustee's duty of care are distinct,
Asia, as aforementioned, encouraged early marriage followed by many children; Confucianism not only restricted growth in terms of foreign interaction, it restricted the use of human resources where Europe employed initiative to further its progress. Also, China never developed a system of formal logic. This has obvious organisational repercussions, but it is also symptomatic of East Asian culture as a whole. The West was cultivating an atmosphere of change, innovation and progress, whilst the East strove to reinforce structure and control, and to remain somewhat introverted. As Landes concludes, 'if we learn anything from the history of economic development, it is that culture makes all the difference' Landes, With the benefit of hindsight it could be easy to view the slow progress of the East Asian economy as inevitable; the early modern period was an era in which trade networks were cemented across the world, where new continents were discovered and where new resources and techniques created the potential for both industrialisation and the expansion of consumerism. European religious and political beliefs formed an ethos of utilisation, competition and conscientiousness, all favourable for embracing these changes and thus encouraging economic growth. The West changed its diet to incorporate food from the New World, modified decorum to embrace Chinese porcelain, and sought fresh energy resources such as coal in order to increase its prosperity. On the other hand, East Asia had the advantage at the beginning of the period: it had overcome environmental problems such as water distribution with irrigation systems, and experimented with technological developments to create an accomplished agricultural system and efficient methods of chinaware production. However, this supremacy led to an environment in which foreign interaction was viewed with hostility.
Instead of exploiting the full potential of European trade, they resisted it; instead of adopting western technology, they ignored it; instead of allowing the flourishing of mercantile interests, they repressed them. In the move towards modernisation, the West cultivated the seeds of capitalism whereas the East retained Confucianism, and in an increasingly material society, Europe fared better. It would be erroneous to suggest that the East Asian economy remained stagnant by 1800, for it did change and develop, but the inherent mentality of state control imposed limitations upon the extent to which new thought and direction could emerge. As Bayly concludes, 'Europe's...\"exceptionalism\" was to be found not in one fact, but in an...accumulation of many characteristics seen separately in other parts of the world'. Equally, East Asia's slow progress can be seen
As we have seen in this essay, it seems quite difficult to categorise the Disney certainly utilised mythology to create many aesthetic graphics in this film, and to some extent succeeded in producing an entertaining, well-made love story inspired by Greek myth. However, Disney manipulates and recreates such crucial and fundamental features of the original story, in order to make Hercules an appropriate protagonist for its film, that we may even doubt whether there were any specific aims or benefits for Disney in choosing Greek myth as its film's subject matter. In a word, it is probably more appropriate to conclude that the Whether this attitude of Disney's towards mythology can be seen as merely having resulted in defiling the ancient world, or whether it should be accepted as an epoch-making way of employing antiquity in film making - that would be another discussion.
Any extra electricity generated can also be sold back to the energy supplier. Water Usage. The waste water from baths, showers, sinks and the dishwasher, together with rainwater, can be collected in a tank. It is filtered to remove 'sludge' and soap scum, and then used to flush toilets. This saves the drinking-quality water that is usually used to flush toilets. Not only does it save money on utility bills, but also the energy and chemicals that go into producing clean water. All these systems shall be controlled via a 'central' computer. Not only shall this make the management of the systems easier, it shall also enable data to be collected so optimum configurations can be obtained.
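As a rough illustration of the payback reasoning behind the turbines, the arithmetic can be sketched as below. All figures are entirely hypothetical — the real turbine cost, bill savings and export tariff would come from the site survey and the supplier.

```python
# Hypothetical figures for a simple payback estimate; none of these numbers
# come from the specification itself.
turbine_cost = 30000.0        # total installed cost of the small turbine farm
annual_bill_saving = 1500.0   # electricity no longer bought from the grid, per year
exported_kwh = 2000.0         # surplus generation sold back each year (kWh)
export_price = 0.10           # price received per exported kWh

# Yearly benefit = avoided bills + revenue from electricity sold back.
annual_benefit = annual_bill_saving + exported_kwh * export_price

# Simple payback ignores inflation and discounting: years to recoup the cost.
payback_years = turbine_cost / annual_benefit
print(f"Simple payback: {payback_years:.1f} years")  # -> Simple payback: 17.6 years
```

With these illustrative numbers the payback lands near the 15-year horizon mentioned above; selling surplus electricity back is what pulls the figure down.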
These residuals must be considered with different options (price level, inflation impact, probability of occurrence). The rule of thumb is to double the value and discount with the relevant rate before tax. As these valuations are based on future earnings or cash flows, any financial planning must be \"reasonable\", with a high-low band including a \"most probable\" scenario. We also need to include the funding level and price changes (inflation). No matter how wide it is, the gap between budgeted and actual figures is not the best way to measure management quality. This is the heart of Entrepreneurial Finance. Businesses should forecast when they will need cash long before running out of it. For investors, getting out (exiting or harvesting) is as important as getting in (investing). Fundraising never stops; to be successful, managers must hit their plans. We must separate short-term and long-term financing, which have different objectives. While we aim to optimize for the long term, we also aim to survive in the short term. There are exceptions, but basically long/short-term liabilities respectively finance long/short-term assets. There are exceptions to that rule, as when the company is over-capitalized. Once the diagnostic and valuations are done, we can think about the acquisition process. An acquisition has multiple alternatives: acquire and diversify, expand in existing markets, slow growth and take cash from the business, or sell. Here, there are two important rules to remember: when acquiring assets, we have to use the \"appropriate\" period and discount rates, and we should never mingle several decisions (assessing buildings and asset leases together, for example).
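The planning rules above — discounting with a before-tax WACC, a high-low band around a \"most probable\" scenario, and a residual value doubled then discounted — can be sketched as a small calculation. Every figure below (WACC, EBIT scenarios, residual book value) is invented for illustration, not taken from the text.

```python
# Illustrative sketch of the valuation rules above; all inputs are made up.
# Consistency rule from the text: EBIT is discounted with a before-tax WACC.
wacc_before_tax = 0.12

def npv(cash_flows, rate):
    """Discount year-end cash flows (year 1, 2, ...) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# High / most-probable / low EBIT band over a five-year plan.
scenarios = {
    "high":          [120, 130, 140, 150, 160],
    "most_probable": [100, 105, 110, 115, 120],
    "low":           [80, 82, 84, 86, 88],
}

# Residual value of the equipment at the end of year 5: the quoted rule of
# thumb is to double the book value and discount at the before-tax rate.
book_residual = 50
residual_pv = (2 * book_residual) / (1 + wacc_before_tax) ** 5

for name, ebit in scenarios.items():
    value = npv(ebit, wacc_before_tax) + residual_pv
    print(f"{name:>13}: {value:,.1f}")
```

The three printed values give the high-low band the text calls for, with the most-probable scenario sitting between them.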
The basic question to analyse is \"what if we don't go for it?\" For small business acquisitions, we should never ever acquire the liabilities but only", "label": 0 }, { "main_document": "The aim of this investigation is to discuss economic development in fifty developing and transition countries Both economic indicators (GNI, GNI growth) and social indicators (life expectancy, infant mortality rate) will be examined, and the survey will highlight the importance of employing at least one from each of these sets of indicators to adequately appreciate the state and dynamics of economic development. Full list of countries provided in Appendix 1. The GNI (gross national income) per capita is arguably the most revealing single development indicator, In this survey, mainly the GNI based on PPP will be discussed, as this better facilitates comparison of living standards between different countries than GNI converted at official exchange rates. Like the GNI per capita indicates the present state of an economy, the growth of GNI per capita reflects how an economy changes over time. It is imperative to realise that GNI per capita growth is not necessarily the result of output growth, but may as well follow from a population decline, a shift of production from the informal to the formal sector, or a change in real prices. Figure 1 shows annual GNI per capita growth over two time periods for the selected fifty countries according to region. Figure 1 shows that most countries experienced moderate positive growth (2-6% p.a.) in both time periods. The red line separates the countries growing faster in the first and second period respectively; the further away the country is from the red line, the greater was the difference in growth rate between the two periods. A regional trend is discerned for the Latin American and Caribbean region, where nearly all countries demonstrated lower growth rate in the second period. 
The situation seems to be particularly severe for Venezuela and Argentina, both experiencing negative growth in the second period. In Venezuela, this was arguably due to a \"disastrous two-month national oil strike\". Argentina's negative growth in this period can be linked to the transition from a pegged to a floating exchange rate system. Thus, there was no single reason for the falling growth rates in the region, although it is arguable that decline in one economy had adverse effects on neighbouring ones (e.g. by falling demand for imports). Only two countries, including Costa Rica (considered economically and politically stable for the region: World Development Indicators). In Europe and Central Asia, several countries demonstrate very high average annual growth rates: Ukraine, Tajikistan, Moldova and Romania in the second period, and Armenia, Georgia and Belarus in both periods. These countries are \"economies in transition\" (shifting from planned to market economies), and several have adopted growth-promoting policies (sometimes under the influence of the IMF and the World Bank) such as market liberalisation combined with monetary and fiscal tightening. Most East Asian countries enjoyed high positive growth rates throughout the two periods, with China and Vietnam - communist states moving towards more market-oriented systems - being the front runners. Most African and Middle Eastern countries show a similar pattern, although it is arguable that many impoverished economies of Sub-Saharan Africa in particular would
They were asked to fill in a food frequency questionnaire to assess their current typical diet, and they represent 95% of the whole population of the university. The results are used to assess their nutrient intakes. The Committee on Medical Aspects of Food Policy (COMA) set a range of intakes called Dietary Reference Values (DRVs) as guidance for groups and individuals. The range is normally distributed, with: a mean requirement, or Estimated Average Requirement (EAR); a Reference Nutrient Intake (RNI), located two standard deviations above the EAR, which represents an amount sufficient or more than sufficient for the nutritional needs of most individuals in a group; and a Lower Reference Nutrient Intake (LRNI), located two standard deviations below the EAR, which represents the lowest intake that will meet the needs of only some individuals in a group. For male and female Reading students, both their EARs for energy meet the values set by the Panel of COMA. However, energy expenditure depends on basal metabolic rate (BMR), which is the rate at which a person uses energy to maintain the basic functions of the body, on physical activity level (PAL), and on body size and composition. Thus different people will have different energy requirements and in turn different energy intakes. That is also why there is no RNI set for fats and carbohydrates (including sugar and starch). The dietary intakes of these nutrients are expressed as a percentage of daily total energy intake, and so it is hard to judge whether students' daily intakes are optimal. Half of the surveyed population's alcohol intake exceeds the recommended RNI. Although alcohol is a good source of energy and beneficial to the heart if taken in small amounts, high intake will conversely bring many adverse effects. Given the increasing incidence of coronary heart disease and obesity, it is advised not to take too much fat, sugar or alcohol in our diets. In fact, our diets should be high in dietary fibre.
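The DRV relationships described above (RNI two standard deviations above the EAR, LRNI two standard deviations below) amount to a small calculation; the EAR and standard deviation used below are illustrative placeholders, not COMA values for any particular nutrient.

```python
# Sketch of the DRV bands: RNI = EAR + 2*SD, LRNI = EAR - 2*SD, where the
# requirement is assumed normally distributed. Figures are illustrative only.
def drv_band(ear, sd):
    """Return (LRNI, EAR, RNI) for a normally distributed requirement."""
    return ear - 2 * sd, ear, ear + 2 * sd

def classify_intake(intake, ear, sd):
    """Place an individual's intake against the DRV band."""
    lrni, _, rni = drv_band(ear, sd)
    if intake < lrni:
        return "below LRNI: almost certainly inadequate"
    if intake >= rni:
        return "at or above RNI: sufficient for nearly all of the group"
    return "between LRNI and RNI: may be adequate, depending on the individual"

# e.g. a nutrient with an EAR of 40 units/day and an SD of 5 units/day:
print(drv_band(40, 5))             # -> (30, 40, 50)
print(classify_intake(28, 40, 5))  # -> below LRNI: almost certainly inadequate
```

Because the band covers two standard deviations either side of the mean, an intake at the RNI exceeds the requirement of roughly 97.5% of the group, which is why the RNI is described as sufficient for most individuals.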
A diet high in dietary fibre is also low in fat and helps improve bowel function. The dietary reference value for dietary fibre ranges from 12-24 g/day, whereas the students' range is 7.1-27.9 g/day. This shows that some students take enough dietary fibre each day but some do not. Protein intake is quite high among Reading University students: 88% of the females, and all of the males, in the surveyed population have protein intakes higher than the recommended RNI. These students have taken an amount more than sufficient to meet their daily nutritional needs, to stay healthy and to prevent deficiency diseases. Protein is important for body-building, repair and body defence. However, excessive intake of protein may be associated with health risks. There is evidence that excessive dietary protein may contribute to demineralization of bone, and people on vegetarian diets with lower protein intakes exhibit lower blood pressure. There is firm
Even though there are many statistics available from the different governmental bodies, at both national and supranational level, it can be stated that data looking at the different nationalities and their preferred activities are lacking. However, tourist arrival statistics provide information about the countries of origin, which in turn can be used as a basis when deciding which markets to target. But further research might be carried out in order to obtain more detailed data about the different nations and their consumer behaviour. This, again, could help to adapt the product better to the key target group(s) and improve the service and marketing mix strategy. According to the findings, it can be recommended to enter the French hospitality market, but attention should also be paid to the risks mentioned above. Only so,
This change has caused an increase in weeds and a shift in the dominance of weed species from aquatic weeds to grassy weeds (IRRI, 2006c). The response to the growing weed problem prioritised rapid, short-term solutions, hence herbicide use was promoted. For instance, in Vietnam, although there was very little use of herbicide until the 1980s (0.5% of total rice production area), there was a dramatic increase in the 1990s. Herbicide accounted for 30.2% of total pesticide use in all agricultural production in 2003. However, weeds resistant to herbicide have emerged to colonise the fields, leading to a reduction in rice yield (Khanh Findings from many field observations and research since the 1970s eventually revealed those adverse effects of the contemporary rice farming system. The concept of Integrated Pest and/or Weed Management (IPM or IWM) was developed and came to be enforced at the farmers' level in the 1990s (Pontius The primary objective of IPM and IWM is to incorporate naturally occurring Biological Control in farming practice. There is a particular emphasis on farmer education, in order to increase awareness of the risk of environmental pollution and harm to human health from excessive use or misuse of pesticide, and to improve understanding of the natural Biological Control mechanism. Today IP/WM is an important farming strategy promoted by the governments in Southeast Asia and many field schools have been carried out (Fagan Spiders (the most common genera of orb-weavers are Studies demonstrated that the pest outbreak was due to the significant reduction in spiders and other predacious insects killed by insecticide. Today, those natural Biological Control agents are paid great attention in Integrated Pest Management (IRRI, 2006b). A study by Sigsgaard (2000) pointed out the importance of land management for the provision of suitable habitat for predatory spiders. Population dynamics of spiders depend on the availability of suitable pests and hosts.
Prior to cropping, spiders feed on alternative prey such as Collembola and dipterans, and the population gradually increases. Weed residues within rice fields are an important habitat which provides refuge for predators and abundant alternative prey. Organic matter from plants leads to proteins are known for their ability to bind to RNA or have been implicated in mRNA cleavage/processing events. Several reasons lead researchers to believe that PIE-1 is a transcriptional repressor: the protein is detected in the right place at the right time for SKN-1 repression; it is maternally loaded and segregates with the germline; and it disappears from the germ lineage shortly after the division of P4 into Z2 and Z3, at which point new zygotic mRNAs need to be transcribed to expand the germline. (Seydoux This block in gene expression is essential for the formation of the germline. PIE-1's cytoplasmic function is to prevent the degradation of maternal RNAs in germline blastomeres and to promote the expression of maternally encoded factors that promote germline formation (Tenenhaus One such factor is the Studies by Tenenhaus When the first zinc finger is knocked out, the protein is no longer able to inhibit transcription; when the second finger is removed, transcription repression still occurs but the promotion of The process of transcription elongation requires phosphorylation by CDK9 of a repeated heptapeptide in the carboxy-terminal domain (CTD) of RNAPII; this phosphoepitope is known as H5. The PIE-1 protein is able to inhibit CDK9 because it contains a sequence that is very similar to the repeated heptapeptide (H5) in the CTD of RNAPII. PIE-1 competes for the CDK9 active site, preventing phosphorylation and transcription (Tenenhaus Correct localisation of PIE-1 first becomes detectable at low levels in the posterior cytoplasm of the 1-cell embryo.
After first division PIE-1 is present in the nucleus and cytoplasm of the germline blastomere P By the 4 cell stage PIE-1 is present only in the germline P This accurate localisation is achieved by several methods: Localisation via the MEX-1 protein. In a study by Guedes MEX-1 protein contains two repeat finger domains suggesting that it may bind directly to posterior cortex proteins PAR-1 and PAR-2 to produce the P granule association and localisation of PIE-1. PIE-1 is removed form the somatic cell lines by an enzyme, C/CUL-2 E3 ubiquitin ligase. This is targeted to PIE-1 by a bridging protein called ZIF-1 that interacts with bother the ligase and the first zinc finger of PIE-1 (see figure 7). The final method is associates with centromeres. When germline blastomeres begin to divide, the nascent mitotic spindle complex rotates by roughly 90 PIE-1 protein appears to accumulate round each centrosome of the spindle in equal quantise before rotation. At the same time the PIE-1 concentrations diminish in the nucleus, cytoplasm and even in the P-granules. After rotation the levels of PIE-1 diminish rapidly at one centromere until it becomes undetectable. This results in all the PIE-1 protein be located at one centrosome and therefore in only one daughter cell (the germ cell line). The movement of the PIE-1 protein to the centrosome maybe a translocation process (other proteins are know to do this such as NuMA). The proposed reason for the sudden loss of PIE-1 at one centrosome after rotation is that the protein is brought into a different cytoplasmic environment that affects the PIE-1 stability and it", "label": 1 }, { "main_document": "shifts but changes slope with changes in temperature. It is estimated that a 1 degree change in temperature at 298K would result in a 2% error in the potential reading at the millimolar level. 
Whereas the effect of this on solutions of concentration above 0.005M is insignificant compared to the other sources of error in the experiment, solutions of below 0.0005M may suffer significantly from such a temperature difference. This report has assessed water fluoride concentrations and explored the effects of tea infusion upon such values. The factors leading to the greatest sources of inaccuracy referred to in the introduction were sucessfully controlled. Had time allowed, a more rigorous treatment of the uncertainties inherent to this method would have been conducted. Our findings show the experimental method was suitable for the task, but also highlight some of the problems that need to be addressed if this topic was to be studied further.", "label": 1 }, { "main_document": "serving as a model for the other classes. Because X-ray crystallography of the intact molecule was difficult at the beginning of its investigation, due to its conformational flexibility in the hinge region, it was subjected to digestion by proteolytic enzymes. Treatment with papain was the most successful, and three fragments of equal molecular weight were obtained. Two of these fragments continued to display antigen binding properties, and were named Fab (ab: antigen binding) as a result. The third fragment however did not bind antigen, instead it was named Fc (c: crystallisable) because it crystallised readily. Fc expresses effector functions, such as activation of the complement cascade. From the pioneering work of Roberto Poljak and David Davies in 1973 to this day, the high-resolution structures of multiple Fab and Fc fragments have been determined, either by themselves or complexed with a multitude of antigens. Though in reality this is often not the case due to its flexible hinge region, an IgG molecule can be pictured as Y-shaped, where the Fab fragments would form the two arms and Fc the tail of the Y. 
As already mentioned the IgG molecule consists of four polypeptide chains connected by disulphide bonds, namely two identical heavy and two identical light chains, which can be further divided into so called immunoglobulin domains as can be seen in figure 1. The light chains consist of two domains, the variable amino-terminal domain termed VL, and the constant carboxyl-terminal domain CL. The amino-terminal domain of heavy chains exhibits sequence variations as well, hence it is called VH. The three remaining domains are constant, thus termed CH1, CH2, and CH3. Most of the amino acid sequences of these domains are remarkably conserved, even across different species, suggesting that the genes encoding the constant domains evolved from one common ancestor, and that the amino acid sequences are crucial for the molecule's tertiary structure. Also, variation in sequence and length of the variable regions is confined to three short sequences that do not contribute to the overall structure; these are called the hypervariable loops or complimentarity determining regions (CDR). These regions determine the specificity of the antigen-antibody interactions. Constant and variable domains do not share homologous sequences, yet their structures are very similar. An important feature of the molecule's structure are the numerous disulphide bonds which do not only connect the Fab and Fc fragments, but also the heavy and light chains. Figure 2 illustrates these as well as the 12 immunoglobulin domains, which will be discussed in more detail shortly. Underlying the structures of the 12 immunoglobulin domains found in IgG is a structure called the immunoglobulin fold, which shows remarkable similarity not only within the molecule itself but also across the wide range of specific antibodies produced and many other proteins such as MHC. 
A pair of -sheets, consisting of a stable arrangement of hydrogen bonded anti-parallel -strands, and held together by a single disulphide bond, forms a barrel like structure, creating a central hydrophobic core. The amino terminus and the carboxyl terminus are at opposite ends of the barrel, so that", "label": 0 }, { "main_document": "the theory of aggregate supply and why it is that prices adjust slowly. The reason for nominal price rigidities is that adjusting prices immediately can be very costly. To change its prices, a company may need to send out new catalogs to customers, distribute new price lists to staff or print new menus in restaurants. As a result of those 'menu costs', it would be extremely expensive to change menus and catalogues continuously in response to changing prices. Moreover, companies may simply prefer to hold prices stable in order not to annoy regular customers. Some economists really doubt that menu costs can explain short-run fluctuations in the economy because those costs are usually very small and, therefore, not strong enough reason to explain recessions that are costly to the society. Others argue that despite the small size of menu costs it has a huge impact on the economy as a whole. For instance, a price reduction by one firm has externalities on the economy as a whole. For instance, a price reduction by one firm has externalities to the economy as the average price level would be lowered slightly. Hence, the real income would be increase followed by an increase in demand. There are cases, when companies have to keep old prices because of fixed price long-term contracts. Before setting their prices companies usually take into account the expected price level. If it is high, they will set their prices high to compensate for the high price of inputs (e.g. raw materials). In case of overestimation, companies may simply produce more output as the profits for production is still high. 
When they choose their 'sticky' prices, they set them high to meet the high demand. The price level will increase. Therefore, a high level of output leads to a high level of demand, which in turn causes the price level to go up. Efficiency wage theory states that people tend to work harder with an increase in real wage. Higher wage will also attract more qualified labour. The overall productivity will improve. The relationship between output and real wage is procyclical, i.e. if output goes up, real wage will follow. Mankiw argued that monetary policy is a more useful tool for stabilizing the economy as the lags can be very long in fiscal policy. 'Fiscal rules have to be well crafted. A balanced budget amendment that is too strict could be a disaster. At certain times, like recessions and wars, it is appropriate to run budget deficits. So any fiscal rule has to take into account those special situations where budget deficits are appropriate policy response. A fiscal rule by itself wouldn't be a bad idea, but it has to be well crafted and so far I haven't seen one that is.' With fixed prices, any changes in policies will cause the aggregate demand (AD) curve to shift either through the LM curve in case of changes in the money supply or the IS curve in the case of government spending. Nothing would be lost if price increases. However, the assumption states", "label": 0 }, { "main_document": "enacted to govern the conduct of PMCs) governs only the conduct of those PMCs working for the US government. It appears that the US government extended regulation only so far as protecting the government's interests. Supra, n 1, 286 The South African legislation for regulating PMCs has attracted its own critiques, including that the definition for foreign military assistance was over broad, there are issues of accountability with the approval processes set out in the Act and weaknesses in its implementation and enforcement. 
The South African media reported a number of South African PMCs were operating in Iraq despite being prohibited under the Act. This fact \"illustrates the problems of an ill-thought out piece of national legislation to tackle a global industry.\" Supra n 1, 290 The United Kingdom has recently begun examining how to regulate the activities of PMCs and has published a Green Paper on the issue. \"The publication of the Green Paper strongly suggests that the government increasingly recognizes that PMCs are new actors on the international stage. The Green Paper is also the first step towards regulating the activities of these new actors.\" However, it has now been five years since the publication of the Paper, and no new licensing regime is in place. Time will have to tell on whether the future regulations proposed by the British government provide a more adequate, efficient and transparent solution to the regulation of PMCs than has been seen to date. Christopher Kinsey, It would seem that without international oversight of domestic regulations, states will not be able to regulate PMCs. However, much like trade laws, which have married domestic trade legislation and international trade law together, a complementary system of law is not inconceivable. IHL may be in the best position to provide this kind of oversight as all of the rights that require protection, of civilians and combatants, fall under the responsibility of IHL. There is no doubt that the status of PMC employees will have to be reviewed and established in the near future. With the characteristics of new war rapidly changing conflict, and with it the type of \"soldiers\" needed to fight it, PMCs are going to continue to increase in their participation in global conflicts. 
For the protection of the PMC employee, of military members worldwide and of civilians in conflict zones, the ICRC and the global community must fit PMCs into the jigsaw puzzle of IHL - no matter how awkward and incompatible PMCs may seem.", "label": 1 }, { "main_document": "had to wait for the next Sullivan 1989: 304. Reynolds & Wilson 1991: 101. The disparity is not only apparent in the number of texts regularly copied but also in the proportion of classical texts held in the Carolingian monasteries. While some monasteries had a substantial range of classical texts Likewise, the library of Hartmut had sixty seven books and only five of them contained works of classical authors Reynolds & Wilson 1991: 98. Brown 1994: 34. Brown 1994: 35. In conclusion, the Carolingian period may share many cultural features with earlier and later periods of history, but its position within the history of Latin textual transmission grants it higher significance than other revivals solely because of the precarious state of the literature by which the Carolingians were confronted. It is possible then, to suggest that had the Carolingian period been an intermediate revival, such as the In my opinion, the sheer importance of the Carolingian revival rests solely on the dilapidated state of the Latin classical heritage. 
Moreover, had the heritage been in a better state, the Carolingian period would be one of a long line of necessary and unremarkable periods of recopying and the Renaissance, bringing with it the dawn of printing, would indeed be worthy of the epithet given to the Carolingian period by Reynolds and Wilson: Reynolds & Wilson 1991: 92.", "label": 1 }, { "main_document": "Following two study permit applications to the relevant Greek authorities (ephorias), written permission has been granted to study (i.e., observe closely, sketch and photograph) 16 Haimon lekythoi kept in the National Archaeological Museum in Athens and a more general permission has been given to look at the material in the storerooms of the Agora excavations (subject to arrangements with the curators). Further to additional applications, it may also become possible to handle Haimon lekythoi from the recent metro excavations and from the Museums of Kerameikos and Thebes. It must be noted that the Greek authorities only allow the study of published material and there are considerable bureaucratic processes involved in gaining access to archaeological finds. (It helps that I can communicate in Greek and have already established a small network of relevant contacts). This dissertation will comprise the analysis of primary and secondary data. Specifically, the following methodological steps will be adhered to (not necessarily in this order): The information on Haimon lekythoi from the online Beazley archive will be tabulated into an Excel database. From this database a comprehensive list of find locations will be extracted and presented in table, map and graph format. The quality of this provenance data will be assessed. Other correlations from this database will be investigated as appropriate (e.g., the number of Haimon lekythoi per single find spot and whether the lekythoi were deposited in bulk). 
Fields could be added to this database particularly about the find locations studied in detail and the vases examined closely. Library research using the resources at the universities of Reading and Cambridge as well as those at \"The British School at Athens\", \"The American School of Classical Studies at Athens\" and the Athens Department of the \"Deutsches Arch Discussion with subject experts (academics and museum staff). A qualitative hands-on examination of 20 to 30 Haimon lekythoi kept in collections in Athens. It is hoped that this dissertation will contribute to the emerging critiques on the 'traditional' treatments of Greek vases by juxtaposing an archaeological with an art-historical method of study. Consequently, the potential for a contextual approach will be explored and the limits of its applicability evaluated based on the nature of the available excavation evidence. Some new insights into a 'minor' painter of pottery in classical Athens will be offered. Why did Haimon paint in this 'mass production' manner? What 'market niche' was he targeting? What was the social value of the Haimon lekythoi? It may also become clear whether these lekythoi were used, traded and valued as artefacts of this specific painter or as mere oil flasks, although this dissertation, for reasons of space, can only partially include comparisons with other painters and with other ceramics found together with Haimon lekythoi. In addition to the two supervisors ( The following project schedule has been adhered to and is planned for the coming months: The Haimon Painter has not been a historical figure. It has been a construct for a similar group of pots. The name denotes a Theban connection and for that reason some reference will be made", "label": 0 }, { "main_document": "internet advertising entertaining to attract customers, which may contain games or cartoons. 
However, survey of 500 internet users suggests that they prefer information rather than fun and games (Gordon et al., 1997). This shows that internet is still used as a source of fun and enjoyment rather than as a source of information gateway. From the positive insight, online advertising cost is lower than traditional advertising as its expense is subsidized by advertising revenues. Similarly, it costs the web visitor less than to watch some television channels, especially in the more developed countries. Moreover, the reach of the internet towards its customers could be more if used properly and efficiently (Brown, 2004). The online advertising will significantly reduce the global advertising costs, and help businesses reach a global audience more easily, cheaply and quickly. Some small and medium sized companies who offering specialized niche product, would be able to find the critical mass of customers through internet advertising. Internet's low cost communications make it possible for firms with limited capital to become global marketers at an early stage in their development (Hamill, 1997). Furthermore, advertisers can understand more about their customers by technically collecting consumer information, such as demographics of average viewer or visitor. Whereas, firms may pay more for online advertisement than traditional advertising for this reason. The different advertising forms depend on the type of product. For instance, informative advertising may be preferred for high involvement or industrial products, entertaining advertising may be suitable for low involvement consumer goods (Gordon et al., 1997). Advertising spent may vary significantly between different industries and different firms within an industry (Perry et al., 2000). According to Perry (2000), merchandise stores are most likely to utilize direct marketing such as e-mail, conversely, transportation companies are the least likely to utilize direct marketing. 
Several factors need to be considered before choosing online communication tools, which can be inferred from table 1 (Moorey, 2004): According to the latest survey conducted by Dave Chaffey (2006), on his website, in-house e-mail list is considered to be the best form of online communication tools among various surveyed companies, followed by paid search advertisements and behavioral targeting. On the other hand, the rented e-mail list technique was the least favorable among the companies followed by pop-ups and e-mail newsletter advertisements (Chaffey, web reference). Compared with traditional forms of communications such as the telephone, postal and fax, E-mail is a more cost effective, flexible and reliable method, particularly when long distances or different time zones are involved, and it is not limited by real-time presence (Hamill, 1997). This is commonly used by large companies, however, for SME's (small and medium size companies), the problem is not if they can afford to install an e-mail facility, but whether could they afford not to. As more and more competitors begin to form relationships with their customers, suppliers or partners by using electronic communications, SME's who are not involved will face being out of the network. Compared with other communication channels, e-mail presents the highest return on investment by a wide margin, according to", "label": 0 }, { "main_document": "except man. C. Denis-Huots (2002) observed while studying lions that lions tend to spend the early part of the morning basking in the sun, they do this to dry their coats after the morning dew has left it wet (C. Denis-Huots 2002). They then find shade and settle down for the day. C. Denie-Huots (2002) believed lions have a seasonal preference to where they spend their time of rest. However there has not been sufficient study on this aspect of lions to justify this claim. The social life of a male lion has more variety to it. 
They start for the first three to four years of their life with the pride they are born into, here they lean through their mother and through play the skills they need to survive. Around the age of three the male cubs are no longer tolerated by the resident male and are driven away. If the cub has no other male relations he will seek out other male cubs of the same age and form an alliance with them. However, should the male be driven out with other male relatives i.e. brothers and cousins they will form a group with their relatives. In either case the male lions become nomads at sexual maturity. DNA testing was carried out on groups of nomad lions and it was found that groups of four or more lions were relatives, whereas nomad groups containing three or less were not related to each other( C and M Denis- Huot 2002). The males spend the next three years in this nomad group where they develop their skill in fighting to get ready to replace a weakening/old resident male from his pride. They will then lay claim to this pride of females and spend as long as they can as their resident male, until another young group of bachelors comes along and replaces them as they did to their predecessors. The old resident males either die in combat, die later from injuries sustained in a confrontation or they become a nomad again. The behaviour of a lion depends on its gender. The lionesses of any pride anywhere in the world are adapted to hunting. They form a coalition with a minimum of two lionesses ranging anywhere up to ten or so members. This cooperation between female lions increases their chances of success in hunting. A lioness hunting by herself is far less likely to succeed in a kill. David Attenborough (1990) observed the techniques used by lioness on the hunt. Once the lionesses have found a potential prey they will spread out, the spreading out ensures that if the prey spots one lion and tries to escape from it they are likely to end up in the jaws of another lion. 
Lions are not the swiftest of cats and have to get within at least twenty yards of their victim before they stand a chance of catching it. While the lionesses are on the hunt it is not believed they coordinate their attack but work on their own initiative, even though it", "label": 1 }, { "main_document": "reducing total lung capacity. Interstitial oedema is also thought to contribute to stiffening of the lung, by altering the elastic recoil properties (West 2003, Tse Fick's law describes the factors that determine the rate of diffusion of gas (CO Alveolar oedema would affect these factors, such as increasing the distance for diffusion, ultimately reducing gaseous exchange (Levitzky 2003, West 2003). The epithelial cells of the lung consist of type I and type II pneumoctyes. Type I pneumocytes are thin cells that line the alveolar space, allowing for efficient gaseous exchange and serve as mechanical support for the lung. Destruction of type I pneunmocytes, due to SARS-CoV infection, would result in a reduction in the mechanical resistance/ elastic recoil of the lung; thus leading to an increase in alveoli susceptibility to collapse. This is apparent as alveoli act in mechanical support to maintain the patency of adjacent alveoli in a process known as alveoli interdependence (Levitzky 2003, Taylor, Rehder, Hyatt Type II pneumocyte function consists of the production and secretion of surfactant, which acts to reduce alveolar surface tension created by water molecules being attracted to one another. Surfactant reduces surface tension of the alveoli from ~75dyn/cm Therefore, destruction of type II pneumocytes both by apoptosis due to viral infection or lysis by viral budding would result in a reduction of surfactant production, and an increase in surface tension. Mechanical support alone in a normal lung is not enough to resist increases in surface tension. 
Therefore, a reduction in mechanical support due to type I pneumocytes destruction, would see an increase in alveoli susceptibility to collapsing. In addition, air space consolidation has also been shown to occur with SARS patients using chest radiograms. This may be due to extensive migration of interstitial infiltrates of inflammatory cells, the increase in interstitial and alveolar macrophages, neutrophiles and the atypical pneumonia associated with SARS. The immune cells release proteolytic enzymes that may attack the protein components of the interstitial space (collagen, elastin), resulting in alveoli fusing to make one large alveoli. Vascular congestion/ infarction shown to occur with SARS, by histological examination, along with collapsed alveoli and air space consolidation would lead to ventilation/ perfusion inequality, which results in a low arterial partial pressure of O Ventilation/ perfusion inequality describes the mismatch of gaseous exchange between the alveoli (ventilation) and the capillaries (perfusion). Therefore, if the alveoli collapse, no airflow will reach the affected alveoli, although blood flow is still maintained. In addition, with SARS it is thought that a miss match of up to 50% can be observed, greatly reducing arterial partial pressure for O Acute areas affected by ventilation perfusion inequality can be counteracted by hypoxic pulmonary vasoconstriction, directing blood flow to areas of the lung with increased ventilation. However, the combination of collapsed alveoli, extensive alveolar oedema and vascular congestion/ infarction, hypoxemia is inevitable (Levitzky 2003, West 2003). Lactate dehydrogenase (LDH) and creatine kinase (CK) have also been found to be at high concentration in SARS pateints. LDH and CK are both associated with anaerobic conditions of metabolism, therefore, increases in concentration indicate", "label": 1 }, { "main_document": "company could generate enough cash to pay all liabilities at once if necessary. 
If the ratio is too high it would mean the company is being over cautious and not using all its assets efficiently. The graph displays an acid test ratio less than 1.0 in 2000. This figure only decreases until it is only 0.88 in 2003. This is worrying because the company could potentially liquidate if required to pay all liabilities at the same time. However the acid test is quite strict and does not take into account of stock. In reality Kidde plc is not close to liquidation but it could be troubling if this value were to continue to decrease. In general Kidde plc has shown that it is a successful growing business. The graphs show a slightly distorted trend in the periods 1999 to 2001 due to the demerger from Williams plc. However Kidde plc was fully established and separate by the end of 2001. Most of the figures start low in 2001, compared to the 1999 values, but over the next few years it can be seen that Kidde plc have improved their profitability and efficiency ratios. As an investor it is interesting to note that Kidde plc paid dividends even in 2000 when funds were low. Kidde plc seems to have a good relation with investors. In conclusion Kidde plc is a business that is performing well and will continue on their success for at least a few more years. The Kidde plc Annual Report and Accounts 2003 seem quite reliable. There is a lot of information provided about how the business is run. In the corporate governance statement there are details on measures taken to produce accounts which represent a true and fair view of the financial status of the company. The report is helpful in allowing the reader to make better comparisons with previous years reports. According to the Group Finance Directors report in 2003 Kidde plc changed their number of operating divisions from four to three This could cause problems when trying to compare to previous years. However in this report they have included figures in the four division format especially to allow for comparisons. 
They have also converted the old 2002 figures to a three divisional format again to allow for ease of comparison. A report which provides such useful comparison methods must have something good to show otherwise if there was something to cover up the reports would not contain easy comparison. As with any set of accounts adjustments have been made. These adjustments seem quite appropriate. For example in section 13 of notes to the accounts depreciation has been applied to tangible fixed assets The technique used was a straight line basis which is a valid depreciation method. Freehold land has not been depreciated which is also valid with normal accounting practice. The only slightly worrying figure is here is that some plant, equipment, or vehicle has a depreciation rate of 33.3% which seems a bit high. Either Kidde plc should invest in longer lasting assets or someone is", "label": 1 }, { "main_document": "of high volatility in the squared residuals, absolute values of the residuals and smoothed squared residuals plots. There appeared to be clear evidence of volatility, which confirms the known behaviour of residual interest rates exhibiting volatility. However, residual interest rates are also noted for having high volatility following high values of interest rates. The scatter plot of the volatility based on the previous residual value showed no evidence of this and did not show particularly clear evidence that volatility was present in the series. There is possibly an explanation for this. The known behaviour of residual interest rates exhibiting high volatility following high values of residuals might be due to increases in interest rates leading to worries over high interest rates affecting spending and profits. However, over the period studied (1 The graph below shows a time series plot of the monthly Euro-Dollar 6 month deposit rates over the period 1971 to 2007. 
Looking closely at the graph it is obvious that between 2002 and 2004 the interest rate was generally at a low level compared to the rest of the period. The generally low interest rate may have meant that fears caused by high rises in the interest rate may not have been as problematic as normal, meaning that the usual volatility following high residual interest rates is not present in the period examined. This may explain the discrepancies between the volatility properties of the data and the stylized facts of interest rate series. (viii) The daily residuals series of 753 values was converted into a weekly residuals series of 150 values by taking a sequence of five day non-interlapping sums in the series. Since there were several missing values and thus not a whole number of 'weeks' in the series, the final 3 values were ignored. The time series plot below shows the new residuals series. The weekly residuals plot seems similar in character to the daily residuals series. Like the daily residuals series, the weekly residuals appear to be centred at a zero level and display some evidence of volatility. The section of the series near the end does not appear to be clearly centred at zero, but this is likely to be randomness due to the smaller number of observations being analysed. The histogram below gives an indication of the distribution of the residuals. This is quite similar to the one seen for the daily residuals series. The residuals are heavily peaked at the centre, with some indication of a weak negative skew and a number of very extreme values, although with the weekly residuals series these are mostly negative values rather than a fair number in each tail. The weekly residuals are slightly less negatively skewed with a skewness value of -0.12. In fact the kurtosis value of 1.77 seems to indicate that the weekly residuals are not as heavily peaked as the daily residuals series. 
Both residual distributions are highly non-Normal, as would be expected from the known behaviour of residual interest rate series, but the weekly residuals seem to be less extreme in their non-Normality. The
by supplying them with a typical usage scenario. Such a scenario will list the different steps a user would take to perform a sample set of realistic tasks. One has to remember that it is not enough for evaluators to simply say that they do not like something. They must explain why they do not like it with reference to usability principles. Every tester provides a subjective result of using the heuristic evaluation method, which is a list of usability problems in the user interface with references to the heuristics that were violated by the design. The evaluators should be as specific as possible and list each usability problem separately. There are two main reasons to note each problem separately: first, even if a dialogue element is fully replaced with a new design, there is a risk of repeating some problematic aspect of it unless one is aware of all its problems; second, it may not be possible to fix all usability problems in an interface element or to replace it with a new design, but it could still be possible to fix some of the problems if they are all known. Heuristic evaluation does not offer a systematic way to fix the usability problems or to assess the quality of any redesigns. However, because of the way that heuristic evaluation results are composed, it will often be fairly easy to modify the design or fix many usability problems according to the guidelines provided
The Heckscher-Ohlin model is a 2x2x2 model where there are two countries (H and F), two factors (labour and capital) and two goods ( Both countries have identical production functions, which exhibit constant returns to scale. There are fixed total supplies of labour and capital, which are fully employed. These factors move between industries within a country but do not move between countries. In all markets, including factor markets, there is perfect competition, and there are no distortions such as government intervention. While the marginal rate of substitution is independent of the scale of consumption, the model expands to allow for trade by assuming that people in both countries demand both goods and that taste preferences are the same in both countries. Most importantly, the relative endowments of factors of production of the two countries are different. Relative endowments are defined in terms of the ratios between the capital stocks and the labour forces in the two countries. In equation 1.1, country H is relatively capital-abundant and labour-scarce because it has a higher capital-labour ratio, while country F is relatively labour-abundant and capital-scarce. The Heckscher-Ohlin model assumes private ownership of capital. Owners of capital earn rents, Cost is minimized for production at Therefore, if both industries face the same relative factor payments Similarly, ( As w increases relative to r, producers will be willing to use more labour for production and, in equation 1.2, good Thus, country H, which is capital-intensive, produces more of good This is illustrated in the figure below. In Figure 1, the equilibrium price ratio is steeper in country H than in country F. This yields The Heckscher-Ohlin model assumes that resource differences are the only source of trade. Countries have different endowments of the factors of production, and each good requires the factors in different proportions. This is how endowments affect comparative advantage.
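The relative-endowment condition referred to as equation 1.1 is not reproduced in this excerpt; under standard textbook notation (K for the capital stock, L for the labour force, subscripts for the two countries), the condition it most likely states is:

```latex
% A sketch of the standard relative-endowment condition (the likely
% content of equation 1.1); K and L denote capital and labour,
% subscripts H and F the two countries.
\frac{K_H}{L_H} > \frac{K_F}{L_F}
```

Under this inequality country H is capital-abundant and country F labour-abundant, exactly as the text describes.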
One of the main results of the Heckscher-Ohlin model is the Factor-Price Equalisation Theorem. Free trade will influence the prices of goods, and this in turn influences the returns to the factors of production. The relationship between relative goods prices, p, and relative factor prices, w, is illustrated below. In Figure 2, the initial equilibrium is at point A where The equilibrium price ratio is However, when there is an increase in the price of good The wage-rental ratio at point B corresponds to point C. However, at point C, the slope is steeper than the slope of the line that is tangential to both isoquants at point A. Thus, it follows that the wage-rental ratio at point C is higher than at point A An increase in the price of good The economy would then move to produce more of good
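The mechanism behind Figure 2 can be sketched in the standard textbook form (a hedged reconstruction, not the essay's own equations): under perfect competition each industry hires factors until the slope of its isoquant equals the common wage-rental ratio, so the goods-price ratio pins down relative factor prices.

```latex
% Cost minimisation in each industry i equates the marginal rate of
% technical substitution to the common wage-rental ratio:
\frac{MPL_i}{MPK_i} = \frac{w}{r}, \qquad i = 1, 2.
% Free trade equalises the relative goods price p across countries;
% since the mapping from p to w/r is one-to-one under the model's
% assumptions, the factor prices w and r are equalised as well.
```

This is why an increase in the relative price of one good, as at point B in the figure, translates into a determinate change in the wage-rental ratio at point C.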
This can be considered a near monopoly by UK government standards. Peugeot, on the other hand, has an 8.2% market share. The 7 firms in decreasing order of market share are the Ford Group, GM Group, Peugeot, Renault, Volkswagen AG Group, Citroën and the BMW Group. These 7 very large firms dominate the market, accounting for more than 75% of the industry. They can be said to be in a Peugeot's main competitors are Ford, GM and Renault. They are more likely to go for non-price competition such as advertising and product However, competitive pricing is the key to strengthening sales in this industry. According to the report by Mintel International Group Limited (2004), consumer spending on cars grew in 2001 by in excess of 10%, due to the drop in prices as a result of pressure from the government. This means that the industry Firms also compete on product features such as: Function: based on a survey of 25,000 adults conducted by Mintel (2004), 25% of them agree that a car is used only as a means of transportation; thus this gives us a huge market for low-cost, fuel-efficient cars. Looks: consumers tend to expect high performance and other features in addition to looks. Safety: more than 57% of people agree that they want the latest safety technology incorporated into the car. There appear to be However there might be Peugeot produces reliable, economical and aesthetically pleasing cars, which makes them appealing to consumers. The order winner for Peugeot's cars is its established brand name and bold looks. Their involvement in "sustainable development", not forgetting new technologies such as the HDI engine Barriers to entry and exit in this industry are high. There's a large amount of Examples are highlighted on the next page: Economies of Scale: being a new set-up, we will not have the advantage of economies of scale.
Our production cost will be", "label": 0 }, { "main_document": "This report documents a model of communication system developed by Matlab and Simulink, in which various digital modulation schemes used and tested with relative performance easily compared in the implementation part. In order to investigate thoroughly, better methods and solutions are added to the system one by one, through which we can see an improved performance as well as a more complicated communication system. The analysis and discussion part gives an academic support for the results through comparisons and contracts and specifies merits and disadvantages of each scheme from both practical and theoretical aspects. Other features and technologies that may be applicable in the near future are mentioned and considered in the conclusion part together with an overall understanding of this gradually developed system. This report provides a framework for understanding and evaluating the key elements and various methods involved in developing a DSP communication system. There are three parts in a modern digital communication link, which are the transmitter (TX), transmission channel and the receiver (RX). The transmitter processes a message signal in order to produce a signal most likely to pass reliably and efficiently through the channel. So this usually includes modulation of a carrier signal by the information signal, coding of the signal to help correct for transmission errors, filtering of the message or modulated signal to constrain the occupied bandwidth, and power amplification to overcome channel losses. Inversely, the receiver function is principally to reverse the modulation processing in order to recover the message signal, attempting to compensate for any signal degradation introduced by the channel, which normally involve amplification, filtering, demodulation and decoding. 
The transmission channel is loosely defined as the electrical medium between source and destination, and is characterized by its attenuation, bandwidth, noise, interference and distortion. In this report, we will be concerned with how the choice of modulation method crucially affects the ease of implementation and the noise tolerance. These forms of digital data modulation, such as BPSK, QPSK, 16PSK, 16QAM and 64QAM, will be discussed later in the background part, then tested and applied in the implementation part. With their BER or PER versus SNR plotted in the same diagram, we can obtain a relative performance from which to draw conclusions about these various modulation solutions. The analysis will be done in two ways: one is between M-ary PSK and simple binary signaling (BPSK); the other is between phase shift keying (PSK) and quadrature amplitude modulation (QAM), which needs to handle both amplitude and phase information. As the system develops, convolutional coding is added to protect the transmitted data by adding redundancy. The convolutional coding library contains blocks that implement convolutional encoding and decoding. We will test the simple and NASA codes, gradually adding state pinning, puncturing and packet data with CRC into the system. Different modelling schemes are given, and bit/packet error rate performance is compared and contrasted along with theoretical grounding for the relevant results. It is important to trade off the merits and disadvantages of these solutions in different practical situations, such as transmission speed (baud rate), reliability (noise immunity), power (amplifier linearity) and complexity issues. When
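As a reference point for the BER-versus-SNR comparisons described above, the standard baseline is the theoretical bit error rate of BPSK over an AWGN channel, Pb = Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0)). A minimal sketch of that textbook curve (not code from the report's own Simulink model):

```python
# Theoretical BER reference curve for BPSK over an AWGN channel,
# the baseline against which M-ary PSK and QAM schemes are compared:
#   Pb = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))
# This is a standard textbook result, not the report's simulation code.
import math

def bpsk_ber(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)  # convert Eb/N0 from dB to linear scale
    return 0.5 * math.erfc(math.sqrt(ebn0))

for snr_db in range(0, 11, 2):
    print(f"Eb/N0 = {snr_db:2d} dB -> BER = {bpsk_ber(snr_db):.3e}")
```

Plotting simulated BER for each modulation scheme against this curve shows the SNR penalty paid for the higher spectral efficiency of 16PSK, 16QAM and 64QAM.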
Markus Hund, 'ASEAN Plus Three: Towards a New Age of Pan-East Asian Regionalism? A Critic's Appraisal', Hund, 'ASEAN Plus Three', p.411 Mark Beeson, 'Multilateralism, American Power and East Asian Regionalism', In summary, Japanese imperialism must be seen as a crucial factor in the history of regional cooperation. Wrenching East Asian nations into modernity, via economic development and industrialisation, Japanese imperialism left an indelible imprint on the region. The economic interdependency which emerged prior to the Second World War was subsequently consolidated by the post-bellum foreign policy of the United States in order to promote regional stability. From this, cooperation has developed around a core of economic and production networks. This core of functional interdependency still forms the concrete basis from which regionalism in Pacific Asia is occurring today. This essay has attempted to demonstrate that despite a proliferation of potentially divisive issues that have abounded since the turn of the century, economic cooperation provides the impetus for cooperation concerning new and increasingly salient issues which are potentially conflict ridden.", "label": 1 }, { "main_document": "the transfer of IgG anti-bodies, vitamins and fats into their bloodstream by ingestion of their mother's colostrum to protect them from the mass of rapidly invading bacteria that enter their bodies from the surrounding environment after birth. A calf's intestine gradually loses the ability to absorb IgG after birth after a period of between 24 and 48 hours after birth and will no longer be able to absorb antibodies (See appendix 2) after this. If not enough colostrums is received in this time a calf is venerable to an enteric disease. Poor management practises that cause stress also contribute to high rates of infection as they lower a calf's ability to fight off the infection. 
Wind chills, irregular feeding times, damp bedding and the feeding of milk at an incorrect temperature are all known to increase the likelihood of enteric diseases developing as the calf struggles to maintain normal function. Good stockmanship skills are needed to identify calves suffering from enteric diseases quickly and accurately, so that effective treatment leads to an effective recovery and minimises economic losses to the farmer. A calf with an enteric disease may display all, a combination, or just one of the following symptoms during infection, depending on which disease organism is responsible. Scour (diarrhoea) is the most obvious initial symptom, with smelly faeces being the result of undigested milk being excreted (Garnsworthy 2005). Examination of the faeces can often identify more specifically the cause of the disease (for example, thin profuse yellow scour suggesting a rotavirus infection is to blame) and aid later treatment selection. Calf scour may also be the result of overfeeding, or be experienced when a calf excretes for the first time. This is normal and not caused by an enteric disease, and a skilled stockman should be able to recognise and differentiate these causes of temporary scour. Dehydration and dry eyes are common as the result of the disease sloughing the tips of the epithelial cells (villi) in the intestine, preventing the calf from absorbing liquids (Garnsworthy 2005). Calves are also likely to appear depressed, lying on the floor with their head down, weak and lacking in energy, and to display a partial or total loss of appetite in the early stages of the disease. Calves showing any suspicion of these symptoms should be isolated away from other calves immediately, preferably by movement to an entirely separate building, to stop the transmission of the disease to neighbouring calves via contact with infected materials and air space.
Disinfection and removal of manure from their old pen will also prevent rapid spread of a disease through all the farm's calves. The affected calf should then be fed a diet of electrolytes and glucose suspended in a solution of water warmed to around 25°C. The electrolytes in this feed positively promote the uptake of fluids through the damaged intestine, allowing the water to re-hydrate the calf, while the glucose provides an instant energy source to aid quick recovery (Garnsworthy 2005). This diet should only be fed for a maximum period of two days, and then occasional feeding of milk
Hume, R, D, Although the shift from wit or manners comedy, using a satirical approach, to the rise of the sentimental approach between 1650 and 1750 is quite clear, discussing these different approaches to playwriting remains problematic. '... Exemplary theory appears even at the outset of the period (note that Mrs. Behn is busy rejecting it in 1673), while satiric theory is never obliterated.' Hume, R, D, For this essay I have chosen two plays, each using a different approach. The satiric in William Wycherley's ' Firstly, however, I am going to discuss each of the styles and establish the clear difference between them before exploring how each approach affects the components of the different plays. The two approaches give Restoration Comedy a purpose beyond simply entertaining the audience. The satirical held up human vices and follies to ridicule or scorn: a use of biting wit, irony or sarcasm intended to expose the harsh realities of society. The sentimental approach, more aptly termed an exemplary form of comedy, hoped, I think, to fulfil a contrasting role to the satirical approach. I will now highlight the main differences to be discussed. One of the differences in the approach to playwriting is the effect the social background had on the satirical and the sentimental styles. Another difference lies in characterisation. Restoration Comedy appears to have relied upon two particular components to make it successful. The first is its mission to recommend virtue and discountenance vice, be it by illustrating honestly a society filled with creatures of appetite, exemplified in ' The play's success as either a corrective or an exemplary comedy is drawn from the wittily dealt with moral issues within the plot and the responses of the characters towards them. This component gives Restoration Comedy its didactic quality.
Its effectiveness depends on the play's success in reflecting the characters and manners of the period", "label": 1 }, { "main_document": "This investigation assesses the usefulness of experimental data obtained from this experiment, to distinguish noticeable differences in a materials properties. The specimens were tested using a Hounsfield Type W Hand Tensometer, where an extension was applied via a worm gear, and the resistive force plotted on a graph. The results were obtained from two batches, and by multiple individuals in the years of 1995 and 1998. The materials tested were: Statistical analysis of these results showed that whilst significant errors had lead to a wide range of values for yield points, tensile strength and ductility, there was sufficient difference in the magnitude of these values to distinguish the two materials. These results confirm predictions laid down in theory, that increased carbon content leads to increased tensile strength, but a decrease in ductility. This investigation assesses whether a real variation in the properties of selected carbon steel specimens, can be identified from experimental data. The values were derived from experiments using the Hounsfield Type W Hand Tensometer. Many groups working on separate experiments, in the years of 1995 and 1998, collected the data analysed within this report. The results however were obtained using the same methods as this year's laboratory. The following materials were tested: Whilst the materials were the same in each year, they belonged to different batches so composition would not be identical. The constituent of steels believed to have the greatest effect upon the properties of the material is carbon. The properties of the steel can therefore be honed to desirable characteristics by increasing or decreasing the percentage of carbon in the steel. 
Increasing the carbon content results in: And conversely, low carbon steels have the following properties: Therefore Type AN steel is expected to extend a greater amount than the N steel, but also to peak and fracture at lower forces. The statistical analysis uses the following processes: The median and mode are more likely to offer the best representative value for the group, as the arithmetic mean can be greatly affected by anomalous values. The standard deviation returns the spread about the arithmetic mean; however, this is proportionate to the magnitude of the values being analysed. Therefore the standard deviation is also calculated as a percentage of the arithmetic mean. This offers a comparable value between all fields. The specimens were tested using 'The Tensometer Type W Testing Machine'. The specimen is held in place by specimen chucks that fit around the neck of the specimen, effectively holding the head of the material. One pair of chucks is fixed at the measuring end, connected to a force-measuring system. The other is fixed to the end of a worm gear which, when turned, applies a tensile force, and thus an extension, to the specimen. The extension under load is transmitted from the measuring system via a lever and is displayed with a mercury system. As the level rises along the tube, the cursor is manually aligned and the pricking device pushed down onto the rotating spool of graph paper. A magnifying glass is incorporated in the cursor, allowing the level of mercury
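The statistical processes described, a representative central value, the standard deviation, and the standard deviation expressed as a percentage of the arithmetic mean (the coefficient of variation, which gives the comparable value between fields), can be sketched as follows. The tensile-strength numbers are illustrative only, not the 1995/1998 batch data.

```python
# A minimal sketch of the statistical processes described: arithmetic
# mean, median, standard deviation, and the standard deviation expressed
# as a percentage of the mean (coefficient of variation), which is
# comparable across fields of different magnitude. Data are hypothetical.
import statistics

tensile_mpa = [430.0, 445.0, 438.0, 451.0, 442.0]  # hypothetical batch

mean = statistics.mean(tensile_mpa)
median = statistics.median(tensile_mpa)
stdev = statistics.pstdev(tensile_mpa)   # population standard deviation
cv_percent = 100 * stdev / mean          # std dev as % of the mean

print(f"mean = {mean:.1f} MPa, median = {median:.1f} MPa")
print(f"std dev = {stdev:.2f} MPa ({cv_percent:.2f}% of mean)")
```

Because the coefficient of variation is dimensionless, it allows the scatter in yield point, tensile strength and ductility measurements to be compared directly even though their absolute magnitudes differ.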
Also, the Syrian troops who had been transferred to Moesia had a ritual of saluting the rising sun every morning, which they observed even as battle ensued. This gave rise to the rumour that Mucianus had arrived to reinforce the Flavians and caused the Vitellians to waver (Greenhalgh 1975: 154), allowing Antonius to press his advantage. These factors, combined with the skill of Antonius' leadership and the effects of Caecina's defection, led to a victory that was critical for Vespasian's success in the civil wars of AD 69. Vespasian's forces had defeated the largest force of Vitellian troops in Italy, and this was to have major consequences which would lead to the fall of Vitellius. In my view the most important effect of the battle was that it left Vitellius' most important commander, Fabius Valens, short of troops and unable to launch an effective defence of Rome. The troops under his control who had left with Caecina were meant to wait for Valens to catch up with them before engaging the Flavians. However, Caecina overrode this order, since he must have realised that it would be unwise to leave a strong force with the man who was soon to become his enemy, and split the troops between Cremona and Hostilia (Tac. 2.100). Therefore, after the Flavian success at Cremona, Valens had access to neither group of soldiers: those who fought at Cremona had either been killed or captured or had joined the Flavians, whilst he was cut off from those at Hostilia. The force he was able to take command of was "too large to escape detection and too small to cut its way through" (Tac. 3.41), so in the end he decided to make his way to Gaul with a few trusted companions to try to stir up revolt against Vespasian. However, he was forced to land at the Stoechades Islands, which were in support of Vespasian, and there he was arrested. Eventually he was to be killed by the Flavians. Valens' death had a huge impact on events.
He had been the Vitellian symbol of resistance, but after his death \"both armies realized that the war was as good as over\" (Greenhalgh 1975:174). Tacitus tells us that \"the whole Roman world rallied to the winning side\" (Tac. 3.44). Vespasian had now added the west of the empire to the east in support of him, leaving only the area in Italy south of the Apennines in opposition. Even Vitellius realised that the game was up, and soon began negotiating with Vespasian through Flavius Sabinus, Vespasian's brother who had remained loyal to Vitellius throughout the conflict. However, Vitellius was unable to obtain the", "label": 1 }, { "main_document": "own section. Pessen, 'How Different from Each Other Were the Antebellum North and South?', p.1134 In bringing these ideas together then, I would argue that the historian would be misguided to stress simply the differences between the North and South in this period. Although it might seem that to explain the explosion of the Civil War there must be fundamental distinctions between the sections, to concentrate only upon these would reveal an inadequate picture of this period. Indeed there were many important differences in the society, economy and other areas of Northern and Southern life, the presence of an enslaved labour force in one being the most glaringly obvious. However, while these may have taken on a greater importance after the commencement of the war, at this point in time I would argue that the similarities between the sections are of equal importance to the historian who desires to examine this period as a unique time in its own right. Like Wiebe, I believe that the success of this period hinges on the fact that most Americans did not see a contradiction between having autonomy of sections whilst maintaining harmony as a whole. 
Wiebe, Within these decades, as I hope to have shown, there were perhaps as many important similarities, or at least links between the sections making them interdependent, as there were differences tearing them apart. Both sections underwent periods of definite economic progression while issues of inequality remained an undercurrent. Of course the end results, and even the process of transformation that the international industrial revolution brought about within the sections, were entirely different, but they were equally affected by it, and thus to deny the importance of studying similarities would be inadequate. Equally, of course, one cannot simply emphasise what was the same nationwide, a trap that perhaps Pessen falls a little too far into in his work. The fact that war was the end result of the antebellum period, although perhaps tainting the Perhaps my conclusion is not as definite as one would prefer, but I cannot fail to argue that looking at both the similarities For future historians, and this is perhaps why I have emphasised this aspect more, looking at similarities is possibly of greater importance because there is already so much historical work which stresses the differences. If we desire to understand this period of history in its own right, then comparisons between the North and South become as important to our work as contrasts, because 'for all of their distinctiveness, the Old South and North were complementary elements in an American society' Pessen, 'How Different from Each Other Were the Antebellum North and South?', p.1147
When actions continue over a long period of time, the predicates are not ordered temporally, for example It is not only the order of the sentences that shows the temporal ordering of events. If a progressive or stative verb follows a past-tense verb, we assume that the progressive was occurring before the past-tense verb. For example: We assume that the window was open before I went over to it. The order of sentences is not necessarily the temporal order. This shows that word order is not the only thing affecting temporal relations. Tense is definitely affected by context, and the context will affect the syntax used. For example, "Actor, 72, dies of heart attack" is written in the present tense, but when seen as a newspaper headline we assume that it means the event has already happened and is therefore in past time. The present tense can be used to express many different things. Berk (1999) summarises these: habitual action can be expressed with the present tense, along with states, universal truths, planned future events, commentaries, performatives and historical events. Comrie attempts to explain these uses of one tense to express a time other than its usual one, for example the present tense grammatically representing past time in narrative discourse. Comrie says: "apparent exceptions to the use of a given tense as defined by its meaning can be accounted for in terms of the interaction of the meaning of that tense with independently justifiable syntactic rules of the language in question." (2002) This implies that these differences are a matter of syntax, therefore making tense systems a matter of syntax. This all shows how the traditional view may not be as clear-cut as it first appears. As stated above, we have no future tense as such, but this does not mean we have no concept of future time. Tense and time can also be separated by looking to other languages for evidence.
Chinese has no grammatical tense system, but this does not mean that its speakers have no concept of time in their language. They have words to express past, present and future, and they understand time as well as speakers of any other language. Other languages, such as Japanese, mark tense on a different word class such as the adjective. In the Indian tribal language Potawatomi, endings expressing time can be used on nouns. These are just examples of different ways of marking tense. The fact that these languages do not mark the verb for tense does not mean that they have no tense system; their system of marking a different word in the sentence works just as well. Romance languages comply more closely with the traditional view of tense. On most verbs there are three markings for past, present and future time. Word order may change according to tense
The main body consists of processes 3 [Communication filters] and 4 [Motivation], that is, the 'buyer characteristics and decision process', where the personality of the individual is involved along with all the factors that affect and form their mind-set. Finally, the output part consists of processes 5 and 6, the 'Purchase outputs-responses', where the consumers decide on the purchase [Choice of: Product, Brand, Price, Outlet] and afterwards form an opinion about the experience [Post-purchase and post-consumption feelings]. Tourism marketing managers must have an understanding of these processes in order to work effectively. Translating the model to the case of students and the Scottish Parliament, the following suggestions are made. Starting with the input, there must be a wide range of products and activities offered to students at the Parliament, and these should be promoted in an effective way. There is a wide range of competitive products and services offered at the SP. Services and facilities offered include guided tours, free access to the public gallery on business days, an exhibition, an education centre and a café. Although it is free and assures a safe environment for children, the website of the SP advises visitors that it is not open at weekends. The SP has an attractive website where one can get information and follow the happenings in the SP. The 'This week in the Scottish Parliament' section gives the programme of the week, and the 'Visit, Learn, Interact' page describes the services and facilities offered by the SP. An attractive website is of significant importance to young people and students. This leads to the second part of the model, where communication channels are discussed. The World Wide Web is widely used by the targeted segment on an everyday basis. The website of the SP would be the most effective formal communication channel for promoting itself and its programmes and events targeting students.
It must be regularly updated and maintained in order to attract visitors and give a positive idea of what visitors can expect when visiting the SP. At the moment there is clear information about visiting the SP, and Public Gallery tickets can be booked online via e-mail, which is a good feature. On the other
p 46.", "label": 0 }, { "main_document": "effusion is an accumulation of fluid within this pleural space. The fluid may be either transudative or exudative: a transudate results from an alteration in the hydrostatic forces operating across the pleural membrane an exudate results from a change in the permeability of the membrane due to inflammation A pleural effusion will only be detected: Signs: there may be a displacement of the trachea and the lung apex away from the effusion if it is very large there is reduced movement of the affected side the site of the pleural effusion is dull to percussion; classical stony dullness is not a constant sign there is reduced vocal fremitus over a pleural effusion breath sounds are reduced or absent over a pleural effusion towards the upper part of an effusion there may be signs of consolidation i.e. bronchial breathing and bleating vocal resonance. Malignant pleural effusions indicate a late stage of lung cancer using the TNM staging strategy. Under the TNM classification, tumour staging at T3 implies a tumour of any size which is invading surrounding structures but not involving the heart, great vessels, oesophagus or vertebral body or a tumour within 2 cm of the carina but not involving the carina. T4 classification implies a tumour of any size which is invading any of the structures excluded from T3 and the presence of a malignant pleural effusion. (1) The effusion should be investigated by diagnostic aspiration and the fluid examined for protein content, cell type and bacteria. If the effusion is an exudate then a pleural biopsy is useful. If there is a large effusion then symptomatic relief can be achieved via aspiration of the effusion. A chest drain is an alternative to repeated aspiration - this must be able to drain the base of pleural effusion (1). A systematic review on the effectiveness of management for malignant pleural effusion investigated the best ways in which to carry out pleurodesis for malignant effusions. 
It concluded that bleomycin was effective in reducing recurrences and that tetracycline (doxycycline) was not superior to bleomycin (2). The study also looked at strategies such as rolling the patients after instillation of the sclerosing agent, protracted drainage of the effusion and the use of larger chest tubes, none of which were found to have any substantial advantage (2). An important issue raised in this case is breaking bad news with regard to a terminal diagnosis. There was no terminal diagnosis in this case, but it was necessary to discuss the possibility that Mr
Subduction zone disposal would provide waste disposal in the absolute sense. Nuclear fission has the potential to act as a 'stop-gap' between 'Peak Oil' and fusion. A possible timeline for the next 100 years is illustrated below:
Learning to become literate is not a static process but a dynamic progression of meaning negotiation. If children can fully participate in it as a whole being - with the whole body as well as the brain, they will learn to the maximum degree. Literacy skills should not be viewed narrowly as linguistic abilities. It is the context of situation that should be placed at the heart of literacy curricula. Oracy provides the basis for children's growth in reading and writing. Through the dramatic framework, we are able to embrace all these important factors in literacy learning. Drama embodies the words and ideas, brings the written texts off the page, and makes them happen here and now. It encourages children to listen to each other, have their own say, and try out a range of registers to communicate in the context. In drama, children imagine together, fully engaging with the text. Drama is no elixir, but so long as the teachers are willing to apply drama in their literacy teaching, they will see the difference and payoff in their pupils' development of literacy.", "label": 0 }, { "main_document": "Unfortunately there was not enough time to develop a way of marking the end of the maze and making the buggy stop when it reached the end. However, the algorithm was designed and implemented to enable the buggy to follow the maze indefinitely, eventually exploring the whole of the maze. Methods for turning 90 These were initially relatively simple but used frequently so were designed and implemented separately. However, during implementation, it was discovered that these simple methods were inappropriate. The buggy had to be manoeuvred differently to ensure the optodetector was correctly positioned at a junction. Here, w denotes a variable whose size is dictated by the size of the buggy. Before beginning the actual coding of the software, it was important to understand how the sample code worked, and to ensure the buggy was functioning correctly. 
The tester program was loaded into the SWET and executed (exact details of how this was done can be found in Appendix A). The buggy functioned correctly, so the buggytimertest.c code was compiled, loaded and executed. Watching the buggy's actions, examining the code and reading the accompanying lab sheet (see Appendix A) made it easy to see which lines did what. The next stage was deciding which method of driving the motors was best: signalling the motor drive pulse controls or using the timer. Code was copied from buggytimertest.c to develop two short programs which moved the buggy in all directions. There was a small problem at this stage: the buggy was making strange grinding noises as it moved, which had not happened when the test program was run. It was discovered that the delay between pulsing the motors was too short, so they were slipping; simply making the delay a little longer solved this. Once this was resolved, it was decided that the best way to drive the buggy was to use the timer. During testing, it was also discovered that simply pulsing the left or right motor alone was not a very good way of turning, and resulted in a very large turning circle for the buggy. However, rotating one motor clockwise and the other anticlockwise provided a much tighter turning circle: the buggy pivots about the centre of its back end, rather than about a wheel. This discovery was duly noted and the method was used throughout whenever rotation was required. Having done the above preliminary work to understand how the buggy worked, coding of the actual software could begin. The first stage of implementation was setting up the signals on the VIA. This task was relatively simple, as the sample file buggytimertest.c required very similar signals to be set up, so the project software was based on this. Global variables were also declared at the beginning of the code, along with the delay method. The main method was implemented next.
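The pivot-turning logic described above can be sketched as a small pure function. The bit names and their values here are purely illustrative assumptions, not the actual VIA bit assignments used in buggytimertest.c; the sketch only shows how counter-rotating the motors encodes a tight on-the-spot turn.

```c
#include <assert.h>

/* Hypothetical motor control bits -- the real VIA port layout in the
 * project software may differ; this is only a sketch of the logic. */
#define LEFT_FWD   0x01u
#define LEFT_REV   0x02u
#define RIGHT_FWD  0x04u
#define RIGHT_REV  0x08u

typedef enum { FORWARD, ROTATE_LEFT, ROTATE_RIGHT } move_t;

/* Return the bit pattern for the (assumed) motor port.  Rotating on
 * the spot drives the wheels in opposite senses, giving the tight
 * turning circle found during testing. */
unsigned motor_bits(move_t m)
{
    switch (m) {
    case FORWARD:      return LEFT_FWD | RIGHT_FWD;
    case ROTATE_LEFT:  return LEFT_REV | RIGHT_FWD;  /* pivot anticlockwise */
    case ROTATE_RIGHT: return LEFT_FWD | RIGHT_REV;  /* pivot clockwise */
    }
    return 0;
}
```

In the real software the returned pattern would be written to the VIA port once per timer-generated drive pulse, with the delay between pulses kept long enough to avoid the motor slipping noted above.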
This sets bits on the VIA appropriately, makes the timer generate a drive pulse for the motors, and calls move, the method which allows the buggy to follow the track. These lines
For instance, 'sustainable development' is a legal concept created by the 1972 World Commission on Environment and Development and elaborated by the 1987 Brundtland Report which followed. This commission and the report gathered lawyers, experts, governments' representatives and scientists, mainly from the North. Originally formulated in soft-law instruments, sustainable development has been reinforced in hard-law instruments like the Kyoto Protocol, which entered into force in 2005. A legal discourse is produced within a certain temporality, which corresponds to the long-term temporality of international law. Indeed, global security is a concept rooted in the rejection of a historically situated notion of security originating in the Cold War period, in response to a new understanding of threats to security as global. Environmental threats are global threats and thus concern global security - that is, the security of every political entity (the state, the individual, the community). I call them a narrative in the global security discourse not to deny their global scope and significance, but to highlight that, despite having always been global in essence (climate change and the depletion of the ozone layer are borderless threats), they have been characterized as an issue of 'global security' only in a recent historical period and context. It means that the construction of this reality does not aim at saying the truth about
One might object that these grounds are not sufficient for holding it true. Universal causality might not be a necessary feature of scientific pursuits. Compatibilism, of a form more reminiscent of Hume than of Kant, is the current widely held view in this ongoing debate. The most recent development, however, endorsed by thinkers such as Quine, is that compatibilism does not preclude determinism; in fact, determinism might become redundant in the debate. The reason for this is the development of the theory of quantum mechanics, a doctrine that supports the idea that some events are random. By following a quantum-mechanical route to freedom we may encounter far more 'outrageous' suggestions than we could possibly point at in Kant. Having considered both Hume's and Kant's positions quite thoroughly, we might want to suggest that their theories are not essentially about compatibility at all, but about compromise. Both philosophers have to bend, in a sense, the original intuitive problem of freedom. Hume offers us not a freedom without determinism, but instead a freedom that is without constraint. Kant cannot assign us causality in a universal, natural sense, but instead construes a causality of freedom, in which he holds all the strings. Despite the popular opinion of what we want from freedom, and the freedom offered to us by each philosopher, the reader might surprisingly feel soothed by Hume's account, and perturbed by Kant's.
Some systems responded with heavy damping, while the underdamped system showed comparatively little. It was shown that the systems with the best response were those which were critically damped. It was discovered that the critically damped systems had controllers within them to regulate the system. The control and analysis of control systems has existed since the late 18th century, an area that started with the invention of the fly-ball governor by James Watt. Since then, many different examples of control systems have been designed, each with different properties. The systems themselves may be mechanical, hydraulic, electronic, pneumatic, etc., and exist virtually everywhere in society. Even animals and plants have types of control systems within themselves.[1] Mammals regulate body temperature through homeostasis, and plants regulate the conversion of carbon dioxide into oxygen via photosynthesis. However, the basis of each of these separate systems remains the same: to provide a form of regulation via the feedback of information. The information exists as a range of different types of quantities, for instance a force in a mechanical system or a voltage in an electronic system.[2] The control system has three basic parts to it, if it is modelled as a black box: the input or stimulus; the process (also denoted the plant), which is the black box in the model and uses the stimulus to create a suitable output; and the response. In reality the control system is more advanced than this, containing hidden sub-processes and outputs in the overall system, which then help to achieve the desired output of the system via feedback and regulation. Control has many uses, from basic experiments such as small electronic circuits and mass-spring systems to the design of much larger, grander systems such as solar tracking units, which consist in turn of many smaller sub-systems.
Control is only easy if the engineer has a firm background and a current understanding of how to put that knowledge to good use in order to analyse systems similar to the ones in this document. These systems will be explored and analysed within this document using numerous mathematical techniques for the theoretical responses, and the MATLAB software for the simulated responses. To do this, the transfer function of each system will be deduced analytically and also simulated within the MATLAB environment, for a range of different systems, covering additional features such as PID and lead-lag control. Both responses will be plotted on a step-response graph, and these should line up over
Additionally, the archaeological ruins are controlled and maintained by the National Institute of Culture (from the Education Sector). This makes it very difficult to regulate the fishing activities and causes conflict between the different competent sectors involved (ParksWatch, 2005). More than 30 institutions, including government entities (from at least 6 different sectors, as well as local government at the district, province and regional level), private institutions and NGOs, share roles in the running of the reserve and its buffer zone, making management ineffective and inefficient. The legal framework is not clear and does not help to establish an understandable hierarchy of authorities (ParksWatch, 2005; INRENA, 2005; ERM Peru, 2002). Although many efforts are in place aimed at protecting either the PNR or just Paracas Bay, there does not seem to be a reduction of the problems presented. The main ones are: Natural Reserve's Master Plan 2003-2007: designed by the reserve's administration with the participation of local institutions and stakeholders, the PNR Master Plan is the main management guideline for the reserve for the next few years. It comprises strategies for resource conservation, public use control and management support, and a detailed definition of different management zones (INRENA, 2002). Its main limitations are the insufficient personnel, logistics and budget assigned by INRENA to the reserve. Although every visitor has to pay an entrance fee, this income is managed centrally by INRENA and distributed across the whole national protected areas system as part of the state's budget, so that Paracas itself does not receive a significant part of the resources it generates (ParksWatch, 2005). Buffer zone creation: the PNR's buffer zone was established as recently as 2001 (INRENA and SPDA, 2002), covering an area where many impacting activities were already in place (see Figure 1). Thus, its objectives cannot be properly achieved.
PAMAs for fishing industries: as part of the tightening of environmental regulations in the country, the Fisheries Vice-Ministry requires that all existing industries not having an EIA (because one was not required previously) should design and
Even so, it could cause slow corrosion, which one has to be aware of. Therefore, the selected material has to be strong enough in terms of corrosion resistance. There are a few limitations on this selection for spacers. The basic requirements are as follows: Bulk mechanical property - elastic moduli: the material must have a very high Young's modulus to avoid buckling, as proven previously. Electrical property - electrical conductivity: the material should be a bad electrical conductor. Bulk mechanical property - thermal distortion: the material's thermal expansion should be as low as possible. Chemical property - corrosion resistance: the material must have very high corrosion resistance to water and organic solvents, and high resistance to acid. Stage 1: material passing this stage must have a very high Young's modulus (set above 80 GPa, as this sufficiently narrows the candidate materials) and very high resistivity (minimum limit set to 1e12, as this is the typical resistivity of an electrical insulator). Stage 2: material passing this stage must have low thermal distortion (minimum set to 1e-6/K, as this removes a suitable number of materials) and at least very good resistance to acid. Stage 3: material passing this stage must have very good resistance to water and organic solvents. After the four stages, only 4 materials remain. They are Boron They all have high Spacers have to deal with periodically changing pressure. Material of low
Even if a conflict does not qualify as an international armed conflict, it may be a non-international armed conflict, covered by Article 3 common to the four Geneva Conventions, which prohibits 'the passing of sentences and the carrying out of executions without previous judgment pronounced by a regularly constituted court, affording all the judicial guarantees'. But it has to be an armed conflict as well. 'Terrorist actions by private groups have not customarily been viewed as creating armed conflicts' M. Sassoli, Op. Cit. p. 101 The problem of this detention camp is that the detainees belong to different legal categories (the Taliban are prisoners of war, Al Qaeda members are unprivileged combatants) but are all treated as 'unlawful combatants', a category invented by the US administration. The category of 'unlawful combatants' answers the question of who qualifies for the status of prisoner of war. This category does not exist in IHL. The US administration admitted having invented this category because the terrorists did not fit in any of the categories of IHL (neither civilians nor combatants). But it seems that it is the US administration which does not want them to fit in any category. Indeed, the Geneva Conventions are clear on the issue of uncertainty: when doubts exist about a person's status, she should be qualified as a prisoner of war and benefit from all the protections due, until more clarification is made. The US is openly breaching the basic rules of IHL and International Human Rights Law. It is a sign that terrorism represents a black hole in IHL, since these breaches cannot be repaired. Another approach can be considered. Fletcher, in 'The indefinable concept of terrorism' These two categories fall under different legal regimes. A general definition of terrorism cannot be drawn from its variables, because each specific case does not bear them all in the same way.
From an International Criminal Law (ICL) perspective, the function of defining terrorism is to legitimate a new form of military violence by the state: 'the targeted assassination' Indeed, a crime gives a government the obligation to arrest the suspects and bring them to trial without using deadly force. In contrast, in a situation of armed conflict, ruled by the laws of war, the government is entitled to use its military force against that of the enemy. The difficulty is to classify a serious act of violence as a crime or as an act of war. Terrorists are neither criminals nor combatants. Guantanamo relies on a model of warfare without having recognized the status of prisoner of war for its detainees. The creation of a new regime of law covering terrorists might have negative consequences for civil and political rights by increasing the prerogatives of", "label": 0 }, { "main_document": "to understand the degree to which third world women are empowered. According to Molyneux (1985 cited in Barrig, 1989: 132), in order for organisations to be successful they need to recognize and move towards resolving the problems created by strategic and practical gender. These two burdens are central to the lives of women. She defines strategic gender as the 'base of women's subjection', which can be broken down into three core parts: 'the sexual division of labour; sexual violence and the control of reproduction'. In addition to this, they face the struggles of practical gender, which she considers to be 'experiences which are affected by class'. Moser (1993) believes that this framework is important but unfortunately often bewilders planners, in that they fail to appreciate how the complexities of these two strands impact women in very different ways. 
In fact as Gianotten Given that the theme of this paper focuses on third world women's perceptions of empowerment, and not western perceptions of what empowerment should be, it is important to address that whilst Molyneux's (1985 cited in Barrig, 1989: 132) interpretation of gender is significant, the concepts she uses must be adapted to the collective in question. A way of explaining this further could be to take, for example, women in a small Indian village who wish to empower themselves. It is essential that NGOs understand that this notion of empowerment is inextricably woven into these women's notions of self. It is almost like a village-specific version of empowerment. Thus applying a universal characterization would not further the cause. True empowerment is a result of their very specific circumstances and experiences. Consequently, NGOs must allow third world women to define for themselves what they believe strategic and practical gender to be. The importance of this should not be underestimated. In recent years participatory and community-driven development has seemed to be at the forefront of NGO planning. Schemes have been set up which allege "full participation" and "true empowerment" from the ground up. However, more often than not, they have failed to live up to the hype, with many turning out to be driven by male gendered interest, leaving 'the least powerful without voice or much in the way of choice' (Cornwall, 2003: 1325). Bosch (1998) puts forward that even projects which have been set up with the best of intentions will run into problems if, at the planning stage, facilitators fail to take into account the situations of the women that they are trying to empower. Simple factors, such as meeting times that are not convenient for women, will impact upon the success of any campaign. 
An apt example of this is the Educacion y Trabajo programme (Education and Work), set up by the women's NGO Centro de Investigacion y Desarrollo de la Educacion (CIDE) in Chile. This aimed to help train unskilled women and to assist their entrance into the labour market. This was achieved through personal development sessions combined with vocational training. However, whilst in the beginning women's enrolment increased, these rates began to drop", "label": 1 }, { "main_document": "adversary, rather than give a definition. The 1949 Geneva Conventions do not discuss children in the context of combatants. While the Geneva Convention IV makes many references to children, it does so with a civilian status in mind, and while the first three Geneva Conventions focus on combatants, they do not broach the issue of the possibility of children as combatants. The 1977 Additional Protocols on the other hand do include provisions on the recruitment and use of children during armed conflict. Article 77(2) of Protocol I states that parties to the conflict "shall take all feasible measures in order that children who have not attained the age of fifteen years do not take a direct part in hostilities and, in particular, they shall refrain from recruiting them into their armed forces." In a similar vein, Article 4(c) of Protocol II states that children under the age of 15 "shall neither be recruited in the armed forces or groups nor allowed to take part in hostilities." While both Protocols establish 15 as the minimum age for recruitment, Protocol I has an additional statement that when recruiting between the ages of 15 and 18, priority should be given to the oldest. Since Protocol I expressly states where priority should be given, it is unclear whether this means that priorities do not need to be made in internal armed conflicts. Art.77(2) Protocol I. 
The second difference is that Protocol I makes reference to "direct" involvement in armed conflict, whereas Protocol II makes no such reference. This seems to imply that indirect involvement would be permissible in international armed conflicts. That there is a distinction between direct and indirect involvement is supported by the fact that the ICRC was opposed to the insertion of 'direct' in Protocol I, although this went unheeded. Van Bueren (1994), 815. There is also an ambiguity about the definition of recruitment and whether this encompasses both compulsory and voluntary enrollment. The ICRC had proposed during the drafting of Article 77(2) of Protocol I that States should refrain from accepting voluntary enrollment of children under 15, but this did not make it into the final text. Due to the fact that Geneva Convention IV See Art.51. Van Bueren, G, (1994), 813-814. Van Bueren also points out that during the second reading of the CRC, recruitment and voluntary enrollment were viewed as two separate categories by Algeria, who argued that children who wished to enroll voluntarily in armed forces, particularly for national liberation wars, should not be dissuaded by States even with a minimum age for recruitment, 814. Article 38 (1) of the CRC reaffirms the rules of international humanitarian law by calling upon state parties to respect these rules relevant to the child. It also calls upon states to "take all feasible measures to ensure that persons who have not attained the age of fifteen years do not take a direct part in hostilities" If recruits are between the ages of 15 and 18, priority should be given to the oldest Art.38(2). As in Protocol I, the word 'direct' is used.", "label": 0 }, { "main_document": "The essay introduces the proposals of the Serial Endosymbiosis Theory (SET), and discusses the evidence supporting the theory. 
Other theories opposing SET, along with supporting evidence, are also briefly discussed in order to argue a balanced case for endosymbiosis. The theory of endosymbiosis was developed principally by the American biologist Lynn Margulis in the 1960s. At its simplest level, the theory suggests that the modern eukaryotic cell evolved from symbiotic associations with prokaryotic ancestors. Free-living bacteria and photosynthetic cyanobacteria became incorporated inside larger nucleated prokaryotic cells, where they developed into the forerunners of the mitochondria and chloroplasts seen in modern eukaryotes. Margulis postulates that these events have occurred on several occasions, producing various lineages of both heterotrophic and phototrophic organisms, from which ancestors of animals, plants and fungi have evolved. Evidence for the theory is relatively strong, particularly the finding that mitochondria and chloroplasts have circular DNA similar in form to bacterial DNA, and that they contain prokaryotic-type ribosomes. The double membrane and mitochondrion-specific transcription and translation machinery all point to this conclusion. Looking at the phylogenetic tree of life (Figure 1), it is widely recognised that from the root it split in two separate directions: Bacteria in one direction, and what would eventually diverge into the separate domains known as Archaea and Eukarya in the other. The exact time of origin of eukaryotes is not pinpointed in the fossil record. Prokaryotes were definitely first seen 3.5 x 10^9 years ago. The size of cells in microfossils remained constant from their earliest appearance until about 1.6 x 10^9 years ago. After this point, the size of some of the cells began to increase (1.4-1.2 x 10^9 years ago). This is interpreted by some as the approximate time protoeukaryotes or eukaryotes first developed. 
The evolutionary origins of eukaryotes can be grouped into two main categories of theories: autogenous theories and symbiotic theories. In autogenous theories it is suggested that all structures and functions of eukaryotes evolved gradually from a single stock of prokaryotes. One common feature in this type of theory is the proposed infolding of regions of the cell membrane forming internal vesicles, which subsequently evolved into the various organelles. In symbiotic theories it is thought that certain eukaryotic organelles evolved from prokaryotic organisms, which entered into symbiosis with an ancestor of eukaryotic cells, "the protoeukaryote". The theory is known as the endosymbiotic theory, literally meaning "inside symbiont" or "internal symbiont". The theory suggests that a stable residence was established by aerobic bacteria inside the cytoplasm of a primitive eukaryotic-like cell, providing the cell with energy in return for a protected environment and an easily obtainable source of nutrients. This symbiotic relationship created what was to be the forerunner of the mitochondrion in the modern eukaryotic cell. Similarly a primitive eukaryote would have gained photosynthetic properties after the endosymbiotic uptake of an oxygen-producing phototroph, the forerunner of the modern chloroplast. Evidence suggests that mitochondria probably arose from a major group of Bacteria called the Proteobacteria, specifically from relatives like the Agrobacterium, Rhizobium and the rickettsias. Like", "label": 1 }, { "main_document": "argues that the various social movements in the Third World challenge international law in two ways. First, they have their own conceptions of modernity and development that conflict with those of international law. It is a new way of thinking about law: how to write resistance into international law? They use the language of human rights as the legal tool and 'discourse' of resistance. 
It is at the same time part of the dominant legal discourse, but also the strongest asset social movements have in order to achieve their resistance. They propose alternative human rights, different from the ones on which the state focuses, but without conflicting with the state. While the mainstream approach is characterized by a dualist position, either statist or anti-statist, social movements go beyond this duality. Social movements move international law away from its institutionalist and formalist realm and propose new definitions of democracy. The neoliberal property rights regime is being rejected through the re-appropriation of resources at the local level and through a redefinition of the concept of property which, instead of being exclusive, becomes inclusive and shared. Second, the global international system is not appropriate and is being transformed at the local level. A social movement perspective brings theoretical challenges to international law. Social movements are identity-based. There is not one definition of social movements, since they encompass a plural reality, various actors with various motives and methods of resistance based on different 'political cultures'. This inherent plurality is precisely the challenge to the reductive, universalist and rationalist character of international law. For example, the feminists have shown that the distinction between private and public in liberal theory was not relevant. Ibid. 247-8 Finally, instead of the unified political space produced by liberal internationalism, social movements propose 'cultural politics' where identities are related to 'survival strategies' Ibid. 243 What does this new definition of the political mean for international law? Social movements 'reveal the limitations of a Kantian liberal world order based primarily on individual autonomy and rights, and a realist world order based primarily on state sovereignty' Ibid. 
245 The World Social Forum is the expression of a counter-hegemonic globalization within which a subaltern cosmopolitan legality, which will change legal perspectives, is being forged. According to the Porto Alegre Charter of Principles, the World Social Forum cannot organize specific collective actions in its own name, in order to preserve its comprehensiveness and inclusiveness. It is not an event, nor a conference, nor a party, nor an NGO, nor a social movement. It has no ideology. Its structure does not correspond to any modern model of political organization. It is 'a forum that facilitates the decisions made by the movements and organizations that take part in it' Santos (2005: 45-6) To borrow Hardt and Negri's framework, the World Social Forum is a place where the 'singularity' of each organization and movement, its uniqueness and the 'commonality' Indeed, within the World Social Forum, social movements do not have to be identical to work together. Differences are respected and are not a basis for exclusion. Hardt and Negri (2005", "label": 0 }, { "main_document": "The trip involved a visit to an integrated farm. Sustainability of the whole system was the main aim of the farm, as in the organic farm, but with a different approach. The farm is characterized as sustainable as it employs techniques that increase biodiversity and at the same time minimize the environmental impact. The farm is integrated, having the same aims as the organic farm. It splits over two sides, with the main crops being cultivated being cauliflower, coriander, sweet-williams, swedes, kohl rabi, spinach and onions. The farm is on reclaimed gravel mining land. All the land has gone from gravel peat to this since 1978. The cultivation method in this farm is based on bed systems, prepared in the late autumn. They are stale seedbeds, allowing weeds to germinate so they can be destroyed before the crop is planted. What is really innovative in this farm is the use of G.P.S. 
The whole farm has been marked out in 25m blocks. Fertilization as well as spraying take place separately on each of these 25m blocks. The tractors are linked up to this satellite system, making the whole system intelligent and innovative at the same time. With respect to nature, on the farm there is a 20-year-old pond which acts as a wildlife habitat. Apart from the pond, at the end of the field there are ditches for the accumulation of drainage water. They are kept as clear as possible by spraying. In addition, the water that accumulates in the ditches goes back to the pond. Finally, hedges surround the field margins. They consist of native deciduous species, such as elm or blackthorn, acting as wildlife corridor and windbreak simultaneously. In the field there are marked nests, to avoid their being sprayed and also to increase biodiversity. The turnover is about 8,000-9,000 Finally, the integrated farm is based on advice from English Nature and other conservation bodies. There is no doubt that weeds are a huge problem that any farmer has to deal with. In this farm weed control is achieved in an effective way, taking nature into account and promoting sustainability. The use of stale seedbeds, allowing weeds to germinate and destroying them before the crop is planted, is a successful technique. Weeds are destroyed using chemicals, such as Roundup, rather than mechanical destruction, applying them in the right amounts. This technique reduces the use of pesticides to a minimum and at the same time promotes sustainability by reducing the environmental impact. Fertilization is one of the most vital factors leading to successful production. Compared to the organic farm, the whole system of fertilization in the integrated farm is by far stricter. 
Marking the field in 25m blocks means, in consequence, that fertilizers are applied separately to each block. In addition, they are applied in accordance with what each block requires, eliminating leaching of nutrients and environmental impact. It is not only fertilizers that are applied on this 25m block basis. Management of the hedges is an essential part of a sustainable system. In the integrated farm the hedges are trimmed every", "label": 0 }, { "main_document": "and West Germany's Red Army Faction (RAF) were still quite active. Nevertheless the elimination of physical barriers has led to an improvement in the movement of goods and labour. Customs formalities were simplified initially and then abolished along with border controls by 1 January 1993. In response to the concern about major crime in the EU a system of frontier-free police and criminal justice cooperation was created. Europol, the European police force, is part of that response. So is the Schengen Information System whereby national police exchange information on wanted or suspected wrongdoers. The elimination of technical frontiers basically means breaking down the barriers of technical regulations or standards on the factors of production, either by harmonisation or mutual recognition. Most of these regulations were based on different safety, health, and environment standards. Goods were prevented from moving freely due to the differences in these standards. The lack of mobility of labour and persons was due to the differences in, for example, immigration policies as well as pension schemes. With regard to the movement of capital, this means removing exchange controls and any other restrictions. The European Parliament has pointed out that capital liberalisation should be backed up by full liberalisation of financial services in order to create a unified European financial market. This should encourage economic progress by enabling capital to be invested efficiently. 
An integrated capital market would also reduce the cost of equity, bond and bank finance and lead to a rise in Europe-wide GDP growth by 1.1 per cent. The idea was to create more competition in the financial sectors, i.e. banks, insurance, and securities, thus allowing a greater variety of investment products for consumers to choose from. As for other types of services, the differences in the recognition of professional qualifications among member states limit their free movement. In eliminating technical frontiers, there was the issue that member states were forced to lower their standards to those which prevailed in others. This was argued in the 1987 case about Germany's import of beers, hence producing a potential conflict between consumers' interest and the drive to remove trade barriers. McGriffen, S.P., "The European Union. A Critical Guide", Pluto Press, 2001, p.70. The removal of technical barriers has been an immense achievement for the movement of goods, but to a lesser degree for labour. The Commission however is currently focusing on the services sector, as this sector is seen to be the least progressive. Their efforts include more deregulation in certain areas, for example to ease price-fixing by professional associations. A free services market should enable service providers to realise economies of scale more efficiently. The Commission proposed VAT approximation among the member states as one of the attempts to remove fiscal barriers. Member states had varying rates of VAT, between 12% and 22%, in the 1980s. Since border controls were to be abolished, it was essential to have only small differences in VAT levels so as to make fraud pointless. 
However in Britain the approximation would mean the end of their VAT zero-rating of basic goods such as food and", "label": 0 }, { "main_document": "is These rights are, specifically, the individual rights to life and liberty, and Walzer claims that the rights of a state to political sovereignty and territorial integrity stem directly from, and are analogous to, these individual rights. Using an unexplained notion of contract he further defines a political community as possessing a "common life" It is in order to protect this "common life" that states possess the right to mass, violent self-defence. The implication here is to directly equate territorial integrity with the individual's right to life, and political sovereignty This comparison between state and individual allows Walzer (1992:58) to envisage the "international society" referred to in Chapter 1, and compare it to a society of individuals. If we accept the individual right to self-defence, and if political communities are merely the result of the collectivisation of individual rights, then it logically follows that political communities possess the right to self-defence. It is clear what the anti-war pacifist must do here. The individual right to self-defence is accepted, and so it must be shown that political communities are not the same kind of entity as an individual and therefore cannot possess analogous rights. John Rawls, whose just war theory is in many ways similar to Walzer's, uses the term "common sympathies" (Rawls 1999:24) Namely the "monopoly of the legitimate use of physical force" (Weber 1979:901-2) inside one's territory If, as Walzer claims, there is a direct analogy between the right of an individual to life and liberty and the right of a political community to territorial integrity and political sovereignty, then the analogy must hold firm in the case of unjust aggression by one state against another. 
We have agreed that an individual does possess a right to self-defence and that in certain circumstances, notably that of an imminent, potentially lethal unjust assault, it is permissible for the defender to override the attacker's right to life (and by extension, liberty) and kill him. It is not clear that the rights of a political community can be so overridden. In fact, if we carry the direct analogy through, it would appear that the rights to territorial integrity and political sovereignty would justify a state which had been threatened with invasion in overriding the rights of the aggressor state, i.e. invading the aggressor in retaliation, annexing its land and subjugating its citizens This may or may not be a valid response to unjust aggression, but it does not in any way logically follow from the individual's right to kill in self-defence, a right which has been shown to stem from necessity, and not any notion of justice or punishment It may be argued that this is exactly what happened to East Germany, and to a lesser extent, Japan, after the Second World War. However, this does not demonstrate any right held by the Allied forces, merely the prerogative of the victor. It does not seem as if the limitations placed upon these states in any way resemble the desperate self-defence of an individual whose life is at stake. The analogy again falls short if", "label": 1 }, { "main_document": "The autoregressive exogenous (ARX) model is mainly used for prediction and control purposes. This assignment will show how to implement recursive least squares in MATLAB to solve the problem. At the beginning, the algorithm is applied to perform recursive least squares identification. Subsequently, a mechanical "master" robot of a master/slave tele-manipulator is identified using some provided real data. Finally, the RLS algorithm is extended by introducing instrumental variables. 
The ARX model is first used to identify a system given input and output data and the number of numerator and denominator coefficients. At the beginning of the algorithm, the variables Na, Nb, LN and the true values of A and B are initialized. The function "filter()" is used to calculate the true response of the system, while step and random signals are generated as inputs. Theta and P(0) are also initialized, and then the recursive least squares (RLS) updates (5), (6), (7), (9) are performed for each output point to estimate the parameters. Finally, theta is printed out and the estimated parameters are plotted for comparison with the true ones. Furthermore, noise is added to the output for comparison with the noiseless case. In the XiaA.m file, the model parameters are defined by: The estimated parameter values are produced from a set of inputs and outputs by the file, and figure 1 illustrates that the four parameters promptly converge to their true values. From figure 1, it can be seen that the final estimated parameter value is So, the convergence of the estimated parameters towards the true values is very good, as early as 5 recursions of the algorithm. In the current section, a random input is introduced into the recursive least squares algorithm in the same manner as the step input. Subsequently, the comparison between step and random inputs is easy to inspect. Since the same algorithm and model are used, any difference in the convergence evolution is caused purely by the different type of input. The algorithm is fed 200 random input pairs, and then the estimated parameters are obtained. Using the same approach, the true model parameters are defined by From figure 2, the estimated parameters are displayed as Figure 2 shows that the estimated parameters are almost exactly the same as their true values after approximately 5 recursions of the algorithm. The estimated parameters for the random input are the same as for the step input. 
Random values are added to the output data to check the effect of noise on the recursive least squares. The noisy output is generated thus: Y = y + random number. Figure 3 shows the large effect noise has on the prediction of the true system parameters; this is because the system becomes more complex and unpredictable when noise is introduced, and the algorithm attempts to reach parameters that would reproduce the same noisy output. The algorithm produces the following estimated parameters However, the true parameters are It is easy to discern that the estimated parameter values", "label": 0 }, { "main_document": "48 Simmental x Holstein-Friesians (mean bodyweight 424.2kg) were fed a mixture of grass and maize silages. Crude protein was calculated, and compound feed added to provide all steers with isonitrogenous diets. Once the liveweight of 560kg was reached the animals were slaughtered and compared against the pre-treatment slaughter group of 8 steers. Using ANOVA, there was a significant increase in dry matter intake and metabolisable energy with diets based on maize silage (P<0.001). The number of days on trial decreased with the use of maize silage, by approximately 6 weeks (P<0.001). There was also an increase in daily carcass gain (P<0.001) and killing-out percentage (P=0.05). There was no significant difference in the daily gains of meat or bone tissue between the diets. Fat deposition per day increased with maize silage (P<0.001), with the colour becoming less yellow, and more of a whitish-cream (P=0.004). There was no change in meat colour or pH change with the diet changes. This experiment shows that using maize silage as the main silage feed produces a higher carcass weight in a shorter time period, with more appealing visual characteristics of the beef product. When comparing grass silage with maize silage, maize can have a benefit over grass when fed as forage to beef cattle. 
One example is that as maize matures, the digestibility remains relatively constant at around 70% as the grain develops, whereas grass, on the other hand, loses its digestibility as the grain develops (Jones, 2001). Another example is that the British weather currently has sunnier, drier summer periods, which can cause grass to burn and wilt in the hot weather. Maize tends to cope better in this heat due to the plant's origin in Mesoamerica. The use of maize instead of grass can cause different rates of gain in cattle due to the different nutritional values of the types of silage. Figure 1 shows the different chemical make-up of the two types of silage on a percentage basis. The column chart shows that grass contains a higher proportion of neutral detergent fibre compared to maize. This is due to the higher cellulose, hemicellulose and lignin composition in grass silage. Grass also contains a lower, or no, starch content compared to maize silage, which has a high proportion of starch. The lower NDF and higher starch lead to higher intakes of dry matter when maize is provided as the main source of fodder. The high dry matter also ensures that there is a rapid drop in pH when put into silage, aiding the fermentation processes. There is, however, a lower proportion of crude protein in maize silage. Therefore, a higher level of concentrate is needed to supply the same amount of crude protein that is present in grass silage. When contemplating the change to a different feed source, it is best to ensure that the composition and appearance of the final beef carcass will not deter the consumer from purchasing the product, which would cause a reduction in the carcass price. A survey carried out in 2004 asked consumers to identify and rate each of the", "label": 1 }, { "main_document": "fund the treatment' [53]. 
Such a jurisprudence would contribute to improved decision-making in the NHS; increased transparency, which guards against narrow financial motives; and greater public understanding of the tragic choices that must be made [54]. Moreover, it may eventually pave the way to a measure of substantive review. Unlike in questions of access to healthcare, where courts recognise the relevance of considering resources but decline to actively adjudicate on it, when it comes to setting standards of care in negligence actions, they deny that resources are even an issue [55]. It is understandable why the courts take this approach. The public would no doubt find it repugnant if the courts explicitly allowed shortages of resources to excuse a hospital causing damage to a patient. Moreover, slippery slope objections can be raised [56]. Although there is no universal fundamental right to healthcare of any standard, we have a system in which the Secretary of State undertakes to provide a 'comprehensive' service to all [57] and national guidelines are in place to ensure uniform standards for all [58]. Yet, how realistic is this current jurisprudence, which puts pressure on the NHS to meet a standard in terms of rights and responsibilities despite its defence on resource grounds? The consequence is that the only option for hospitals and doctors is to turn patients away altogether rather than giving some care, albeit of a lower standard. Is this the best way? While recognising the political difficulties of so doing, Newdick calls for greater candour about the standards of care which can be provided within the NHS [59]. Recent media debate on Accident & Emergency care is right to criticise government targets which distort clinical priorities, providing financial motivation for hospitals to see the maximum number of patients within the shortest time [60]; however, this is not necessarily an unprincipled approach. 
Why is it assumed that sending one patient home within 4 hours, and therefore giving him/her a standard of care which may fall below the ideal, is not justified in order to provide prompt treatment to others? If we recognise that rights of access to healthcare are subject to qualification, why not also recognise that rights to a particular standard of care may be similarly qualified? Newdick suggests a variable standard of care, set by what ought to be expected from reasonable doctors or hospitals in the circumstances, given the available resources, but, crucially, subject to a minimum obligation [61]. These arguments may be more potent in the U.S., where healthcare is paid for, as comparisons can be made with a commercial context where a buyer accepts greater risks in return for a lower price [62]. Yet it could feasibly apply in the UK. The There have been signs of support for this position in the English courts [65], which might be revisited in the future as part of a different strategy to meet scarcity of resources by providing maximum access to healthcare rather than treating fewer patients at an optimum standard. Equally controversial is the proposition that resources should be taken into account when deciding on", "label": 1 }, { "main_document": "make a claim against George, he has to prove that George was driving negligently at that time. Negligent driving can then be determined by what an average reasonable driver would do on the road. George may not be in breach of that duty if the accident was due to other factors such as a sudden puncture of the tanker's tyre. 
If the breach of duty is proven, Percy would be able to recover the cost of his spoilt crops and also claim the proceeds which would have been made from the damaged crops, but not the loss arising from his inability to plant and sell further fields of crops. However, the Court may also consider the argument that Percy suffered economic loss consequential on property damage, and it is possible for him to recover such losses. Unfortunately, other factors such as the weather could also be the cause of his inability to make future profit, but this would be too remote a damage. It is also important to note here that George is an employee and hence his employer could be vicariously liable for any of his negligent actions. Percy could sue George, George's employer or both if it is proven that George acted negligently during the course of his employment, in this case, negligently in driving. Thus, it is important for Percy to firstly prove that George was negligent. Lord Atkin: ".... Who then in law is my neighbour? The answer seems to be persons who are so closely and directly affected by my act that I ought reasonably to have them in contemplation as being so affected when I am directing my mind to the acts or omissions which are called in question." Spartan Steel & Alloys Ltd v Martin & Co. The Court held: the claimants could recover for the damage to the metal in the furnace, as this was physical damage to property, and also for the loss of profit on the sale of that melt because, although it was economic loss, it was caused by the property damage and so was consequential loss. They could not, however, recover for the loss of profit caused by the power cut because this was not directly consequential upon any damage done and so was purely economic loss. Before deciding to settle a case in the Court, it is essential that each party consider all the costs involved.
The Woolf Reforms 1999 suggested that every party should opt for early settlement or settlement out of court. This saves time, money and energy.", "label": 0 }, { "main_document": "hexagon method because in most situations another hexagon would be drawn adjacent to the previous one. It would be more efficient to perform the translation and then remove it if not necessary than to check whether a translation is necessary every time. The new method made it possible to continually call the hexagon method to display hexagons in a straight line one after the other. The first problem related to the initial positioning of the drawing axis. It was simple enough to move it down to the next line using a simple translation of position in the Y direction. However to the move the drawing axis back to the start of the line was a bit more difficult because the program had to know how far the axis had moved in the first place. Also the program needed to know which lines had to be offset by a half a pixel. To deal with translation in the X direction the program would need to know how many hexagons had been drawn on a line to make sure it moved back the right amount. This could be achieved by recording the number of hexagons drawn. However the number of hexagons to be drawn will be fixed before any hexagons are drawn - either a number entered by a user or the pixel width of an image. Therefore this part of the problem is easily solvable. However when dealing with the half pixel offset there were a couple of methods that could have been used. The first method would have been to draw every other line. Then the program would translate the drawing axis back to the top and draw the lines in between but offset by half a pixel. The simple Y translation could then be employed to draw in the remaining lines. The benefit of this method is that the offset only needs to be calculated once. The difficulty is that more complicated calculations are required to make sure complete tiling is obtained. 
The other option was to draw every line sequentially and continually offset every other line. The benefit of using this method is that it follows a more sequential pattern making it easier to follow the code later when more difficult problems emerged. The disadvantage with using this method is that it could be deemed inefficient to continually check whether the offset was needed. It was decided to use the method to draw every line sequentially. This was because it would be easier to link this method with the image processing later on in the project. An algorithm was developed which ensured a correct hexagonal tiling. A check was made to detect whether the line had been offset - a return of 1 would mean the line had been offset, 0 would mean it had not been offset. If a line needed to be offset, the program would detect that the offset check was 0; the line would then be offset and the offset check set to 1. If the line was already offset then the program would return the", "label": 1 }, { "main_document": "mainland standard of spoken Mandarin, is the national language in all of China, which now again includes Hong Kong.' Indeed, after 1997 the strength of English in education was weakened when it was decided in 1998 that secondary education should be conducted in Cantonese. However due to public demand 100 schools were permitted to continue using English as the medium of instruction. Yet even this did not subside the public outcry which followed as Boyle (1998: 36) recalls: But was this a response conditioned by imperialism? Phillipson (1992) argues that it was, citing the influence of the colonisers which was maintained even after their physical presence had gone: 'The ideal way to make people do what you want is of course to make them want it themselves, and to make them believe that it is good for them' (Phillipson 1992: 286). 
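The sequential offset-toggling scheme described above can be sketched in Python. This is a minimal illustration under assumed names (`tile_hexagons`, `HEX_WIDTH`, `ROW_HEIGHT` are not the project's actual identifiers or values):

```python
# Minimal sketch of the sequential hexagon-tiling logic described above.
# HEX_WIDTH and ROW_HEIGHT are illustrative spacing constants, not values
# taken from the project.

HEX_WIDTH = 10.0    # horizontal distance between hexagon centres
ROW_HEIGHT = 8.66   # vertical distance between rows of centres

def tile_hexagons(rows, cols):
    """Return the centre point of every hexagon in a rows x cols tiling.

    An offset flag records whether the current line is shifted: 0 means
    not offset, 1 means offset by half a hexagon width, mirroring the
    check-and-toggle scheme described in the text.
    """
    centres = []
    offset = 0                          # 0 = line not offset, 1 = offset
    for row in range(rows):
        x_shift = HEX_WIDTH / 2 if offset else 0.0
        for col in range(cols):
            centres.append((col * HEX_WIDTH + x_shift, row * ROW_HEIGHT))
        offset = 1 - offset             # toggle the flag for the next line
    return centres
```

Drawing every line in sequence and toggling the flag costs one check per line, which matches the trade-off discussed: slightly less efficient than computing all offsets up front, but easier to follow and to connect to later image-processing code.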
It is possible to see the truth in Phillipson's argument as in Hong Kong, English is and has historically been viewed as desirable; 'a value added commodity' (Li 2001: 24). However, whether this opinion is justified or whether it is a tool to perpetuate neo-colonialism is unclear and difficult to clarify. For example, Brutt-Griffler (2002: 50) points to the genuine importance and economic benefit of English speakers in commercial colonies such as Hong Kong, especially with a growing service industry that is dependent on an international lingua franca such as English. Conversely, though, it would be naïve to deny the disadvantages that English created. For instance, a typical example is the language policy which benefited English speakers, putting those who were not proficient in English at a distinct social and economic disadvantage. Ultimately, though, I agree with Li (2001: 24) who concluded that: The introduction and development of any foreign language (in this case English) in any country and for whatever reason will inevitably have some impact on the existing indigenous languages, and Hong Kong of course is no exception, as I have already highlighted. However, unlike many other British colonies (such as those in Africa), there is little evidence to suggest that the introduction of English has been at the cost of other languages. For instance, Li (2001: 26) states that 'one of the clearest indicators of English linguistic imperialism in former British or American colonies is that the vitality or existence of local languages is under threat.' Yet Cantonese is in a period of continual growth (Pennington 1998); as the native tongue of Hong Kong it is the dominant language at home, as Bolton (2002: 11) describes: 'in personal domains such as family, friends, social activities (...) the use of English is superseded by Cantonese.'
It is also the predominant medium of instruction in most schools (in spite of parents' demands), as well as having a growing influence within the government and the law, where it is the norm for spoken interaction to take place in Cantonese (Bolton 2002). Although I have found no evidence of Cantonese being subsumed or negatively affected by English (at least after decolonisation), one language which may be threatened, or for which the policy on English may have serious consequences, is
Though there are different explanations for this prediction, the most broadly established one is based on the supposition that performance leads to valued outcomes that make workers satisfied (Judge et al, 2000). In detail, different performances determine different rewards, as suggested by Steers and Porter (1979), which in turn produce "variation in employees' expressions of job satisfaction". In addition, according to expectancy-based theories of motivation, satisfaction follows from the rewards generated by performance. Expectancy theorists such as Lawler and Porter (1967) highlighted that performance led to satisfaction through intrinsic and extrinsic rewards. In favour of their opinion, Locke (1970) tended to consider satisfaction as a function of value attainment and goal-directed performance. With reference to empirical research, this proposition is supported by quite a few studies. The findings of Bowen and Siegel (1970) as well as Greene (1972) showed a relatively strong correlation between prior performance and subsequent satisfaction. These correlations were notably higher than the low correlations between prior satisfaction and subsequent performance, which suggested the lower possibility of satisfaction leading to performance. A general logic could be offered to understand this. A worker could be motivated and then feel happy to undertake his task. However, whether he could perform well does not depend only on his willingness but on other factors such as his ability, the existence of performance obstacles and the degree of supervision. On the other hand, when a worker performs superiorly, she could get better rewards extrinsically and even gain a sense of achievement intrinsically. Thus she would feel satisfied and happy by and large. However, Steers and Porter (1979) pointed out that the correlation coefficients in Bowen and Siegel's studies were not strong and the possibility of other variables could not be ruled out.
Accordingly, Steers and Porter (1979) argued that the statement "performance determined satisfaction via rewards" received some support but that
Although it is not certain whether these strategies guard perfectly against unthinking assimilation, we must admit that the Nietzschean idea of perspectivism is not only a crucial element of the content of his works, but is exhibited in a highly original performance. And despite his efforts, Nietzsche recognized that the possibility of a truly creative, truly perspectival thinker lay not with him, but in its purest form only in the future: Cox, Christoph. University of California Press, 1999, p. 110. For the present time, however, Nietzsche's remarkable performance is echoed in the words of Zarathustra: "'This is my way; where is yours?' Thus I answered those who asked me 'the way.' For
Nevertheless, these 'best practice' models do make a valuable contribution. Nickson (2002) suggests that although much of the literature puts emphasis on giving "overly optimistic prescriptions of the universality of best practice models", it helps encourage employers to adopt high commitment management through bundles of HR practices with a high-skills workforce to obtain high value. For these reasons, 'good practice' seems to be a more suitable term when discussing the HR practices currently used. What are the good practices that encourage commitment and culture change? Price (1994) defines 'good practice' as that 'which is required or encouraged by legal provisions in terms of policies, procedures and other arrangement' (pg 48). He claims the bare minimum responsibility for all employers is to comply with standards set by legislation. In people management, the key areas are recruitment and selection, training and development, reward systems and employee empowerment (McGunnigle, 2000). Since recruitment and selection is frequently identified as the dominant activity within these areas (Jameson, 1999; McGunnigle, 2000), this article focuses on this area. The adoption of the new HRM approach has led to a series of changes in recruitment and selection methods. First, to ensure the recruitment and selection process is fair, the Government has established several laws such as the Equal Pay Act 1970, the Sex Discrimination Act 1975 and the Disability Discrimination Act 1995 which put pressure on employers to select employees in a non-discriminating way in terms of sex, race and disability. Regarding selection tools, though the cost of an interview is low, it has been extensively criticised as being unreliable, invalid and subjective in employment selection (Torrington, 2002).
In view of this, there is strong evidence of an increasing use of psychological tests, which are considered to have better predictive ability and to select the right people, who conform with the commitment culture (Goldsmith & Nickson, 1997;
They have an interesting mix of serviced accommodation ranging from large corporate hotels to very high quality specialist hotels, guesthouses and B&Bs, and some self-catering properties and hostel accommodation. In Bath, occupancy levels have been consistently high over a number of years. Annual average room occupancy in serviced accommodation is 65%, although falls of approximately 5% have recently been experienced in the sector. Generally, the room occupancy levels in the rural areas are slightly lower than in the city centre (Bath & North East Somerset Council, 2004c). In Oxford, 75% of staying visitors were staying within Oxford, the same proportion as in 1990 (Oxford Online, 2004c). Overall, the occupancy levels of the two places are probably similar, given the similar situation of the accommodation sector in Oxford and Bath. In catering services, restaurants in Oxford and Bath are more or less the same. Since both destinations are famous cities in the UK, they attract thousands of visitors both locally and internationally. Thus, it is essential to provide a wide range of cuisines for tourists to choose from, including Chinese, Thai, Japanese, Indian, Italian, Indonesian, American, French and English. There are also various types of catering services such as bars, cafes, night clubs, public houses, take-aways, wine bars and tea rooms. All consider the needs of tourists, and providing pertinent food services is one of the most important factors in attracting people to visit. Guides and walks are
The sediment is poorly sorted with particles of all sizes represented. There is no orientation pattern in the sediment, and no obvious distribution pattern, apart from the organic material which tends to be concentrated in patches and lenses in the upper third of the slide (visible as areas with slightly darker brown colouring in PPL). There is a large (13mm) flint present near the bottom of the slide. The sample has significant organic material present (Table 2), including charred, water-logged and siliceous remains. The phytoliths could not be identified to any particular species or genus, but include long cells and short, probable epidermal cells. Also present are various species of diatom. The most common type is shown in Figure 6, and a preliminary identification made to the family Diploneidaceae, which is commonly found in marine environments (Round 1990). The diatoms were clustered near the water-logged organic matter. Several post-depositional features are apparent (Table 3). Small black framboids of opaque material were present in many areas of the slide (Figure 6), particularly associated with organic matter. These were identified under oblique incident light (OIL) as pyrite framboids through their characteristic reflective properties. Gypsum is also present in small amounts (Figure 7). Clay laminations can be observed lining voids and in thin bands throughout the slide. The laminations are weakly birefringent with striated fabric showing alignment of clay particles. Some of the channel voids also contain fine silt in circular structures. This material is weakly birefringent, from fine silt-sized quartz particles, and similar to the fine material of the main unit. Where these features occur in the same void, the silt appears to cut through the clay deposition (Fig. 8). The sediment is a poorly sorted sandy clay loam with very limited channel and chamber voids, many of which contain clay linings.
The presence of clay laminations within the massive sediment suggests that the structure was once more open. Charred and waterlogged organic material is present in significant amounts, particularly in the upper third of the slide forming the diffuse boundary to the reed peat above. These features suggest that incipient pedogenesis may have occurred in this sediment. Yendell (2004) suggests that there is a palaeosol preserved at Site B at Goldcliff East, but the compression and post-depositional alteration of the sediments appears to have destroyed any evidence of peds. The sand component of the sediment may result from the underlying parent material, which is Ipswichian beach sand/gravel and head deposits (Bell 2004). The finer material may have been brought in by fluvial processes, and mixed with the underlying sands and some organic material. This mechanism would explain the poor sorting of the sediment and the sub-rounded shape of the coarse grains. The plant remains, though not well-preserved, display", "label": 1 }, { "main_document": "US trade deficit, which counts to 6% of the GDP by now and is rising. It seems that Japanese automobile producers adjust their prices as Dollar falls in comparison to other currencies, so Japanese producers will keep their share in the US car market, using the \"price to market\" principle. As can be seen from the data above, the decline of Dollar against Euro, will make European goods less attractive in the US market, and hence have a negative effect on the European economy and possible positive effect on US trade deficit. However, some criticism occurs against the view, that declining Dollar would solve the problem altogether. Other aspects, such as interest rates, foreign investment to US markets, and the interests paid abroad as a result of this have to be considered as well. 
As the dollar declines, the European economy faces the threat of further decline, but it does not follow that the US trade deficit will fall.
However, 'in America the process went not nearly so far' (Garraty 1973:913) as the National Industrial Recovery Act (NIRA), though initially welcomed by the business community, was later restricted by this same group as it failed to stimulate the economy. In contrast, the industrial strategy in Germany was very successful in stimulating growth as the Nazis created a 'private capital market' (Overy 1996:42) to ensure funds flowed where they wanted and to extend their formal control over the banking and capital structure. In both countries corporatist policies and greater state intervention in the economy faced opposition across the board, and yet where Hitler saw the victory of his all-powerful one-party state, Roosevelt's government had to concede to the interests of the capitalists. Garraty (1973:914), in presenting the similarities between Nazi and New Deal social policies, argues 'the success or failure of American and German efforts to stimulate industrial recovery is... not central to my argument'; however, this is critical, as a major difference in the implementation of these reforms is that the Nazis succeeded where the Democrats failed. The Nazis and the New Dealers both reformed social insurance and social welfare; Germany had a long-standing tradition of centralized social reforms whereas in America new and unprecedented forms of relief and social security were introduced. Rimlinger (1987:45) argues few historians accept Piven and Cloward's thesis that welfare policies were primarily used as instruments of social control. However, he argues fascist Germany is the exception as the Nazi 'concept of social policy represented the full development of welfare policy as an instrument of social control' (Rimlinger 1987:47). In contrast Roosevelt's social policy
This ridicule can be recognised in the dramatically earnest language used by the characters, often in conjunction with paradoxes such as "If you are not too long, I will wait for you all my life". Some people said that his works were pieces of frothy nonsense and nothing more, which anyone could write. He mocks them when Jack attempts some literary criticism, and Algy retorts "You should leave that to people who haven't been to university. They do it so well in the daily papers!". Miss Prism is created in response to the hypocritical critics that called it an immoral play. She is a pious woman intent on morality and improvement, especially in literature. She frequently claims earnest yet ridiculous things, such as the study of German grammar being an "intellectual pleasure". However, she eagerly throws all that away without any consideration in order to be with Chasuble. By doing this Wilde shows how trivial he thinks the 'serious' critics are. All this exemplifies the way in which Oscar Wilde made full and creative use of his idea that "We should treat all the trivial things in life very seriously and all the serious things of life with sincere and studied triviality". It strikes me as ironic that Wilde's work is taken so seriously by English Literature students today, studiously slaving over his deliberate trivialising of important issues, while critics of the day trivialised the importance of his contribution to literature. Clearly he wanted his trivialising to be taken seriously, rather than his seriousness trivialised.
First, this is contentious in its own right. Some would argue that aborting a severely disabled foetus should be acceptable, or at the very least understandable. The 'Parental Interests Argument' contends that abortion on grounds of foetal disability is justified because 'the strain of caring for a disabled child may be substantially greater than that of caring for a non-disabled one': S. Sheldon and S. Wilkinson, 'Termination of Pregnancy for Reason of Foetal Disability: Are There Grounds for a Special Exception in Law?' (2001) 9 85. The authors explored two other arguments to defend abortion on grounds of foetal disability (the 'Foetal Interests Argument' and the 'Replacement Argument') but found them to fail. Others would argue that disability should not be perceived as a ground for abortion in the first place. The 'Disability Discrimination Objection' relies on the idea 'that disability is relevantly similar to other categories, such as race, gender and sexuality and that, therefore, selectively terminating disabled foetuses is like selectively terminating black or female foetuses (which would, in the context of a general ban on 'late' abortions, be unacceptably discriminatory)': S. Sheldon and S. Wilkinson (2001), ibid. The authors argued against this objection by distinguishing that, in addition to harms caused by social discrimination, disability involves harm caused directly by impairment. Thus the rationale for allowing abortion on grounds of foetal disability is still open to question. Secondly, and perhaps more importantly, the difficulty is in distinguishing which conditions of disability merit termination. The statute does little to clarify the meaning of 'seriousness', and the Royal College of Obstetricians and Gynaecologists' guidelines regarding this point do little to help. See R. Scott, 'Interpreting the Disability Ground of the Abortion Act' (2005a); E. Parens and A.
Asch, 'The Disability Rights Critique of Prenatal Genetic Testing: Reflections and Recommendations' in E. Parens and A. Asch (eds), L. Rev. 265. While it is relatively clear at the extremes (for example Turner's syndrome versus a susceptibility to moderate obesity), it is the huge range of mid-spectrum disabilities in between that is most contentious. D. Wertz, 'Drawing Lines: Notes for Policy-Makers' in E. Parens and A. Asch (eds), at 261, as cited in R. Scott (2003), ibid.. R. Scott (2003), Apart from the controversy that this test relies on the judgment of doctors S. Sheldon (Pluto 1997), Thirdly, in deciding whether it would be unjust to deny a woman's access to abortion, whether other circumstances, such as the family's ability to raise a disabled child, or the parents' perceptions, should be taken into consideration as well is still open to question. In other words, should parents' interests be taken into account? On one hand, there would be circumstances where it would be unjust to deny legal protection; on the other,", "label": 0 }, { "main_document": "comprises of rRNA, tRNA and protein Mitochondria and chloroplasts contain own ribosomes1 Both mitochondria and chloroplasts have prokaryotic type and size ribosomal structures and ribosomal RNA (rRNA). The typical size of cytoplasmic ribosome in eukaryotes is 80s but prokaryotic ribosomes are of a much smaller 70s, which is the same as that in mitochondria and chloroplasts. 
The unit for ribosomes, \"S\", is the Svedberg unit: the rate of sedimentation of a particle in the ultracentrifuge, expressed as an indirect measurement of size and molecular weight Antibiotic specificity of symbionts1 Several antibiotics, such as streptomycin, kill Bacteria by specifically interfering with transcription and translation at their ribosomes These antibiotics affect both mitochondria and chloroplasts in the same way by impairing their ribosomal functions to synthesise proteins, but they do not interfere with protein synthesis in the cytoplasm of eukaryotes Another example is rifampicin, an antibiotic which inhibits RNA polymerase in bacteria, as well as the RNA polymerase inside mitochondria, but has no effect on the RNA polymerase within the eukaryotic nucleus On the other hand, diphtheria toxin, which inhibits protein synthesis at eukaryotic ribosomes, has no effect on protein synthesis in bacteria or in mitochondria and chloroplasts. Molecular phylogeny1 Due to advanced technology, an increasing number of complete sequences of chloroplast genomes are available for phylogenetic systematic studies. The rRNA sequences of chloroplasts and mitochondria have revealed that they are more closely related to Bacteria than to the host rRNA sequences The DNA sequence analysis and the phylogenetic tree developed suggest that nuclear DNA contains genes that probably came from the chloroplast, supporting the lateral gene transfer ability of eukaryotes and prokaryotes, which was mentioned above Furthermore, it was observed that mitochondrial DNA is monophyletic and that some prokaryotic sequences fit into that group In addition, these sequences have also suggested that modern eukaryotes have evolved from an association of two organisms, and thus two genetic systems, which brings us to the next point.
Double membranes of mitochondria and chloroplasts Both are double-membrane-bound organelles The inner lipid bilayer would have been the bacterial cell's plasma membrane, and the outer lipid bilayer would have come from the cell that engulfed it The inner membrane lipids of mitochondria and chloroplasts are similar to their prokaryotic counterparts. The plasma membrane in prokaryotes is involved in energy metabolism, similar to the inner membrane of the mitochondrion, which has all the electron transport enzymes required to maintain a large H+ gradient across the membrane for ATP synthesis The plasma membrane in eukaryotic cells is able to control ion permeability, which is similar to the outer membrane of mitochondria, which is used to transport the ATP produced out of the mitochondria into the cytoplasm As the nucleus is also a double-membrane organelle, the This theory was not supported until 1994 when Gupta The amoeba In 1987, Professor Jeon noticed that his collections of amoeba were developing a large number of dots. These large numbers of dots turned out to be bacteria, which were killing off most of the amoeba. However, Jeon noted the least sick ones", "label": 1 }, { "main_document": "problems which I encountered. Ackroyd, P and Dudgeon, P. Dickens' London: An Imaginative Vision. Headline. 1989. p7. Saunders, E. The Christian Science Monitor (no date). Retrieved December 3, 2005, from Ackroyd, P and Dudgeon, P. Dickens' London: An Imaginative Vision. Headline. 1989. p7. My first work was to find some materials which could help me in the 'modernizing' process of the action and which would give me a better idea of today's image of London, or of the city in general. In order to stick to the spirit of The study of their daily lives, activities and their language could be helpful tools in the The first two sources turned out to be useful for the composition, but only to a certain extent. To start, I watched Ken Loach's film, which presents the lives of today's teenagers in poor suburbs.
Liam, the young hero, has many problems facing him every day, including small-time crime such as stealing cars and vandalism. The story relates his descent into the criminal and drug underworld. As far as the rewriting was concerned, it could re-use some aspects of Ken Loach's film concerning teenagers' daily lives, such as eating fast food or smoking, but also their physical appearance - wearing caps and trainers, for example - and their language - the use of slangy if not rude terms. I then turned to literature and to Irvine Welsh's The novel is similar to Ken Loach's film in many respects. Similarly, it presents problems of drugs and antisocial misdeeds. Nevertheless, both the film and the novel were inadequate for the main purpose of the rewriting since their action takes place in Scotland, and the idea of the composition was to work on London. Moreover, the characters speak in Scottish dialect and their language and vocabulary are probably very different from those of Londoners. However, both sources eventually became key elements in the project, since they brought up the idea of working on language and accent. This aspect will be explored later in this commentary. Materials which could be considered helpful had to be set in London. Hence, I came to Zadie Smith's Smith deliberately rewrites Dickensian London from the perspective of immigrant communities; however, her novel turned out to be poor in descriptions of London, and the focus is more on certain groups of people living in the capital. The novel particularly focuses on Indian and Caribbean immigrants. Zadie Smith makes her character Clara speak English with a Caribbean accent. This linguistic device brings a lot to the novel, which sounds more exotic but also more realistic. I thus decided to integrate the multicultural landscape of London in my rewriting, since it is indeed a genuinely contemporary aspect. However, Zadie Smith's novel turned out to be not very helpful in terms of external descriptions of the city and of its atmosphere.
In that respect, Sam Selvon's Sam Selvon is himself part of the great Caribbean immigration to Britain after World War Two. His novel deals with the Black British experience, telling the story of a group of Caribbean immigrants and their struggles in postwar London. Both the characters and the narrator's", "label": 0 }, { "main_document": "Carr argues that it is this very desire to remain all-powerful that induces the hegemonic power to \"self-sacrifice\"(Carr 1946, p. 168) in order to win the consent of those not benefiting as much from the existing power structure; for, wielding naked power could provoke rebellion in the long run. This is how peace is maintained in the international system. Thus, Carr makes it clear that in the realist paradigm, \"emancipation\" of the powerless cannot be the end but the means to the end of maintaining the existing power structure. This is precisely what Joseph Stiglitz implies in his book The book basically exposes how even 'impartial' international organisations, like the International Monetary Fund, the World Bank and the United Nations impose policies on powerless nations, ostensibly for the latter's emancipation. However, in actuality, these policies benefit powerful countries with veto power in these organisations, more than they benefit the needy countries they are supposed to target (Stiglitz 2002, p. xiv-xv); thereby proving that far from replacing \"power and order\" emancipation itself gets reduced to the sheer behind which powerful nations further their own ends. I therefore rest my case, vis- Emerging from the above-mentioned utopian realist proposition is Booth's second argument, that individuals rather than states should be the \"primary referents for a satisfactory theory of security on a world scale\" (Booth 1991, p. 540). He argues that states should only be the means and individuals the end. 
According to him this would translate into greater stability in the international system, than when states are treated as ends in themselves; for, power struggles would then reduce. Though I agree with Booth that states must only be the means, and the welfare of its people the end, I do not see how this shift can ensure greater stability in the international system. For all states are not at par. While the basic needs of individuals are almost the same across the globe, the providing states do not have the same amount of resources to meet these similar needs. Thus Booth's utopian realist vision of a state-system that is not quite a \"war-system\" (Booth 1991,p. 545), might not hold true even if the focus is shifted to individuals; for the struggle for power (resource power here) shall continue and war shall remain an imminent danger. Ken Booth's concept of 'utopian realism' developed as a reaction to the inability of existing theories and practices, of international politics, to question the existing status quo. He argues that instead of catalysing the reorganisation of human society, in a way that would be more just to all, theories of international politics simply explain the existing organisation of the international system. The theory of realism stands first in Booth's line of critique. Thus, in contrast to realism, 'utopian realism' offers an alternative model for the international order in the form of a community-in-anarchy (Booth 1991, p. 540). This community-in-anarchy aims at developing positive relations between states, based not only on \"mutual self-interest\" but also on \"moral obligation\" (Booth 1991, p.540). By making \"moral obligations\" one of the legs on", "label": 0 }, { "main_document": "determines the properties resulting from the hardening and quenching process. For full hardening to be obtained the actual cooling rate of the section must NOT exceed the critical cooling rate for that steel. 
\"For any steel analysis and quenching medium there is a section size, above which the work-piece will not satisfactorily through-harden. This is known as the limiting ruling section and is the main design parameter that needs to be considered, in combination with the geometry and property requirements of the work-piece, when specifying a hardening and tempering treatment. As quench severity increases, as it does if air is replaced by oil, and oil is replaced by water, the limiting ruling section increases for a particular steel composition. However, the use of more severe quenching is limited in turn by the increased risk of distortion or cracking during quenching, due to the higher thermal stresses induced in the work-piece\" Therefore, when specifying the material to be used for a large component, it is necessary to ensure that it is possible to attain the specified hardness throughout the component. In the case of the two materials tested in this report, the ruling section for 070M55 would be lower than that for 817M40 due to its inferior hardenability. The CCR for 070M55 is higher, so this steel must be cooled at a higher rate to achieve full hardening. On thick sections this cooling rate may not be fast enough to achieve full hardening, and faster cooling may induce greater stresses in the component. Therefore switching to the 817M40 steel allows a lower cooling rate, due to its better hardenability characteristics, whilst achieving the required hardness. Where possible the 070M55 would be used due to its lower cost, until its ruling section determines it unsuitable for the application. Tempering is a subcritical heat treatment process used to improve the toughness of quench-hardened steels. Quench-hardened steel is of little use because it is very brittle. Following quenching the steel is in its hardest but most brittle condition; to obtain the optimum balance of properties, additional heat treatment is required.
Tempering consists of re-heating the steel to a lower temperature and holding it there for a specific time. The time and temperature are dependent on the composition of the steel and the required properties. The use of high temperatures for tempering increases the ductility and impact strength of the steel but reduces its tensile strength and hardness. For example, steel tools are tempered at a temperature that reduces the quench-hardened hardness by 2 to 4 points on the Rockwell C hardness scale, where such tempering has only a slight effect on the tool's strength, whereas springs are tempered so that their hardness reduces to around the mid-40s on the Rockwell C scale for maximum toughness. To give an idea of the values, hardening of engineering steels (carbon content 0.3-0.55%) ranges between 800 - 900 The speed of quenching of the steel bar from its unstable austenitic state determines the resulting microstructure when the steel is in its equilibrium", "label": 1 }, { "main_document": "Given that state/national functions are transferred to local levels of government, there is a need for careful planning and adequate organization. There are examples of decentralisation schemes that were not well planned and, as a consequence of the bad planning and implementation, failed to meet their objectives (Miller, 2002). A good example of ineffective implementation of decentralisation is Indonesia. In particular, in the case of Indonesia, both central and local governments did not have the experience and knowledge required for the management, planning and implementation of decentralisation.
There was also a lack of organizational capacity, in that governments were not efficient in allocating responsibilities and authorities among central and regional governments ( Despite the fact that decentralisation reduces anti-social behaviour of citizens and conflicts between governors and governed, it has the potential to cause the emergence of conflicts between national and local governments. There are two reasons for this. Firstly, decentralisation in the form of participatory governance ensures that the needs and interests of local constituents are met. However, local interests may not necessarily agree with national interests, and conflicts may emerge between local and national levels of government. These differences of course mean that not only national but also local interests are considered, which is an advantage of decentralisation (Miller, 2002). Secondly, even though it has been said that decentralisation ensures equitable resource sharing between the centre and the regions, what happens in reality is completely different. In particular, central governments tend to capture the bulk of power and resources, leaving local governments with inadequate resources and thus incapable in their role ( To sum up, decentralisation in the form of devolution of power, responsibilities and authorities from the centre to sub-national levels of government has positive aspects as well as possible risks and negative consequences. Even though decentralisation has been connected with the reduction of the centre, it is necessary, for any attempt at decentralisation to succeed, to maintain a strong centre.
This is particularly important for the successful planning and implementation of decentralisation schemes as well as for the establishment of coherence between local and national levels of government (Miller, 2002).", "label": 0 }, { "main_document": "price comparison it can be noted that Aggregator (Cosmos) offers the best price to the customer and that the principals' price is the highest. However, one needs to take into account that the three travel packages were not exactly identical, as Cosmos offered coach transportation to the hotel, whereas with Expedia and the principals a car was hired. When comparing the three channels it can be concluded that booking holidays through intermediaries is more convenient for the customer than booking through principals. Cosmos' price is the lowest; however, with Expedia the product can be designed and delivered according to the specific individual's needs. From the customer's perspective, booking through principals is not very convenient as it is more expensive as well as time-consuming. Customers' overall holiday booking experience was the best when booking through Expedia. According to Chaffey (2002), a good page design should allow the user to change the size of the text. As some of the fonts used on the Cosmos Web site are relatively small, Cosmos should provide the customer with the option of changing them into bigger ones. The very bright colours of the Web site are not particularly attractive from the customer's perspective. Cosmos should consider this if targeting older customers. The structure of the Web site could be improved, the number of frames and tables reduced and the line widths increased. There should be no bad links on the Web site. Cosmos should pay particular attention to navigation facilities, as navigating the Web site can be rather difficult for the customer. On some of the Web pages there is no 'Home' feature available to enable the customer to return to the Cosmos Home page, which can be rather confusing.
Online information search capabilities are important to provide users with access to the information they require easily and quickly (Adam, 1999). Cosmos should consider offering a search engine on its Web site. In terms of value-adding features, some additional ones could be introduced, such as 'Maps' or 'Fare watcher'. The Web site should also be improved in terms of personality and interactivity. Language options should be offered, and the visitor's basic information, together with his/her preferences, stored for future use. Easily accessible customer support should also be available. A search engine as well as 'What's new' functionality could be offered to allow customers easy and fast access to the required information. In terms of unique features, a 'Fare watcher' feature could be introduced. There should be an option for other payment methods, such as, for example, sending a cheque by post, if a customer does not want to pay online. Expedia offers additional services such as attractions and airport parking. However, attractions for Iceland can only be booked until the end of 2006 (less than a month ahead), which is not very convenient from the customer perspective. Expedia should consider this and improve their offer by making bookings available up to six months ahead, for example. Another recommendation for Expedia would be to introduce intelligent agents (avatars) as a part of their Web site service. Avatars can be described as a piece of software which", "label": 0 }, { "main_document": "higher in the portfolio. This indicates that the linear relationship is stronger and so the accuracy of predictions is greater. This means lower risk, compared to investment in one single company, when making predictions based on market fluctuations, and is why investors develop diversified portfolios. This makes intuitive sense, because a portfolio including every single stock in the market would be the same as investing in the market as a whole, and so would move linearly with it.
However, the beta in this case is not lowered by investing in a mix of all three, indicating that risk due to market sensitivity is not necessarily lowered by developing a diversified portfolio. TO: Harold Gagnon FROM: Brenda Hagerty DATE: 12 July 2006 SUBJECT: Appraisal of the use of Beta Coefficients in Risk Evaluation, including suggestions, based on three stock options: Hilton, Texas Instruments, and Giant Foods. Analysis of Beta coefficients provides a way of evaluating the risk attached to a stock or portfolio of stocks by the strength of their sensitivity to market fluctuations. The greater the beta coefficient the greater the sensitivity is to market fluctuations. Before we analyse the beta coefficients, however, we need to ensure that the relationship between the company's stock price and the market price is sufficiently linear by performing a regression analysis. After satisfying ourselves that the relationship is sufficiently linear, we then know that the results of the regression analysis are useful. One of the most useful results is the beta coefficient, which is an indicator of market sensitivity of the company's stock price. There are other risks involved that are not purely related to the beta coefficient. It is important to also consider the non-market risk, and this is another aspect of the regression analysis. Analysing the non-market risk, we see that it is greater for Giant Foods than for the other two stocks. In the appraisal of Beta coefficients in risk evaluation we therefore conclude that it is a useful tool but that other risk factors need to be taken into account. The remainder of this report analyses the beta coefficients of Hilton, TI, and Giant Foods, and, in conjunction with analysis of other risk factors, a suggestion is made as to the best single investment option. 
Taking market fluctuations on a per month basis over the past 60 months for each of the three companies as compared to the S&P 500, the following are the results of the Beta analysis: Looking at the beta coefficients, it is clear that Giant Foods is least sensitive to changes in the market. If it is a stock that performs well, then it should perform relatively well in a down as well as an up market, when compared to the other two stocks. Choosing Giant would lower the beta coefficient of the current portfolio, which is currently at 1.4 and which, 10 years from retirement, is regarded as high. If market fluctuation is the most important risk factor then the suggestion would be to choose Giant Foods as the stock to invest in.", "label": 1 }, { "main_document": "It is possible to suggest that the most clearly obvious way into both of these poems is on a graphological level; in both there is a distinct lack of punctuation. \"Pan Recipe\" is structured as one sentence, although this is complicated by both the many clauses, and by the separation of the verse into paired lines. \"New World A-Comin'\" is divided into three verses, each of which is also structure as one sentence. In both texts this is indicated by capitalisation at the beginning of the verse and a full-stop at the end. This is one of the techniques that give both these texts cohesion. Both texts use lexical deviation as a method of foregrounding. \"New World A-Comin'\" is riddled with examples of words which could have several meanings and functions. This is often caused by graphological deviation; i.e. where the line ends in relation to the words and phrases, and the way syntax is used. The first three lines, for example, are particularly ambiguous in meaning. The line break between \"this\" (line 1) and \"leader\" (line 2) makes us question the functions of both words; it is unclear as to whether \"this\" should be read as a demonstrative article or demonstrative noun. 
If we take the former function, \"leader\" would be read as the pre-modified noun object of the phrase - a \"Helpless.../leader\"; if we take the latter, \"leader\" becomes adjectival and, if read as being hyphenated with \"less\" into a compound adjective (line 3), becomes part of a second simile. It is likely to be the latter if we look at the paralleled structures between these lines and the repetition of \"-less\" as a suffix. However, the syntax between lines two and three also confuses the word functions of both \"leader\" (line 2) and \"less\" (line 3). This mode of foregrounding through deviation recurs frequently throughout the poem, particularly at the beginning of the second verse. In Agard's poem there is also foregrounding in the ambiguity of the meanings of words, which is a consequence of semantic deviation. The Caribbean poem is an extended metaphor in which Agard uses the vehicle of the steel pan to convey a sense of new life born out of the past. This extended figurative structure is further complicated by the use of the term \"Recipe\", and the semantic language and clause structures associated with cooking. Within the body of the poem itself are also many other examples of metaphor, and also puns. In line one, for example, the word \"rape\" could be understood in two ways, depending on what function we assign to it. If we take the verb function of \"rape\", the line would read as a basic phrase structure meaning to 'despoil' \"a people\". However, we could also read this in a figurative, metaphoric way by taking the noun definition of the word, where \"rape\" is a rich yellow plant, which would then be functioning adjectivally in the metaphoric structure to describe the people of the past, from which the present has been formed. Both poems also use semantic deviation in the application", "label": 1 }, { "main_document": "the required data sets for evaluation.
Although the level of analysis carried out was simple, the potential of the produced maps in terms of application in reality and their ability to represent the necessary information are adequate. The maps clearly illustrate the potential sites for new forest plantation which meet the objectives. The maps have effectively selected the most appropriate areas and issues of interest for initiating the evaluation process and have shown the areas for prioritisation. Comparison of the two maps (woodland and agricultural land) would be useful to find out the geographical relationship between those areas under different land cover, indicating the feasibility of, and difficulty in, creating the buffer zones in a particular area. There are several aspects which limit the accuracy of the data presented in this case in fulfilling the information required for the potential application. 1. Soil properties Land use history over time may have changed soil properties, making them unsuitable for the plantation of trees. Such site-specific analysis of the feasibility of growing trees from a biological point of view is needed for further evaluation. 2. Conservation In order to enhance conservation values, connectivity to existing forests and other important conservation sites should be further examined. There was a lack of detailed data on ecological features, such as bird populations and wildlife distribution around the existing forest and isolated habitat patches in the vicinity of the proposed buffer zones. This will help select which areas along the potential buffer zones should be prioritised for future planning and converted from current land uses to forests if necessary. 3. Social perspectives Subjective information, such as the preferences of local residents on the size, location, structure and visual landscape of forest and buffer zones, and their recognition of the importance of the buffer zones, should be taken into account in the real-world situation.
As mentioned, the significance of GIS approach is to enable map makers to integrate various information on factors affecting and/or likely to affect land uses in the form of maps (Makczewski, 2004, Zek and Keles, 2005). Data gathering on social aspects should be carried out in reality to maximise the potential use of GIS. 4. Other environmental factors Elevation, climatic conditions were assumed to have little constraints in the area of interest based on the fact that there are woodland patches in broader spatial scale around the existing forest of the selected region, indicating that much of surrounding land is likely to be relatively homogeneous in relation to the potential of land properties for forest growth. However, topography would be an important attribute since visual landscape is greatly affected by changes in elevation and on the distribution of forest cover (Phua and Minowa, 2005, Zek and Keles, 2005). Such factor may be considered to contribute to aesthetic value for which the role of community forest should be targeted. In conclusion, in spite of the limited amount of datasets used to give solution to the objectives; to identify the potential sites for extension of forests along the motor way in the close proximity to the existing community forest, the result provides a comprehensive map to", "label": 0 }, { "main_document": "The advent of the Solidarity strikes and protests halfway 1980 struck a serious blow at the Polish communist regime. Whilst it provided a powerful response of workers to the repressive character and 'determination [of the Polish regime] to maintain [the political and economic] systems, whatever their human costs and lack of legitimacy', Where from came this powerful articulation of demands, and of 'the most powerful, sophisticated and advanced working-class movement yet seen, certainly in the 'communist' sphere and perhaps anywhere in the world'? 
Economic and political developments throughout the 1970s united with a blend of national-historical and wider Eastern European economic grievances to prepare the ferment for 'a new generation of protesting workers with a clearly different way of articulating their grievances [as] a strictly Polish phenomenon'. Walter D. Connor, 'Social Change and Stability in Eastern Europe' in Colin Barker, Walter D. Connor, 'Social Change and Stability in Eastern Europe' in Jadwiga Staniszkis, The prehistory of the Solidarity movement was, most importantly, sparked by an increasing crystallisation of a \"class consciousness\" within the ranks of the Polish work force, many of whom were becoming convinced 'that they [could only] improve their lot collectively'. While official policy claimed to pursue a policy of egalitarianism, the reality behind these claims was only limited. The producing classes, those of the workers and peasants, found themselves on the same position in the material hierarchy as before, while a decrease in real standards of living were initially concealed only by the relative success of social mobility. Walter D. Connor, 'Social Change and Stability in Eastern Europe' in Georg Konr Walter D. Connor, 'Social Change and Stability in Eastern Europe' in R.J. Crampton, Konr Similar to the England of the early nineteenth century, the creation of a new Eastern European working class indeed took place within the rather short timeframe of one or two generations. Over the decades after the Second World War, an average of nearly fifty percent of the younger peasantry came 'to form the new working class, [and] to create the social drama of a new life, a new world'. Connor, 'Social Change and Stability in Eastern Europe', p. 21. Ibid, p. 28. 
Garton Ash, Timothy Garton Ash's assertion that 'December 1970 [was] the single most important date in the pre-history of Solidarity', Its reaction to the government's implementation of food price increases revealed signs of a growing unity and singular purpose of action. When the government resolved to give up its policy of consistent non-unemployment 'aimed at forcing greater productivity through the use of threat of unemployment as an incentive for better and more efficient work', After strikes and violent confrontations throughout January 1971, the government withdrew its economic reform plans, while Gomu This, and the fact that many protests 'involved a large group which was generally representative of the worker population as a whole', Ibid, p. 12. Jan B. de Weydenthal, 'Poland: Workers and Politics' in Jan F. Triska and Charles Gati (eds.), Crampton, Weydenthal, 'Poland: Workers and Politics', p. 194. Barker, Weydenthal, 'Poland: Workers and Politics', p. 195. Yet, what made for", "label": 0 }, { "main_document": "a commodity over which tourists have rights is not simply perverse, it is a violation of the people's cultural rights\" (Greenwood, 1978, in Smith, 1989: The ethical dimension of commercially using indigenous images is rarely fully considered. The value of Maori imagery to tourism has long been recognised so has often been exploited. In 1987 the Maori Tourism Task Force highlighted the issue: \"It has been of deep concern to the Maori that the Maori image has been used as a marketing tool in the promotion of the tourist industry for over a hundred years [...] It is clear that the Maori image has commercial value. 
The expressed desire of the Maori people is that they should control their image..." (Maori Tourism Task Force, 1986). However, nothing changed, and in 1994 the Aotearoa Maori Tourism Federation identified the "misuse of Taonga. This involves the misuse of names and language, mispronunciation, inappropriate use of images on souvenirs and photographs, and the incorrect and inappropriate communication of histories and tribal lore" (cited by Ryan, 1997). Taonga: property, treasure; used more widely to mean culture as well as tangible items (Ryan, 1997). There is still significant stereotyping, misuse of images and symbols, exploitation, misrepresentation, commoditisation and/or bastardisation of Maori culture. Modern indigenous culture is often neglected in favour of meeting overseas tourist expectations, so a rose-tinted view of a more primitive, bygone age is presented (see for example right: Discover the World, 2004). It is important to recognise that any culture is "dynamic and emergent" (Kroshus Medina, 2003). The misrepresentation of indigenous culture and misuse of symbols persists; furthermore, a lack of legislation leads to problems in securing intellectual property rights (OTSP & TPK, 2001). While for the Maori it is an important, respected and valued part of life, in the tourism industry Maori heritage is largely viewed as an asset: a tool for attracting visitors to New Zealand. The use of indigenous people and culture as a marketing tool is by no means restricted to New Zealand.
There are abundant examples in a wide variety of brochures and tourist board promotional materials (Audley Travel, 2005a; Audley Travel, 2005b; Discover The World, 2004; Last Frontiers, 2005; Tourism Australia, 2005; Wanderlust/Journey Latin America, 2005) as well as in company logos (see right: Air New Zealand aircraft livery, in Audley Travel, 2005a). Easily identifiable images which tourists immediately relate to a particular country or destination are obviously immensely valuable for tourism promotion. There is little problem when those images are buildings (e.g. Sydney Opera House), monuments (e.g. Eiffel Tower) or other features of the built environment (e.g. Golden Gate Bridge), but when those images are of deep social, cultural or spiritual significance to a community, then an assessment must be made regarding their suitability for commoditisation. Maori culture is undeniably valuable to New Zealand tourism, and there are notable successes of Maori communities involved in the industry. The New Zealand Tourist Board is right to emphasise the importance of Maori heritage for New Zealand tourism and
Although English and Spanish colonies each had their own distinctive 'colonial identity' (Alan Taylor; N. Canny and A. Pagden (eds.)), both countries' settlements attracted migrants by offering a geographical extension of the search for employment and stability, when all opportunities in their motherland had been exhausted. On the other hand, many Spaniards were 'forced to leave profitable positions and become soldiers'. This is comparable to English convicts who were sent to North America, often as servants (Bernard Bailyn; Theodore Corbett, 'Migration to a Spanish Imperial Frontier in the Seventeenth and Eighteenth Centuries: St. Augustine', 3 (August 1974), p. 427). The earliest successful Spanish settlement in North America was in Florida's capital, St. Augustine, which was established under Pedro Menendez de Aviles in 1565. Although trading outposts had already been established earlier in the century, for example in Colombia, St. Augustine was destined to be a military outpost from which the Spanish could defend their trade routes and treasure from privateers sanctioned by the English government (until the 1604 Peace Treaty). The majority of the small Spanish population in Florida were soldiers; thus, there were few families, and those that did exist were often hybrid as a result of intermarriages. English settlements were not so complex, as their 'Northern American societies saw less racial mixture'. The complex stratification of the Spanish is further enhanced by the variations between Hispanic immigrants in St. Augustine. As a result, St. Augustine has accurately been illustrated as 'a melting pot of ethnic groups, loosely bound together by... the Spanish Crown'. Corbett significantly observes that the source of migration to St.
Augustine, during the period 1671 to 1691, was actually Spanish Mexico, highlighting internal migration from the colonies of New Spain. English settlements also contained people of varying ancestries as a result of internal movements, particularly in the middle colonies of New York and Pennsylvania (Anthony McFarlane; Corbett, 'Migration to a Spanish Imperial Frontier', pp. 414-430, p. 414). The earliest English colonists 'preferred to search for precious metals, in emulation of the Spanish conquistadores'. However, in direct contrast to early Spanish incentives for North American colonies as military bases, the result of the three English missions
The responses suggest that technical restrictions coincide with pragmatic understanding to motivate lexical and syntactic abbreviation in message production. All messages sent by the respondents were to friends or family, indicating a shared knowledge of terms, pragmatic understanding, and a readiness for a direct approach, allowing speed and space restrictions to be overcome. This forgoes the social convention between strangers, which requires a less direct, more formal approach (Segerstad, 2002). I conclude that there are generational differences in attitudes to text language, but these do not extend to interpretation. Indications are that acceptance of language change begins with the young, as they quickly adapt to changing contexts. The impact on education seems to lie in the official acceptance of text language, giving it some status as a mode of communication with a role to play within existing and other emerging modes. We have seen how technology affects the language behaviour of people already proficient in its use, and how the persuasive power of advertising is enhanced through the use of multimodal discourse. These tasks have emphasised the role of language in creating social realities. Each methodology has allowed us to see the influence of context and how our personal interpretations of context are reflected in our language use.
Therefore the RS24 was developed from the RS23 by changing some configurations, and its design is based on the Supertec engines. The RS24 has a 72-degree V10 configuration. As the RS24 engine is taller, its centre of gravity is 20 mm higher than that of the RS23; according to the Renault F1 team, however, this has been compensated for by making changes to the engine's casing, so the centre of gravity of the R24 is not much higher than that of the R23. The vibration problems were also addressed in the development of the RS24 engine. In February 2005, Renault launched the new RS25 engine, developed from the RS24; the RS25 is basically the same as the RS24, but its centre of gravity has been made lower. After the 72-degree V10 engines (RS24 and RS25), Renault was planning to develop a new 90-degree engine. Future development of Renault engines will focus on weight reduction as well as improving reliability. Shortly after, a more effective direct fuel injection system was introduced by Mercedes. Renault then introduced turbocharged engines into F1, but engine cooling systems were not yet well developed, and many turbocharged cars retired due to overheating. These increases in horsepower also reduced the handling of the cars. The ground effect of the cars was improved, which increased their cornering speeds; this made it even harder to control the cars, and during that period many F1 driver deaths were caused by severe accidents during races. The first 700 hp 3500 cc induction engine was built, which was much better than the previous racing engines, and a 'baffle' was introduced to enhance the cooling of the engines. F1 limited the capacity of the engines to 3000 cc in the mid-90s. With capacities limited, the engines could only be developed further by improving mass centralization and increasing horsepower.
Horsepower can be increased by increasing the piston bore and decreasing the size of the cylinders, as these changes improve the volumetric efficiency of the engine. Finding the optimum cylinder configuration can improve the mass centralization (stability) of the car as well as its overall performance. Motorcycle racing is a variety of sports involving motorcycles competing with each other; common examples include the Isle of Man TT (road racing), MotoGP (circuit racing) and Daytona (endurance racing). Motorcycle engines are gasoline engines and can be either two-stroke or four-stroke. They use either fuel injection or carburetors to feed the fuel-air mixture into the combustion chamber.
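The trade-off between bore, stroke and cylinder count under a fixed capacity limit can be sketched numerically. The dimensions below are illustrative round numbers, not actual Renault or motorcycle figures:

```python
import math

def displacement_cc(bore_mm: float, stroke_mm: float, cylinders: int) -> float:
    """Total swept volume in cc: (pi/4) * bore^2 * stroke per cylinder."""
    bore_cm = bore_mm / 10.0
    stroke_cm = stroke_mm / 10.0
    return math.pi / 4.0 * bore_cm**2 * stroke_cm * cylinders

# A hypothetical 3000 cc V10: with total capacity capped, choosing a
# wider bore forces a shorter stroke (an "oversquare" layout), which
# allows larger valves and higher engine speeds.
v10 = displacement_cc(bore_mm=96.0, stroke_mm=41.4, cylinders=10)
print(round(v10))  # close to the 3000 cc limit
```

Under a capacity cap, any increase in bore must be paid for with a proportional (squared) reduction in stroke or cylinder size, which is why the regulations quoted above pushed development toward bore and cylinder-layout optimisation.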
Additionally, designating sentences that possess non-referring grammatical subjects as meaningless would render such innocuous statements as 'every unicorn has a horn' without meaning, despite their being understandable to anyone who possesses the relevant ideas, even though the subject matter may not exist. Alfred Tarski, a Polish logician, proposed in a 1933 paper a method by which true sentences could be defined that avoided the use of other semantic terms. His formal conception of a true statement was one that satisfied the criterion of material adequacy: X is true if and only if p, where 'X' is replaced by the structural-descriptive name (or quotation name) of a sentence and 'p' is replaced by the sentence itself. However, under this treatment, named Convention T, a paradox can be obtained upon substitution of a sentence of the form 'U is not true'. The Liar and Quine's paradox are both eligible candidates for this situation (under Convention T a legitimate declarative sentence is either true or else false). This is because both expressions make use of natural language's ability to talk about its own semantics. In order to prevent both paradoxes from operating, Tarski set forth his theory of the hierarchy of languages. Under this proposal, the truth of an object language, O, can only be defined in the metalanguage, M. Thus, the Liar can only come about in the innocuous form 'This sentence is false-in-O', which must itself be a sentence of M, and hence cannot be true-in-O, and is false instead of paradoxical. This renders the Liar sentence unable to produce a contradiction, because its truth or falsity is beyond its ability to state. For Tarski, truth cannot be defined for semantically closed languages, where the global truth predicate 'is a true sentence' can be characterised. Instead, truth in a particular language must appeal to another language a step higher in the hierarchy. This is Tarski's 'Undefinability Theorem'.
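Quine's quotation device, discussed above as a candidate for Convention T, can be made concrete in a few lines of code (an illustrative sketch, not part of Tarski's formal apparatus): applying the predicate to its own quotation rebuilds exactly the sentence doing the applying, so the sentence refers to itself without any demonstrative.

```python
# Quine's paradox avoids demonstratives like "this" by letting a
# predicate apply to its own quotation.
predicate = "yields falsehood when preceded by its quotation"

# The full sentence: the predicate preceded by its own quotation.
sentence = '"{0}" {0}'.format(predicate)

# What the sentence describes is the string formed by preceding the
# predicate with its quotation -- and that string is the sentence itself.
described_string = '"' + predicate + '" ' + predicate
assert sentence == described_string

print(sentence)
```

The assertion is the whole point: the string the sentence talks about is character-for-character the sentence itself, achieving self-reference purely through quotation.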
Under this treatment, both the Liar and Quine's paradox are powerless to achieve their desired perplexing ends. However, by forcing Tarski to make this claim, the", "label": 1 }, { "main_document": "frescoes\" (Fellini cited in Chandler 1995:174). It is an interesting way to assess history and enables people to see that we can not know everything of the past but by piecing sections together, we can begin to create a good basis upon which to set out our arguments. The spectacle of the film is very different to that of Ben Hur, Spartacus and Cleopatra. Traditionally films based on antiquity have tried to achieve the greatest spectacle through lavish costumes, extraordinary set designs and a large number of extras. Satyricon has none of these elements and the focus is not on the spectacle of the buildings or props, but rather that of the people of the time. The crazy, loud and mad people are the centre of attention rather then the extravagant elements of the society. Betsy Langman (cited in Baxter 1993:249) stated that Nothing was meant to appear realistic or accurate, it was meant to make the audience feel that they had been transported to a new world that they could not understand and knew little about. It also wanted to show elements of antiquity that had been concealed by previous films. Ben Hur, Spartacus and Cleopatra wanted to encapsulate the beauty and splendour of Rome. In all the films we see glorious buildings and are given a glimpse of what the Romans had been able to achieve. Satyricon shows us another part of Rome, the darker elements of the society. Encolpio wears a toga that is too short for him, perhaps revealing his poverty, the brothel is dark, gloomy and grey and is seen falling down upon its poor inhabitants. Moravia (1978:166) notes how there is This is a new Rome that we are being transported to, one that refuses to delight in just what the rich could afford and enjoy. 
It shows not just the slaves but what the poorest members of the society had to experience and live through. Fellini reveals that the Rome that has fascinated people for generations is not the true Rome, but the Rome lived in by the elites. Fellini also uses visual elements to put across the idea that every human has to fight for life. Snyder reveals how this is especially shown in the scene with the Widow. She is captured crying over her dead husband, dressed in dark clothes. A soldier comes in and she chooses him, even agreeing to give up the body of her husband so that the soldier will not be killed for allowing a prisoner's body to escape. This is a key moment of the film, and clearly marks the need for people to embrace life over death. This is shown when the Widow leaves the corpse, as "her transition is marked by a process of pigmentation. The significance of this process lies in its relationship to the story of Encolpio" (Snyder 1978:168), as both characters have to make a choice, life being the right one, for it is coupled with colour and fulfilment. Encolpio, upon abandoning the old men eating
These slopes in turn depend on an economy's sensitivity of investment to both income and the interest rate, its marginal propensity to consume, its income-tax levels, and its sensitivity of money demand to both income and the interest rate. Mathematically and economically analyzing each of these components will hopefully yield insight into the use and effectiveness of fiscal policy in today's economies. Equilibrium in an economy's goods-market is expressed by the IS curve, which follows the equation Y = (C0 - C1T + I0 - bi + G)/(1 - C1), where b is the sensitivity of investment to the interest rate i. As G (government-spending) and T (taxes) are central to this equation, fiscal policy - changes in G or T - clearly affects an economy's goods-market equilibrium. Fiscal policy also impacts the money market. Money-market equilibrium (where real money demand equals real money supply) lies along the LM curve, which follows the equation i = (d1Y - M/P)/d2, where d1 and d2 are the sensitivities of money demand to income and the interest rate. When the IS and LM equations are solved simultaneously, they produce an equilibrium income level Y*. Both IS and LM therefore impact fiscal policy's effectiveness. Expansionary fiscal policy, involving increasing spending (G) or decreasing taxes (T), would for instance shift the IS curve to the right, increasing output. To return to equilibrium (where IS once again equals LM) the economy undergoes the following automatic responses: the increase in IS triggers excess demand for money. Demand for bonds therefore drops, decreasing bonds' price, which in turn causes the interest rate to rise. This rise increases returns to saving as well as the cost of investment - as a result, investment drops while savings rise, withdrawing money from the economy's circulation and thus reducing spending. Output therefore moves back down, settling the economy's new equilibrium output level below the level the fiscal boost alone would have produced. This drop in output is a result of the crowding-out effect, by which increased government-spending boosts private spending but countervailingly decreases private investment.
The larger crowding-out is, the less effective fiscal policy therefore becomes. Graphically, the extent of crowding-out varies with the IS and LM curves' slopes. Comparing a relatively flat LM curve with a steeper one: when LM is flat, the interest rate has to rise by less to choke off the EDM, so crowding-out is lower and fiscal policy more effective when the LM curve has a smaller slope. Comparing a set of shallow IS curves with a steeper set: if both sets shift by the same amount due to equal increases in government-spending (dG), the resulting EDM
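The crowding-out mechanism can be illustrated with a small numerical sketch. All parameter values, and the linear IS and LM forms used, are illustrative assumptions rather than figures from the essay: raising G lifts output, but the induced rise in the interest rate holds the fiscal multiplier below the simple Keynesian multiplier.

```python
def is_lm_equilibrium(G, C0=100.0, C1=0.6, T=100.0, I0=150.0,
                      b=1000.0, M_over_P=800.0, d1=1.0, d2=2000.0):
    """Solve a linear IS-LM system for output Y and interest rate i.

    IS: Y = C0 + C1*(Y - T) + I0 - b*i + G
    LM: i = (d1*Y - M_over_P) / d2
    """
    # Substitute LM into IS and solve the resulting linear equation for Y.
    Y = (C0 - C1 * T + I0 + b * M_over_P / d2 + G) / (1 - C1 + b * d1 / d2)
    i = (d1 * Y - M_over_P) / d2
    return Y, i

Y0, i0 = is_lm_equilibrium(G=200.0)   # baseline equilibrium
Y1, i1 = is_lm_equilibrium(G=250.0)   # after a fiscal expansion dG = 50

simple_multiplier = 1 / (1 - 0.6)      # 2.5 if investment ignored the interest rate
fiscal_multiplier = (Y1 - Y0) / 50.0   # actual dY/dG with crowding-out

print(round(Y0, 1), round(i0, 4))
print(round(fiscal_multiplier, 2), "<", simple_multiplier)
```

With these numbers the multiplier falls from 2.5 to about 1.1, because the interest rate rises with G; making d2 larger (a flatter LM) shrinks the b*d1/d2 term and pushes the multiplier back toward 2.5, matching the graphical argument above.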
How is this incorporated into the performance of the city, and what instructive vision of the twenty-first century metropolis does it offer? (Borden, I, et al (eds).) In the acclaimed 'Lights Out for the Territory', Iain Sinclair walked the 'territory' he had made his own, the streets and rivers of inner London. He uncovers a history of forgotten villages, suburban utopias and hellish asylums, now transformed into upmarket housing, all the while walking (and in the case of his collaborator, Chris Petit, driving) a disappearing landscape, as the countryside is engulfed by commerce. Throughout this essay, I will make connections between the two works. Although Sinclair only mentions it briefly, there are many parallels between the two narratives, especially concerning the similarities between the London portrayed by H.G. Wells and by Sinclair/Petit at the dawn of the twentieth and twenty-first centuries respectively. Moreover, Sinclair recommends 'the literature of the future written a hundred years ago' (Petit, Chris/Sinclair, Iain). From the beginning, we are shown the Millennium Dome, forever remembered as the great landmark that had no idea what to do with itself. We see the collapse of sixties tower blocks, themselves a solution to one of London's problems until the contemporary city outgrew them. Further, the idea that the city itself is an entity that performs is shown at the beginning of the film: as the tower blocks tumble in a cloud of dense smoke, we hear the cheering of onlookers. The demolition of one of London's mistakes becomes a performance. The identity of the city's mistakes is, I think, an important theme. The problems of the city are examined, framed in the ongoing concrete circle. Whilst in orbit, Sinclair and Petit explore the cracks, traces of subsidence in London's structure which are reflected in its
They pointed out that Germany remained a bank-centred financial system, as banks have direct ownership of shares and companies still finance themselves via bank credit. However, banks are no longer the only influential players in strategic performance and monitoring in corporations. Insurance companies play a significant role in the German economy in controlling market value. They used to manage their assets as banks did, but they have recently changed their investment policies, turning to increase their equity holdings (Jurgens et al. 2000). Large insurance groups like Allianz have separated their asset management divisions, and cross-shareholding activities have been discouraged (Ponssard, 2005). It could be argued that the changes in the incentives and behaviour of banks and insurance companies have somewhat shifted the German system of corporate governance towards a shareholder-value orientation. Since banks and insurance companies are becoming more impatient in playing their traditional role of providing industrial loans, a rising number of German corporations are seeking capital from the equity market. They undertake public share offerings, engage in mergers and acquisitions, and expand stock-based compensation (O'Sullivan, 2003). Subsequently, O'Sullivan (2003) highlighted that German enterprises were more involved in stock market activity than before. A new type of stock market, the Neue Markt, has started to replace the main market in Germany; there were 319 companies listed on it. He also indicated that the Neue Markt represented "the changing face of German capitalism". Additionally, some big German companies have sought listings on foreign stock markets since the mid-1990s. By early 2006, 17 German companies were listed on the NYSE and eleven on the LSE (see table 2). According to O'Sullivan (2003), German companies seeking an NYSE listing are waiting for acquisition opportunities.
In contrast, firms listed on Nasdaq are motivated by the access it offers to raising capital in the US markets. Coffee (1999) demonstrated that firms seeking to list on US stock markets will voluntarily adopt the shareholder-oriented system of corporate governance in order to gain foreign funds. However, some argued that although the stock market has played a more important role recently, it still plays a smaller role than its US or UK counterparts. In terms of mergers and acquisitions, the hostile takeover of Mannesmann by Vodafone in 1999 is the most cited example. Nevertheless, the number of takeovers has risen in recent years in Germany, "involving foreign buyers and sellers as well as all-German transactions" (Jurgens and Rupp, 2002) (see table 3). Moreover, after analysing data from Thomson Financial, O'Sullivan (2003) stated that German organisations have more often been acquirers than targets. Similarly, the use of stock options has increased in Germany, though mostly among the large companies listed on the new market. A multinational corporation (MNC) is one that operates in multiple nations. These companies usually have production and marketing facilities in different countries and a centralised head office in
Smith's 'invisible hand' presupposed the internal relations of community, so that capitalists feel a 'natural disinclination' to invest abroad (Burchill 1996, 57). In addition, recurring financial crises across continents call into question Adam Smith's 'natural order' (Deane 1978, 10) discussed above. This is because globalisation in finance has deepened asymmetric interdependence: less-advanced nations have an incentive to gain access to huge export markets, while advanced nations want an external outlet for domestic capital surplus. Inter-state co-operation at the macro-economic level is also becoming compulsory in order to efficiently counter unbridled capital movement. Keohane and Nye stress the importance of 'asymmetric interdependence', while Kenneth Waltz (1979) emphasizes 'symmetry' and 'reciprocity' in interdependence. However, currently increasing inter-state co-operation does not automatically guarantee the triumph of liberal thinking. Behind the co-operation, the losers might be discontented with their 'relative loss'. The potential hostility such inequality might cause represents the instability and vulnerability that globalisation confronts. The repatriation of capital profit and short-term capital taxation, for example, still remain a 'hot potato' which irritates liberalists and their adherents. The neo-realists have emphasized 'relative gain' while neo-liberalists have stressed the existence of 'absolute gain' (Waltz 1979, 105). In addition, over-speeding capital flows searching only for cost-productivity, and states' deregulation, have coerced domestic industry and labour into structural adjustment. Adam Smith had asserted that free trade should be phased in gradually over a large enough time span to complement the competitiveness of domestic industry and labour (Crane and Amwai 1997, 56). But this classic liberalist idea was, in practice, ignored by liberalist policy itself because of worldwide deregulation and the wane of states' power.
Furthermore, international institutions tend to be mobilised by individual states in order to hasten domestic structural adjustment, accompanied by domestic resistance as a result of high unemployment and deepening income inequality in the labour market. Indeed, as Keohane described it, a 'common blind spot' (Keohane 1988, 392): domestic politics under globalisation was neglected by both game-theoretic strategic analysis and structural explanations of international regimes (that is, the two representative modern liberal approaches). The validity of the domestic-international dichotomy is fairly arguable. However, it is the case that the laissez-faire policy commonly maintained by liberal thinking in the domestic economy exacerbates income inequality and deepens the bipolarisation that globalisation causes. Secondly, the worldwide proliferation of regionalism also cannot be explained by the liberal assumption that free trade enhances interdependence and finally pacifies inter-state relationships. The liberalist approach to trade in international relations theory begins with the fundamental assumption that 'trade wars', both between and within trade blocs, are essentially unthinkable (Richardson 1995, 291). By contrast,
Normos also calculates the velocity calibration; this was used to calibrate V accurately, and it was found that V = 11.3 mm s⁻¹. Magnetite is known to produce two subspectra due to its tetrahedral and octahedral crystal sites, referred to as sites A and B respectively, with a relative isomer shift of 0.4 mm s⁻¹. Results from the fitting are given in Table 3.1, and the final spectrum is shown in Figure 3.1. The fitted double spectrum (Figure 3.1) shows good agreement with experiment. Barium ferrite also has tetrahedral and octahedral sites and so was treated in the same way as the magnetite sample (its relative isomer shift is -0.1 mm s⁻¹). This double spectrum also agrees well with experiment. Upon entering the octahedral site, the iron d-orbitals split from their degeneracy to form an upper pair of orbitals. The eight Fe³⁺ ions on the tetrahedral A sites are balanced by sixteen iron ions on the octahedral B sites: eight of these are Fe³⁺ and the remaining eight are Fe²⁺. Above a threshold temperature of 119 K, "hopping" of electrons removes the difference between the Fe²⁺ and Fe³⁺ ions on the B sites. Therefore the two different subspectra correspond to the A sites and the B sites. It is the parallel-spin B sites that create the larger positive isomer shift. The split in degeneracy associated with the octahedral site (illustrated in Figure 4.1) creates an asymmetric electric field, which influences the s-electron density, consequently changing the field at the nucleus. The areas under each of the absorption peaks (assuming Lorentzian line shapes) correspond to the relative abundance of the associated site. The recorded spectrum for magnetite shows, quite clearly, the magnetic sextet associated with the iron nucleus. The A sites have a higher nuclear magnetic field, resulting in a broader energy range of Zeeman splitting. For clarity the first two peaks have been labelled with a green and a red dot for the A and B sites respectively.
However, this is not the whole story, because there does not appear to be a difference between the 5 This is due to the positive isomer shift on the B sites caused by the asymmetrical charge distribution described earlier. For the B sites the spectrum has been shifted by an amount that compensates for the difference in magnetic Zeeman splitting for the 5
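The claim above, that Lorentzian peak areas give the relative abundance of each site, can be sketched numerically. The amplitudes and widths below are hypothetical placeholders, not the fitted Normos parameters:

```python
import math

def lorentzian_area(amplitude, fwhm):
    """Area under A * (G/2)^2 / ((v - v0)^2 + (G/2)^2), which integrates
    to A * pi * G / 2 for full width at half maximum G."""
    return amplitude * math.pi * fwhm / 2.0

# Hypothetical line depths (absorption) and widths (mm/s) for one
# A-site line and one B-site line -- illustrative values only
area_A = lorentzian_area(amplitude=0.030, fwhm=0.35)   # tetrahedral A site
area_B = lorentzian_area(amplitude=0.055, fwhm=0.40)   # octahedral B site

# Relative abundance of each site follows from the area ratio
total = area_A + area_B
print(f"A-site fraction: {area_A / total:.2f}")
print(f"B-site fraction: {area_B / total:.2f}")
```

Because the area is amplitude x pi x FWHM / 2, the linewidth matters as much as the depth: a deeper but narrower line can still correspond to the less abundant site.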
If the cooling rate is increased to an extent where the carbon is unable to diffuse from the austenite, then rather than forming pearlite, the austenite transforms into a body-centred lattice with the un-diffused carbon residing in interstitial solid solution. This constituent is known as martensite, and is very hard and very brittle. Martensite is often produced for its hardness by quenching heated steel into water or oil. However, this increased hardness comes at the expense of ductility. By cooling the carbon steel at a lower rate, the pearlitic structure which forms will be of a much coarser texture, caused by the small amount of undercooling before nucleation and growth occur. The most common composition for a brass is that containing 40% zinc, termed Muntz metal. It is reasonably corrosion resistant and can be machined. With reference to the copper-zinc phase diagram (see Appendix B), the molten metal begins to solidify at approximately 900 °C. Upon cooling through the solidification range, the alloy consists of the homogeneous β phase. On further cooling, the α phase begins to precipitate. As the alloy is cooled further, to approximately 400 °C, the β phase transforms into the β′ phase. This phase is identical in composition, except that at this temperature the crystals are more ordered. At room temperature, the microstructure of the copper-zinc alloy appears as shown in Figure 12. The grinding process performed on the alloy produces the characteristic "tramlines", termed mechanical twins. This twinning is a characteristic of the copper-zinc alloy. Figure 12 also shows some areas of dark spots; these inclusions are caused by the acid attacking stress-concentrated regions during the etching process. A simple aluminium-silicon alloy usually has very poor mechanical properties due to the silicon phase occurring as coarse, flaky grains. To produce an alloy with more useful mechanical properties, the alloy is commonly modified by adding a small amount of sodium (~0.01%).
The resultant silicon changes from coarse grains to smaller fibres. The modified silicon helps reduce crack propagation, preventing fracture. Alloys containing silicon are suitable for the manufacture of sand and die castings and, less commonly, can be used for welding. Typically, modified eutectic aluminium-silicon alloys display somewhat higher tensile properties and improved ductility. Improved performance in casting is also characterised by improved flow and resistance to elevated-temperature cracking. During cooling, the molten alloy transforms at approximately 600 °C, when a liquid + α region begins to form. The primary α phase forms in clusters, and the observed growth, termed dendritic growth, appears as shown in Figure 13. As the alloy reaches the eutectic point and beyond, the liquid phase fully transforms to give the α + β structure shown in Figure 14. The photographs of the microstructure are shown periodically below, in figures 15
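The proportions of the phases present during such a cooling sequence can be estimated with the lever rule applied to the relevant tie-line. The sketch below is illustrative only: the tie-line compositions are assumed round numbers in the spirit of the Al-Si system, not values read from the actual diagram in Appendix B:

```python
def lever_rule(c_alloy, c_phase1, c_phase2):
    """Weight fraction of phase 1 for an alloy of overall composition
    c_alloy lying on a tie-line between c_phase1 and c_phase2 (all wt%)."""
    return (c_phase2 - c_alloy) / (c_phase2 - c_phase1)

# Assumed tie-line just above the eutectic temperature:
# alpha solid solution at ~1.6 wt% Si, liquid at ~12.6 wt% Si,
# for a hypothetical alloy of 8 wt% Si overall
frac_alpha = lever_rule(c_alloy=8.0, c_phase1=1.6, c_phase2=12.6)
frac_liquid = 1.0 - frac_alpha
print(f"primary alpha: {frac_alpha:.2f}, remaining liquid: {frac_liquid:.2f}")
```

The lever rule simply divides the tie-line at the alloy composition: the closer the overall composition sits to one end, the more of that phase is present.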
Despite the fact that 'IMF intervention is routinely at the request of governmental authorities', and therefore cannot be interpreted as a threat to sovereignty (Held 1989, 196, quoted in Mohan 2000, 80), critics claim that structural adjustment programmes have been applied in a doctrinaire and coercive way which leaves little room for manoeuvre. As these policies are implemented in economies with preceding years of hyperinflation and economic hardship, sometimes even the opposition parties within the country resign themselves to the inevitability of the adjustment. However, this does not mean that there is no internal resistance to the SAPs. Latin American countries, for example, have experienced strong opposition to the neo-liberal policies imposed by the IMF and the World Bank. As a result, political populism, thought to be dead in the 1980s, has been rearing its ugly head again in the region (Mohan 2000, 84). In the past, 'populism was depicted as the very antithesis of the new neo-liberal order through its association with irresponsible state spending and corrupt state-led economic strategies'. The new populism, by contrast, is referred to as a lower-class backlash against the austerity, inequalities and market insecurities attendant on neo-liberalism (Roberts 1995, 83, quoted in Mohan 2000, 85). The analysis of recent electoral results in the region - Bolivia and Venezuela for instance - provides evidence of the re-emergence of this phenomenon. In countries where political opposition has not been strong, resistance may come from social movements and non-governmental organisations (NGOs). In fact, Brown (2000, 171) reported an increase in the number of civil society associations created all around the world to challenge the neo-liberal consensus. That is the case of the movement of landless workers in Brazil. The MST is the first national agrarian movement in Brazil. Its roots can be traced through the country's history.
When the Portuguese crown claimed the land in 1500, it was decided that the territory would be divided into 12 giant provinces. The men to whom these were granted were allowed to exploit the areas to their own benefit, despite the fact that they did not own the provinces. For over 350 years, slaves brought from Africa were forced to work in sugar fields, goldmines and coffee
Interviews should be as 'normal' as possible for the interviewee, which means the vocabulary used should be as close as possible to the person's normal everyday social conversation, in order not to distract the interviewee or make them feel overwhelmed by the questioning. This can often be a problem, as the professional can be worlds apart from the interviewee in education, income and class. A real effort must be made to put all these things aside in order to obtain the information that is needed from the interview (Newell 1994). I have also learnt that the conditions of the interview need to be carefully considered in order to optimise the chances of a successful interview. For example, excessive noise can raise stress levels and also gives the impression of a lack of privacy. This was evident in our interview and could have been one of the reasons the interviewee's hands remained crossed and she did not appear to be relaxed. If this cannot be avoided, the noise should be acknowledged with the interviewee, and if it is too difficult to continue then as a last resort the interview could be rescheduled (Newell 1994). In hindsight we should have mentioned the noise levels during our interview. Furthermore, we needed to be aware of critical thinking, as this is a very important quality for a health and social care professional to have. Research has shown that it makes a big difference to the quality of service that the service user receives when the professional is a critical thinker. This involves the careful examination and evaluation of beliefs and actions (Gibbs and Gambrill 1999). With this in mind I feel we could have altered our questions slightly to probe more into whether the interviewee was a critical thinker. We needed to confirm that she could recognise many different perspectives within one topic and would not just accept an outcome if she did not agree with it but would be ready to "question the answer".
Without critical thinking the outcome could be helper-induced harm: for example, institutionalised healthy deaf children could be incorrectly labelled as having emotional problems (Lane 1991). When we came together in our group we decided we would each come up with a set of questions and then meet on another date to collate them and pick out the best from each list. We decided
The correlation table in the Appendix implies that there is a linear relationship between the listed variables; the following regression analysis will investigate whether there is indeed causality between the variables of the whole data and Appendix, P.1 When estimating a bivariate regression of qtmark against attr, we estimated how the proportion of revision lectures attended affects the mark gained in the first-year statistics exam. This regression is analysed restricting the sample to the students who have observations on The model is as follows: As the regression is analysed using Eviews, the value of the intercept is 63.74 (2 d.p.) and the value of In an econometric analysis this model turns out to be insignificant. If a normal t-test is run on this regression with the null hypothesis that Also, it has to be noted that the value of the coefficient is rather small; it cannot explain movements in qtmark very well. In this model, ability and hrsqt are added to the above model as additional explanatory variables, and the sample is no longer restricted. As a result the following model is obtained: As the new variables were included, the coefficient on From this we can conclude that the proportion of revision lectures attended has an effect on the mark achieved when ability and average hours spent per week on all lectures are added to the regression.
So the proportion of revision lectures attended has an effect on When the significance of Explanation of the t-tests and overall significance: Appendix, P.2 The overall significance of the regression was analysed as well, testing the null hypothesis This was done using the normal F-test, using the R² As a result the null hypothesis was rejected, and the coefficients are therefore jointly significantly different from zero, even at the 1% significance level. For this question, dummy variables are created from the variable This is done using the The They take values in intervals of 2,
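The bivariate regression and t-test described above (qtmark on attr) can be reproduced in a few lines. This is a sketch on made-up observations, so the coefficients do not match the Eviews output quoted in the text; only the procedure is the same:

```python
import math

# Hypothetical (attr, qtmark) observations standing in for the real dataset
attr   = [0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0, 0.3, 0.7, 0.55]
qtmark = [58, 62, 64, 66, 65, 70, 72, 60, 68, 63]

n = len(attr)
mean_x = sum(attr) / n
mean_y = sum(qtmark) / n

# OLS slope and intercept: b1 = Sxy / Sxx, b0 = ybar - b1 * xbar
s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(attr, qtmark))
s_xx = sum((x - mean_x) ** 2 for x in attr)
b1 = s_xy / s_xx
b0 = mean_y - b1 * mean_x

# t-statistic for H0: slope = 0 (the significance test mentioned in the text)
residuals = [y - (b0 + b1 * x) for x, y in zip(attr, qtmark)]
s2 = sum(e ** 2 for e in residuals) / (n - 2)   # residual variance
se_b1 = math.sqrt(s2 / s_xx)                    # standard error of the slope
t_stat = b1 / se_b1
print(f"intercept={b0:.2f}, slope={b1:.2f}, t={t_stat:.2f}")
```

In the bivariate case the overall F-statistic equals the square of the slope's t-statistic, which is why the t-test and the F-test on a single regressor always agree about significance.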
This explains why, once past yield, the stresses remain almost constant: very little additional load is required to push the remaining elastic material to its yield point, hence the divergence from linear elastic to non-linear plastic behaviour. The strains continue to increase with additional load, but as the material moves into its plastic region the strains increase greatly compared to the applied increase in load. The stress remains almost constant, since the yield and limit stress values were found to be close together. Once past the yield point the material is permanently deformed (in the case of the tensile test, stretched). Below the yield point the component will return to its original shape. The beam in Figure 7.4 shows that after being loaded to 1.3 times its yield load, the beam was permanently deformed (bent). The conclusion from these results is that once yielding has occurred, the increase in load required to create complete yielding of the component is much lower than that required to initiate yield in the first instance. Also, the strains in the component post-yield increase greatly for a given additional load, and the predicted strain values are the lowest possible values of strain present for a given load; the actual strains may be much greater than those predicted. Post-yield, the component will be permanently deformed, the level of deformation being dependent on the load applied. Once subjected to a post-yield load, the material's properties are changed: it is possible to continue to load and unload the material, which will operate in an elastic state provided that the loading does not exceed the initial preload. In effect the material has a new yield point, higher than its original yield point, although residual stresses are now present in the material structure. All three types of beam investigated were required to last at least 100,000 cycles before failure.
Fatigue theory was used to estimate the maximum load that could be applied in order for this component life to be achieved. The calculations showed that a maximum load of 5.48 kN could be applied to the plain beam to ensure it would survive 100,000 cycles. The results for the plain beam can be seen in Table 6.3, which shows a large variation in the life of the three beams tested. All samples exceeded the life requirement, although test 1 achieved only 100,476 cycles before failure. The average number of cycles for the three plain beams tested was found to be around 195,000, although it can be seen that each test varies considerably. This is a good example of how difficult it is to accurately predict the fatigue characteristics of a component: even after testing many samples, the results show a large amount of scatter, with samples failing before or after the predicted number of cycles. It can be seen in Figure 7.1 that the plain beam failed towards the right-hand side of the beam. Since the loading scheme ensures constant
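A life estimate of the kind quoted above can be illustrated with Basquin's high-cycle fatigue relation, sigma_a = sigma_f' * (2N)^b, inverted for the number of cycles N. The fatigue-strength coefficient, exponent and stress amplitude below are assumed placeholder values, not properties of the beams actually tested:

```python
def basquin_life(stress_amplitude, sigma_f=900.0, b=-0.09):
    """Cycles to failure N from Basquin's law sigma_a = sigma_f * (2N)^b.
    sigma_f (MPa) and b are assumed material constants, chosen for
    illustration only."""
    two_n = (stress_amplitude / sigma_f) ** (1.0 / b)   # solve for 2N
    return two_n / 2.0

# Hypothetical stress amplitude (MPa) at the critical section of the beam
n_life = basquin_life(stress_amplitude=300.0)
print(f"predicted life: {n_life:,.0f} cycles")
```

Because the exponent b is small and negative, the predicted life is extremely sensitive to the stress amplitude, which is consistent with the wide scatter in measured cycles that the results describe.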
This review will outline the mechanisms proposed for this biodiversity-ecosystem functioning relationship and examine some aspects of the natural environment which have been given little attention in research but should be considered important factors affecting those biodiversity effects. The observed response patterns of ecosystem properties to changes in diversity (an increase in ecosystem functioning as diversity level increases) have led to the suggestion of three qualitatively explicit mechanisms called biodiversity effects (Loreau,. This model states that a highly diverse ecosystem is likely to contain some superior species which have strong effects on ecosystem functioning (Kinzig,. This is based on the assumption that some forms of competitive success at the species level are associated with high productivity that is measurable in a quantitative way. However, there are some contradictory arguments: competitive interactions at greater diversity may correlate with other properties irrespective of growth rate and production, and there may be some important rare species with low biomass production that nevertheless have significant impacts on ecosystem functioning (Hooper,. There is an increasing number of experiments which investigate the determinants of competitive success and dominance hierarchy in ecosystem properties other than biomass production, such as nutrient supply dynamics and decomposition processes (Loreau & Hector 2001, Tilman 1996, van Ruijven & Berendse 2005). Therefore, it is expected that a better explanation of this effect will be suggested through these various new approaches. Complementarity arises through niche partitioning between species, resulting in the alleviation of interspecific competition over limited resources (Loreau,.
Ecosystem functioning should be correlated with biodiversity if resources limit the growth of the species present in a community and if resource partitioning allows for increased efficiency in total resource use over temporal and spatial scales, such as through exploiting different root depths and structures rather than causing competitive exclusion by developing the same architecture (Naeem,. Facilitation occurs when a certain species has the ability to mitigate the harsh environmental conditions to which a community is exposed, or when there is a flow of resources from one species to others which benefits the survivorship of the recipients (Hooper,. While the consequence of sampling effects would be the dominance of some species achieved through the reduction of the biomass production of other species at high diversity levels, niche complementarity and facilitation
The success of the original National production prompted a nationwide seven-city tour in 2005, and it is this production's performance at Warwick Arts Centre that will be discussed. As such, the set had to be adaptable for the various theatrical spaces the production would visit. Compared to the Cottesloe, "the smallest, the barest, the most potentially flexible" of the National's spaces, other venues on the tour included the Newcastle Theatre Royal, which, with its 1500-capacity seating and more traditional 9.01 m wide x 9.53 m high proscenium stage, is an extensive variation on the far smaller, yet adaptable, Cottesloe. The new production had to transfer successfully from venue to venue without compromising the play's subtleties. The tour at Warwick made good use of the stage's extensive height and, as in the original, was able to enact the Katurian story-sequences directly above the unfolding of the author's own story below. Particularly memorable was the image of "The Little Jesus", tortured and buried alive by her evil foster-parents, slowly sinking down in her coffin to the level of the interrogation room. It was at these moments, when the naturalistic plight of Katurian and his brother mingled with the narrative unfurling above them, that the duality of the set realised its full potential. Indeed, the surreal and nightmarish Grimm-stories Katurian retells semi-autobiographically are suspended as "queasily beautiful tableaux" (Sunday Times, Victoria Segal, 23.11.03) above the bleakly comic naturalism below to quite startling effect. Coupled with the play's Slavic-inspired score, these stories are hauntingly reminiscent of the traditional fairy tales to which each of us is accustomed: 'something wicked and whimsical also haunts Paddy Cunneen's enchanting original music. His melodies carry the fantastic echoes of a music that probably once lulled the children of Hamlin away into the mountain.'
Neither a set nor a score is suggested by McDonagh in the script, which is void of all but the most basic stage directions, containing only the most necessary descriptions. (All text quotations are taken from: McDonagh, M.) McDonagh leaves the particulars of the set unspecified: like the details of the totalitarian state the characters inhabit, the playwright prefers a sense of gloomy ambiguity to permeate the play's setting. Indeed, it is attributable to Scott Pask's set design that 'Katurian's dark imagination explodes
Deutschendorf's article "Dickens' Both Hillis Miller and Schwarzbach present their studies by looking at a certain number of Dickens's novels and proceed by following the chronology of their publication. Miller explores six novels and analyses them as the transformation of the real world of Dickens into an imaginary world with certain special qualities of its own. He argues that "throughout all the swarming multiplicity of his novels", Dickens created a "world which is unique and the same". Schwarzbach's approach is similar to Miller's, but where the latter analyses the world of Dickens in general, the former focuses on the treatment of the city only. Schwarzbach stresses that London played an important role in Dickens's own life and is at the centre of the novelist's creative imagination. His work explores the Victorian writer's approach to the city and examines how it evolves through his novels. As for Welsh, he presents Dickens's treatment of the city as a great metaphor. Unlike the two critics previously mentioned, he is more concerned with the symbolic meanings of the novelist's portrayal of London than with the descriptions in themselves. Welsh divides his study into themes which are symbolically related to the Dickensian city. For example, he underlines the association of London with death and hell, greed and money, or the hearth and home in the different works of the novelist. As far as Ackroyd and Dudgeon are concerned, their study is more focused on the autobiographical dimension. It highlights the fact that Dickens incorporated his personal approach to, and experience of, the city into his work. As for Baumgarten and Deutschendorf, both critics underline the connection between Dickens's characters and London and the way the novelist used the city as a reflection of his protagonists' personalities.
All these different critical texts encompass a period of thirty-five years, Miller's
The symbolism associated with certain species often creates human-wildlife conflicts. Mary Douglas's notion of pollution, which suggests 'dirt' is just 'matter out of place' (Douglas 1966), can be applied to human-wildlife conflicts, where pests are simply wildlife out of place: they somehow transgress the prescribed boundaries established by humans. Putman concurs that: Knight. op.cit., p.14. I earlier referred to the wolf's status as a pest, perhaps resulting from its sinister portrayal in children's stories and games. The same culturally institutionalised epithet is applied to the fox. In classic tales like Aesop's Fables and Beatrix Potter, the fox is portrayed as a 'sly, amoral, wily, cowardly and self-seeking creature'. These negative connotations cannot fail to influence human perception of the creature. Interestingly, structuralist models of animal symbolism classify the world into groups, but phenomena that resist classification are deemed anomalous. According to Knight, 'foxes have a special anomalous or ambiguous status because they straddle the boundary between field animals and remote wild animals'. They inhabit a liminal, negative symbolic sphere. Knight suggests another possibility: that 'underlying many people-wildlife conflicts are ideas of balance and reciprocity, with respect to which the behaviour of this or that animal may be deemed to be problematical.' Marvin. op.cit., p.190. Knight. op.cit., p.15. Ibid., p.16. In this respect, foxes' apparent tendency towards feeding frenzies in hencoops upsets this balance, incurs the anger of the farmer and establishes a conflict. In conclusion, anthropology
From the second campaign month, each member selects a weekend to invite local residents to visit their houses for free. This pulls in visits from local non-users, and it will be reported. Local residents need to book ahead and register, which keeps a record of host visits and builds the database. In summer, the ten houses will launch a 'Treasure Houses Pass', which can save up to 40% on admissions, encouraging visitors to visit all the treasure houses. An Annual Pass is available to purchase with a special offer. During summer, direct mail is sent to local residents, encouraging them to invite friends and relatives to visit England in the best season. The financial objective is both to catch the attention of local residents and to raise awareness among potential overseas visitors by using the budget sensibly. The details of each cost are set out in the following table, which goes along with each activity. The total cost is within the budget and allows for a contingency cost. This campaign runs from 1 The website communicates directly with overseas visitors throughout the campaign, as do web newsletters. Local communication tools are used extensively to catch attention at the beginning, for 10 and 20 days. Radio advertising will drip throughout the whole campaign. As audiences gain awareness, the PR campaign will take place in June 05. To keep up the momentum, the sales promotion from the ten houses lasts for 3 months. In August and October 05, local newspapers promote visits before some houses close in October, and in March 06 they announce the reopening. Direct mail is sent in summer and before Christmas, the best times for VFR trips. The campaign begins with diagnosis: the treasure houses need to identify problems themselves first, then give the problem to the marketing research manager to conduct research analysing it in the current business environment. Before conducting the campaign, the financial manager needs to work out how much budget is available. 
An advertising agency will be chosen to work with. Treasure Houses communicate the objectives to the account manager. The agency will be responsible for creative ideas, media purchasing, advertising production and so on for the advertising campaign. A local journalist for each house is invited to write press releases for PR, and VisitBritain offers help with PR. Production. The result of a marketing campaign is expected to achieve its objectives in terms of effectiveness, efficiency and economy (Pender, 1999). First of all, the total number of visits can be measured by ticket sales. The number of repeat local visitors should be obtainable from the database built during the campaign. Other indicators, such as the annual pass and the special local offer, may help identify the host group. Secondly, qualitative research is needed. Questionnaires can be designed to investigate the media effect. Existing visitors are given very simple questionnaires that focus on the purpose of the visit, the communication channel and the frequency of visits.
"Those who work for PMCs receive a salary from the security company for whom they work. Consequently, they are not profit-driven." As all subsections must be met, it seems unlikely that a PMC employee would be captured by the definition in its current form. Supra, n 1, 277 1125 No. 17512 (entered into force 7 December 1979) [AP1] at article 47 Ibid Supra, n 1, 282 While this definition has a number of failings, it is the current definition of a mercenary in both IHL and international law. The definition was drafted in a narrow manner, targeting specific groups and individuals. This is evidenced by its failure to include "those induced by ideology or religion, and those who may not participate directly in the hostilities." Additionally, the definition also fails to include those who train or advise state parties in an armed conflict. This definition, like much of IHL, was not drafted for "new war". Supra, n 1, 278 There is the added challenge that many PMCs, including the giant DynCorp Inc., are American or British companies. This presents a problem because the definition of mercenary would likely not apply to many PMC employees, as neither the United Kingdom nor the US has ratified AP1. It has been clearly illustrated in this section why PMCs and their employees cannot be classed as mercenaries. In their article discussing the privatization of war, the ICRC clearly illustrate that they consider PMCs to be outside the definition of mercenary, but (arguably) within the IHL system: "in situations of armed conflict, IHL regulates both the activities of the staff of PMC/PSCs [Private Security Companies] and the responsibilities of the states that hire them." In the next section of this brief, the applicability of IHL to PMCs will be examined and debated. International Committee of the Red Cross, Currently, there are three international legal instruments that regulate or ban the activities of mercenaries. 
The 1907 Hague Convention included a provision that prohibited the recruitment of mercenaries within State parties' domestic borders. This prohibition, however, was "limited to countries policing their own national territory, and were not extended to include
An example of one such approach: some extensionists from CATI are specialized in wheat production and offer all services related to this specific commodity, from information on seeds and soil conservation to market orientation. CATI also provides support on increasing production and improving livelihoods. A good example of this practice is the National Programme to Strengthen Family Farming (PRONAF). But would the cost-sharing approach exclude resource-poor farmers? A study by Dinar and Keynan (2000) concludes that "even poor farmers are willing to pay for a service that improves their economic efficiency and ability to earn a living. Nicaragua's producer clients understood that without cost sharing, the system would not endure." The analysis above leads to the conclusion that an organization need not adopt one exclusive extension approach, but can adopt a combination of several approaches that best suits its purpose. The literature suggests that an integration of several approaches would be more appropriate for a large diversity of clients. CATI provides a considerable number of services, from collective services (to groups of professionals/farmers) to individuals seeking advice on a specific topic. CATI also ranges from technology-transfer or production-oriented approaches to more participatory, problem-solving intervention. However, this does not necessarily mean that the services provided by CATI are efficient, effective, sustainable and low-cost. More on those issues will be addressed in the coming sections of this study. Rural extension in Brazil originated from an American model, called the "classical model for rural extension", according to which technological progress was the only way to promote development and modernization (adapted from Figueiredo, 1984). After a series
This is particularly relevant in the context of minority communities' right to maintain and preserve their religion and beliefs under Article 1. Refer to the full text of Article 1 in Consisting of eight articles, the Declaration purports to create a right to religious freedom in the international fora, prohibiting discrimination based on religion or belief, and is inclusive enough to bring even non-believers within its ambit. However, this blanket right to practise one's religious customs becomes problematic in the context of women's rights, since most religions are extremely gender-biased and premised on patriarchal domination, resulting in a conflict between two competing rights. Donna J. Sullivan observes: "The formal requirements of religious law, interpretations of that law, and social custom derived from it may all infringe women's rights. Religious tenets governing rights associated with the life of the family, particularly those pertaining to marriage and divorce, inheritance and personal status, frequently set religious law in opposition to the prohibition of discrimination against women". For example, Article 2 of CEDAW mandates non-discrimination against women in all aspects of their lives, whereas Articles 5 & 6 of the Religious Declaration give parents and legal guardians a right to impart religious education to their children, and religious communities are allowed to preserve their discriminatory rituals and customs under the broad right of religious freedom. Further, Article 15 of CEDAW provides for equality in civil law matters, but certain religious practices provide for judicial decisions based on the inferior position of women, thereby not granting them equality in law. Donna J. 
Sullivan, "Advancing the Freedom of Religion or Belief Through the UN Declaration on the Elimination of Religious Intolerance and Discrimination" (1998) 82(3 In most States, whether secular republics, constitutional monarchies or religious ones, because of the inherently patriarchal nature of the State structure, even the so-called gender-neutral laws are in fact grounded in stereotypical notions of 'masculinity' and 'femininity' and tend to perpetuate and strengthen the oppression of women. Moreover, with the recent rise in religious fundamentalist movements in Iran, India, North Africa, the USA, etc., efforts to curtail even the limited rights of women and their mobility and access to economic resources have ensued, since women are supposed to be the bearers of their cultural and religious identity and purity. In Hanna Papanek's words: "Control over female sexuality and reproduction requires male power, and enhances it, in kinship systems preoccupied with the purity of descent". Hanna Papanek, "The Ideal Woman and the Ideal Society: Control and Autonomy in the Construction of Identity" in Valentine M. Moghadam (ed.) With the integration of most countries into the global political economy and the emergence of 'free markets' with easy capital flows and transnational corporations, charges of 'cultural hegemonisation' and of a Western assault on non-Western cultural systems have flown thick and fast, resulting in the growth of reactionary movements in many developing countries focused on 'women's place' in the family. Thus, the contest between women's rights and religious freedom is being
The Latin American approach was supposed to reduce dependence on foreign capital, technology and markets by promoting home-grown industries, but in the end it increased foreign debt, as the deepening of import substitution required borrowing from abroad. Once the countries were unable to service the loans as their exports declined, devastating debt crises followed. Countries like Brazil and Mexico which adopted the ISI strategy overall experienced less growth, less stability and possibly greater inequality (Balaam 2006). The East Asian approach relied on strong state guidance of the economy, supporting manufactured-export-oriented industries and attracting foreign capital to finance development. Contrary to the predictions of dependency theory, Asian countries which started out poor and underdeveloped have utilized foreign capital and trade to promote growth without experiencing underdevelopment. Taiwan presents dependency theory with a paradox: it is both more integrated into world capitalism than the poor market economies and more developed (Amsden 1979: 342). The export-oriented growth strategy produced very strong results in terms of creating income and growth. These developments undermine dependency theory in two ways. Firstly, dependency theory would suggest that countries which avoid the risks of dependence are more successful than developing states which liberalize and integrate their economies into the global system. However, the Newly Industrialized Countries in East Asia have clearly outperformed the Latin American countries, which experienced greater levels of inequality. Secondly, the success of the NICs in achieving economic growth and reducing national inequalities by relying on trade and capital relations with developed countries runs contrary to what Dos Santos would expect. A development not predicted by the theory is the deterioration in the prices of manufactured goods. 
While the theory talks only about the volatility of commodities and primary goods, with the increase in global competition the prices of manufactured goods have also decreased, making it cheaper for developing countries to import. Thus the trade exchange of developing countries does not necessarily have to be negative on the basis of imports from the developed countries. A major factor affecting the balance of payments has been the volatile and rising price of oil rather than imports of manufactured goods. A particularly relevant issue when addressing a question about dependency theory is the development of the gap between the Global South and North. While a World Bank report talks about a positive net flow from developed to developing countries, Broad (1996) has suggested that once China and India are excluded, the majority of the 140 Southern countries are actually not getting closer to the economic development levels attained by the North. Broad (1996) has argued that, contrary to the World Bank's 1994 report, the majority of LDCs have not been part of the narrowing-gap phenomenon envisaged in the report. "The bottom line is that about a dozen countries have been doing well for the past few years, while the vast majority of the South is either slipping backwards, stagnating, or growing slower than the North." (Broad 1996: 9) It seems that it does come down to
A log of the dependent variable implies the estimates give us the percentage change in U5MR due to a unit change in an independent variable. Also, after making these changes in the model for U5MR, the regression errors become homoskedastic. We can also see this by comparing the two plots of standardised residuals against predicted values below. The first one, for the infant mortality rate, depicts heteroskedastic errors (which we had not corrected, choosing instead to use robust standard errors in our regressions), and the second one, for U5MR, shows constant error variance. However, we are still unable to generate normally distributed residuals. We thus check whether this could be caused by the presence of outliers in the distribution. Outliers can be detected by tabulating the standardised residuals of the equation. This shows the presence of one outlier, and even then with a standardised residual value of just -4.1. The problem of outliers is corrected by using robust regression estimates. This attaches a weighting scheme such that outliers do not have as much impact on our estimates. Thus, having discussed the variables to be included and the removal of heteroskedasticity, multicollinearity, non-normality and misspecification, we shall move on to the actual model and its analysis. Equations 1 and 2 present the model we are estimating. The results of both regressions are then presented in Table 4. EQUATION 1:- EQUATION 2:- Note: * coefficient significant at the 10% level of significance ** depicts coefficients significant at the 5% level of significance *** depicts coefficients significant at the 1% level of significance. The rest of the variables are insignificant. On the whole, the same variables are seen to have a significant effect, in the same direction, on both infant mortality and under-5 mortality. Urbanisation and the percentage of households having a toilet facility are the two significant Economic Development Variables. 
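The outlier screen described above (tabulating the standardised residuals and flagging extreme values before moving to robust estimates) can be sketched as follows. The residuals here are made-up illustrative numbers, not the study's data, and the |z| > 3 cut-off is only one common convention:

```python
# Minimal sketch of outlier detection via standardised residuals.
# 49 well-behaved residuals (deterministic pseudo-noise in [-1, 1])
# plus one gross outlier appended at the end.
residuals = [((i * 37) % 21 - 10) / 10 for i in range(49)] + [-9.0]

n = len(residuals)
mean = sum(residuals) / n
sd = (sum((r - mean) ** 2 for r in residuals) / (n - 1)) ** 0.5

# Standardise each residual; |z| > 3 is a conventional outlier flag.
standardised = [(r - mean) / sd for r in residuals]
outliers = [i for i, z in enumerate(standardised) if abs(z) > 3]

print(outliers)  # only the single extreme observation is flagged
```

In a robust regression, observations flagged this way would simply be down-weighted rather than dropped, which is the weighting-scheme idea the text refers to.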
The percentage of households with access to a toilet facility can to some extent be taken as an indicator of district income levels as well. As one might expect, it has a significant favourable impact on child mortality. Surprisingly, the level of urbanisation has an adverse impact on child mortality at the ten per cent level of significance. This result seems strange considering that unadjusted child mortality in rural India is twice as high as in urban India. The result may be a reflection of the deplorable condition of slum dwellers in urban areas. Low levels of sanitation and lack of information about treatment facilities have been identified as major causes of high child mortality rates among them. However, when adjusted for other factors, the difference
In this materials and production assignment associated with the mechanical testing laboratory, six types of metal specimen were available to test with the Hounsfield Type W Hand Tensometer. The aim of the experiments was to develop an understanding of the standard tensile test, to study the mechanical properties of some important engineering materials, to obtain values for the yield stress (or proof stress), tensile strength and ductility for those materials, and to ascertain the variability of these properties for certain nominally identical specimens. During the test, the specimens were extended in the direction of the applied load and reduced in cross-section in the perpendicular direction, ending in fracture. The tensile curve was obtained, showing how the important mechanical properties were derived. In the assignment, the objective was to assess whether the values derived from the experiments could identify a real variation in the properties of selected carbon steel specimens. The theory behind the analysis of results is stated in the section 'The tensile test' of the laboratory briefing sheet in Appendix 1, and all the equations used are listed below. Apparatus: Hounsfield Type W Hand Tensometer; six types of metal specimen (listed in the section 'SPECIMENS' of the laboratory briefing sheet in Appendix 1). The specimen was fixed between the tension head and the operating screw, linked to a force-measuring system and a means of applying the extension. A worm gearbox applied the extension to the specimen. The force applied to the specimen was transmitted to a beam which deflected; as the force is directly proportional to the beam's deflection, the force could be obtained by measuring the deflection. The deflection was recorded by puncturing a graph sheet at frequent intervals, directly linked to the gearbox. A tensile curve was thereby produced. N.B. 
Please check method details in section 'The Tensometer Type W Testing Machine' of the laboratory briefing sheet in Appendix", "label": 0 }, { "main_document": "holds true, one must appreciate the fact that women also benefited hugely from the NHS- alleviating the burden of giving birth and looking after sick members of the family. It seems apparent that social democrat critiques of the Labour government's approach to education differ in the favourability of their conclusions somewhat. Many accept that the abolition of fees for secondary schools, the raising of the school leaving age to 15 and the implementation of a state funded scholarship scheme for university led to a greater equality of opportunity. Moreover they realise the Education Act also was strong in that it addressed the geographical disparity of quality of schooling, the financial weakness of the voluntary sector and the latent antagonism between voluntary and state schools. However it also seems evident to social democratic analysts that compared to Bevan's health policy, education was 'innately conservative' [George, V. and Page, R. (1995), 'Modern Thikers on Welfare']. In terms of increasing the school leaving age and the demarcation of primary and secondary schools, Labour was merely enacting the recommendations made in the 1926 Hadow report. Moreover public schools survived and went from strength to strength after financial and political trouble in 1940; Grammar schools were now funded by the treasury; leaving less room for the more democratic multilateral schools to blossom. People may argue that the preservation of a tripartite system of education was politically astute as it meant that people's right to decide upon their and their family's future was not impinged, but there is indeed no doubt that the education system hindered momentum to greater equality. 
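As a rough numerical illustration of the quantities discussed above, the sketch below computes yield stress, tensile strength and ductility from a force-extension record for a cylindrical specimen. All dimensions and forces are assumed illustrative values, not results from this laboratory: yield stress and tensile strength divide the relevant forces by the original cross-sectional area, and ductility is taken as percentage elongation at fracture.

```python
import math

# Assumed specimen geometry (illustrative, not the lab's specimens)
d0 = 5.0e-3            # original diameter, m
L0 = 25.0e-3           # original gauge length, m
A0 = math.pi * d0**2 / 4   # original cross-sectional area, m^2

# Assumed readings from a force-extension graph
F_yield = 3.0e3        # force at the yield point, N
F_max = 5.5e3          # maximum force before fracture, N
L_fracture = 31.0e-3   # gauge length after fracture, m

yield_stress = F_yield / A0                 # Pa
tensile_strength = F_max / A0               # Pa
ductility = 100 * (L_fracture - L0) / L0    # % elongation

print(round(yield_stress / 1e6), round(tensile_strength / 1e6), round(ductility))
# → 153 280 24  (MPa, MPa, %)
```

The same arithmetic, applied to the forces read off each specimen's punctured graph sheet, gives the values that are then compared against the reference data.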
Both private schools (with the obvious price barrier) and grammar schools (with the arbitrary eleven-plus examination favouring children surrounded by more educated and articulate people) had an in-built bias towards the middle class. With 200,000 houses destroyed in wartime, a further 250,000 rendered uninhabitable, a birth rate that was about to rise sharply, and a substantial increase in public expectations of the quality of housing, Attlee's government had their work cut out in providing British citizens with houses to live in. A popular attitude to Bevan's response is one of ambivalence. Whilst analysts criticize the Minister of Health for not making the programme more far-reaching, most accept that he was constrained immeasurably by the fiscal dictates of the Treasury, the stipulations of Marshall Aid, the lack of raw materials at home, and the lack of foreign currency reserves to import materials from abroad. It is fair to say that the government went some way towards meeting the challenges posed. The government initiated the repair, conversion and refurbishment of thousands of old houses, a resourceful idea in the immediate aftermath of the war. It presided over the building of 125,000 prefabricated houses, which were surprisingly popular and long-lasting (some are still around today despite their so-called life expectancy of ten years). And it awarded four-fifths of building licences to local councils and imposed stringent rent controls on private landlords, further increasing access to more affordable accommodation. As hinted earlier, it was the extent, not
Marginson (1998) suggests that 'the boundaries of productive labour are expanding' and that a discussion of value creation in the service sector is possible, if not necessary, due to 'the centrality of services in contemporary economies'. The author develops Marx's analysis of commercial capital as dual capital, which extends the definition of productive labour into the financial services, for example, creating value 'as producers of service commodities'. There is therefore a dual nature to surplus value in the financial services, where, on the one hand, they add no surplus value to money capital but, on the other, 'the capitalist services they themselves provide do create new surplus value'. Thus, Marginson's (1998) analysis has sought to explain the growth of the service sector in terms of the creation of surplus value. The hypothesis he advances has a certain appeal in that the growth of financial services, and the service sector in general, has been a function of their ability to create surplus value as value-creating entities in their own right. Nevertheless, the application of Marx's concept of surplus value creation to the service sector does not eradicate the category of unproductive labour, but blurs the boundaries and further complicates the theory. Gough (1972) indicates that productive labour is that which produces or modifies a commodity, a use-value. Yet this distinction is obscured when analysing necessary and luxury goods, workers in the state sector and, as indicated above, the impact of service sector workers in, for example, advertising, sales and finance. Gough (1972) infers that Marx would analyse the latter range of workers as unproductive, based on 'his analysis of the determination of needs under capitalism'. Nevertheless, Marx would consider the expansion in the number of workers producing luxury, unnecessary goods as productive, as they produce use-values. 
This would be consistent with Marx's original definition of productive labour, but it loses a certain qualitative appeal, as goods which some would deem unnecessary, including, for example, arms production, would be included. Relating this discussion back to the labour theory of value is problematic, as Marx's definitions are open to interpretation and it is therefore impossible to provide definite categories of productive and unproductive labour. It could be argued, however, that certain service sector labour warrants the definition of productive labour, as discussed in Marginson (1998). Yet it is important to note that within modern-day enterprises there is a significant number of workers involved in unnecessary and unproductive labour (e.g. advertising) who are supported by surplus value created in the production process. The existence of surplus value is, therefore, a definite reality, as 'surplus value or profit, in the words of Marx, is the unpaid labour of the working class' (Sewell and Woods 1983). Examples of the extraction of surplus value from workers can be found in an analysis of Japanese working practices, where 'Japanese car makers obtained a big advantage over their foreign rivals partly on the basis
This motive agrees with 'Gimme shelter', Finally, companies that become multinational are able to This simply means that firms are able to move their resources or production around strategically within any country without incurring a large amount of cost. For example, many (formerly) American and European-based companies who have traditionally had a high reliance on customer-oriented call centres, such as Prudential (Insurance) and HSBC (Bank / Financial Services), have now resorted to ' The cost savings that resulted were large enough that they actually offset the cost of relocating the whole operations permanently. In conclusion, corporations have subsidiaries abroad for the three main reasons stated above. However, there are plenty of other reasons that may push a firm to become multinational, and it is normally more than one reason that will bring a firm to invest abroad. Besides that, it should be noted that the profit maximisation motive is the main motive that has actually driven MNCs to have subsidiaries abroad.", "label": 0 }, { "main_document": "cultural borrowings were not considered. Other important aspects of culture such as language, values and beliefs were also ignored. Many social scientists contend that there is no such thing as national culture because subcultures in a country can vary greatly in their values and beliefs. Hence Hofstede's framework may only be appropriate for business analysis. Clark, Terry, Culture's Consequences: Comparing Values, Behaviours, Institutions and Organizations Across Nations. Nevertheless, it cannot be denied that Hofstede's framework has been immensely influential in the social sciences. His framework has been widely used and has played a massive role in providing a good starting point for further research in many fields. At the same time, I support the view that culture is something which needs to be understood from a personal point of view, that is, through active involvement. 
Nevertheless, regarding international business, firms seeking to invest abroad must first be introduced to the notion of cultural differences before going any further, and I feel that Hofstede's framework works well to achieve this.", "label": 0 }, { "main_document": "skills\", and come back to the fundamental belief in \"self-discipline\" and \"individual initiative and responsibility\" of early days. (Schoen 1957) It is worth analysing the implications of HRT on control mechanisms compared to bureaucratic control, and exploring the fundamental nature of the two. Bureaucratic control is run by incentive, which may generate conflict among workforces in terms of work rules and job guarantees. Control is externalised rather than internalised. In contrast, normative control, rooted in human relations theory, works through informal processes and value systems rather than a formal mechanism (Ray 1986). Moreover, while traditional bureaucratic control is associated with tight supervisory attention and formulated rules, human relations control adopts teamworking and peer pressure to achieve management objectives. The study by Roberts (1984) on the different managing approaches of two managers is highlighted to obtain a deeper view of bureaucratic and human relations controls. One person is a scientific management type of manager and he avoids any human contact with employees, who are seen as \"economically motivated automatons\". The other manager takes a human relations approach and she talks and listens to employees and motivates them to work. No matter what approaches they use, Grey (2005:46) pointed out that the vital point is that they both want to control their employees: \"one by In addition, Grey stated that human relations theory is a response to the \"limitations of scientific management as a means of organizational control\". It is not an alternative but an extension of scientific management. 
Etzioni (1961) introduced three types of control: coercive control, remunerative control and normative control. Bureaucratic control is more closely related to remunerative control, while the human relations approach can be categorised as normative control. Control is essential in organisations, since managers are accountable to shareholders, whose concern is largely with economic performance. The constant conflict between workers and management also requires control and surveillance. Johnson (1949) suggested, however, that if managers can convince workers that higher production leads to a better standard of living for all, without preventable risks to their jobs, management and workers could share the same goal, that is, a joint interest in increasing productivity. When workers are willing to cooperate, business will gain superior prosperity. In summary, this essay has discussed the implications of human relations theory for the organisation of work in three dimensions. Employees' social needs are influential, and management could take advantage of this potential to form well-organised teamwork activities. Effectiveness and productivity would be improved in certain organisations under certain conditions. The underpinning management concept of encouraging teamworking and empowerment is to internalise control within and beyond the workplace, and to extract maximum commitment and endeavour from employees. One limitation of this essay is that the implications of HRT are mainly considered from a managerial perspective rather than the workers' point of view, which could be explored further. Likewise, the criticisms largely refer to management actions in adopting the human relations approach. A critical feature of the human relations approach, indicated by Barnard (1938), is the social engineering role combined with management through sustaining the integration of the organisation. 
Though human relations are not concerned with", "label": 0 }, { "main_document": "tune into and remember the language they hear, babies seem to be more sensitive to some speech sounds than adults. This is especially apparent when regarding the differences between phonemes. In English, the sounds /r/ and /l/ are pronounced, heard and spelt differently. So if a recording plays sounds where /r/ gradually becomes /l/, English adults will perceive a sudden change from one consonant to the other (Gopnik et al, 2003): we turn a graded input into distinct categories. Conversely, Japanese adults, who do not distinguish between these phonemes in their language, will perceive no sudden change (Gopnik et al, 2003). Whilst adults seem constrained by their mother tongues, babies under six months old who have only been exposed to one of English, Hindi and Salish can discriminate the phonemes of the other languages as well as their own (Butterworth et al, 2002). This initial high sensitivity to many changes in language sounds means humans have the ability and potential to learn any language they are exposed to, regardless of their race or genetic heritage. However, this impressive ability to distinguish so many phonemes soon diminishes. The Japanese babies will no longer perceive /r/ and /l/ differently after they are around six months old (Butterworth & Harris, 2002) when their phoneme recognition ability becomes similar to an adult's: 'exposure to a particular language has altered their brains' (Gopnik et al, 2003, p.104). Gopnik et al (2003) believe this process is caused by babies listening to what we say and constructing prototypes for each phoneme; 'grouping the sounds they hear into the right categories, the categories their particular language uses' (p.108) so English babies will develop separate /r/ and /l/ prototypes, but the Japanese will include both sounds in one. 
Whilst such narrowing may seem restrictive to a baby's potential, in fact this eventual 'learning by forgetting' (Dupoux & Mehler, 1994, in Butterworth & Harris, 2002) means that they will be able to focus more attention on understanding the phoneme distinctions that matter in their own language. This will help them later to recognise and learn whole words and their meanings. As well as having equal potential to learn any language and preferring to listen to human voices over other sounds, four-day-old babies have already learnt to recognise their own language. French babies of this age prefer listening to French over Russian (Butterworth and Harris, 2002). They seem to be able to make this distinction by paying close attention to various sound features; as well as phoneme categories, babies recognise the prosodic and rhythmic features of the language they hear. For example, a two-month-old English baby will distinguish more between Japanese and English recordings than Dutch and English recordings, presumably because Dutch has similar stress patterns to English (Christophe & Morton, 1998), and they will prefer to listen to languages with similar rhythms to English (Cutler et al, 1993, in Butterworth & Harris, 2002). This awareness of the many distinguishing features language sounds have and the preference they have for listening to their native language sounds will also help infants to", "label": 1 }, { "main_document": "of malaise among the populace of Europe. At Maastricht, the parameters for which legitimacy-improving measures could be implemented were agreed. 'Thus', as Majone put it, 'the same Treaty that stimulated the current debate over the democratic deficit also laid the foundations on which the autonomous legitimacy of [EU] institutions could be firmly established'. 
Accordingly, 'careful institutional design can provide a number of instruments which political principals may use to minimise the danger of bureaucratic drift and reconcile independence with accountability'. The regulatory system of EU governance is legitimate 'as long as the tasks assigned to this level are precisely and narrowly defined, non-majoritarian sources of legitimacy - expertise, procedural rationality, transparency, accountability by results - should be sufficient to justify the delegation of necessary powers'. This is not included in Majone's hypothesis. Popular consensus relies on the imagined natural and essential nature of political organisation, Ibid., pp.22-8 Daniela Obradovic, 'Policy Legitimacy and the European Union', Paul Howe, 'A Community of Europeans: The Requisite Underpinnings', Neither giving the European Parliament, the technocrats, nor the member states greater power will prove effective. The lack of majoritarian consensus is often seen as a determinant of the \"democratic deficit\" in the EU but this is actually a good thing in a pluralistic community as it ensures the rights of minorities. The 'homogeneity' thesis, therefore, is unconvincing. The Habermasian idea of \"civic solidarity\" does not necessarily require a naturally predetermined conscience but, rather, an imagined togetherness that is perceived to be natural yet emanates from functional co-operation and coexistence. Furthermore, as the EU remains contingent in many aspects, particularly in the public arena, upon national political machinations it is difficult to achieve a coherent association with the people. Marcus H The Unsolved Legitimacy Problem of European Governance', Ibid., pp.252-6 Ibid., pp.256-7 National governments need to start championing the cause of the European association - which benefits them - or else the system will wither. 
'To solve common problems more effectively...major decisions and supranational authority at the European level must be given dependable recognition'. The existence of a multi-level polity is a vital asset of transnational governance but it requires the people of Europe to develop an identity that recognises it as such. It is of particular importance, in the meantime, that the EU does not develop into a quasi-autocratic political machine, attempting to swallow up the competencies of the state, for this will only trigger a negative popular reaction. Ibid., p.263 Lene Hansen and Michael Williams, 'The Myths of Europe: Legitimacy, Community and the \"Crisis\" of the EU', Richard Katz, 'Models of Democracy: Elite Attitudes and the Democratic Deficit in the European Union', Moravcsik, along with Majone, Thomas Zweifel, Scharpf and others, paints a convincing picture of the nascent, non-majoritarian, multi-level system of governance in Europe in which the EU is designed to aid the resolution of transnational governance issues that affect sovereign states. For instance, the Commission's legislative initiative on issues where greater public involvement is customary - say, environmental regulation, consumer protection, and executive appointments - is in practice falling to the directly elected", "label": 1 }, { "main_document": "Sensation and perception are easily confused, so to clarify the boundary between the two I will define these terms. Sensation is recognised as the immediate and basic experiences which are generated as stimuli fall on our sensory systems, whereas perception refers to the interpretation of these sensations by the brain giving them meaning and organisation (Matlin, 1997). The speed at which a sensation becomes a perception is so great that many people take for granted the way sensations are processed and interpreted by the brain, to the extent that they do not realise any sort of processing is carried out. 
Questions regarding the information which informs us of what we perceive a sensation to be have been considered for hundreds of years with many different theories being formed. The theories can be divided into two groups: those which outline a top-down process and those which outline bottom-up processing. Top-down processing refers to the way that our brain interprets sensations according to our own knowledge, whereas bottom-up processing involves us understanding the meaning of a sensation purely from the information the stimulus itself provides us with (Gleitman, 2004). More than 80% of perception is accounted for by vision, so most of the case studies I will use will investigate how visual sensations are perceived. The first top-down theory I will present is one which says that we are each born with innate knowledge about the world which allows us to create accurate perceptions from sensations we have never before experienced. It is necessary to investigate subjects who have no previous experience of certain sensations, such as blind people who have no experience of visual stimuli, or young infants who have not experienced much at all, so we can be sure that the knowledge used for perception is in fact innate and not learned already. A case for innate knowledge involving infants has been researched by Gibson and Walk (1960), who were interested in the way that infants perceive depth. They conducted an experiment which involved a 'visual cliff' (see fig. 1); this is a table topped with glass strong enough to support a child's weight. Directly underneath the glass, on one side, was a piece of checked material. On the other side there was also checked material but it was a considerable distance below the surface of the glass. In this way, it looked as though there was a drop down although the glass was in fact very strong and the child could not fall. 
The infants were tested by placing them on the centre board of the visual cliff and first asking their mothers to call them across the 'shallow' side of the table. The infants would crawl to them quite happily. However, when their mothers called them across the 'deep' side, 92% of the children would not cross over it. The experiment was repeated with day-old chicks, newborn goat kids and kittens, and none of them would cross over the 'deep' side. This seemed to show that the perception of this apparently dangerous fall was innate; however, the results for the", "label": 1 }, { "main_document": "that the value of securing the francophone culture is worth any loss of opportunity and alienation that could occur. However, it is obvious from this case, and the similar cases that have transpired in the past, that the minority group themselves may not agree. In the case of Singer v. Canada, the applicant's business, Alan Singer Limited, served a predominantly anglophone clientele in a predominantly anglophone region of Quebec. From 1978 onward, he was notified by Quebec authorities that he was not permitted to advertise in English, as a result of the relevant sections in Bill 101. He took the government to court, and his case was dismissed at the Court of Sessions in Montreal, the Quebec Superior Court and the Court of Appeal. In 1991, Allan Singer made a communication to the United Nations Human Rights Committee under Article 5, paragraph 4 of the Optional Protocol to the International Covenant on Civil and Political Rights. In their statement, it was declared that, \"the Human Rights Committee, acting under article 5, paragraph 4, of the Optional Protocol to the International Covenant on Civil and Political Rights, is of the view that the facts before it reveal a violation of article 19, paragraph 2, of the Covenant.\" Following the submission of Mr. 
Singer's claim to the Committee, but preceding the Committee's decision, the law was re-written (in 1993) in a manner that no longer required the notwithstanding clause. However, as the law had been changed to allow additional languages besides French on signs, the Committee concluded that Singer had been given a reasonable remedy by the State, and closed the claim. Supra Note 20 Over the years, there have been significant changes to the legislation that governs minority language rights in Quebec. Since its creation, some of the provisions of Bill 101 have been relaxed. There is no longer a requirement for parents to have been educated in English in Quebec; now, education at the primary level anywhere in Canada will suffice. The laws governing English on commercial signs have been relaxed as well, likely a direct result of the decision of the United Nations. However, the QA population continues to clash with the QF majority, particularly in regard to rights concerning language in education. It cannot be denied that the QA are one of the better-treated minorities in the world. However, for violations of minority rights to take place in a country that is recognized as a world leader in the protection of minority rights is unacceptable. The basis of a solution is respect between the linguistic minority and majority in Quebec - respect that has sadly been lacking through much of Canadian history. Supra note 1 at 70. Translation: That I will never arrive. But it is movement and it warms me and I find my hope.", "label": 1 }, { "main_document": "is the time taken to kill all the spores at a temperature of 121°C. It is a legal requirement for the bacterial spore load of a can to be less than 10,000. The heat is transferred to the tin cans by convection, conduction, or convection-conduction, depending on the properties of the food product in question. 
The second law of thermodynamics states that heat will flow from hotter regions to colder regions, thus dispersing the heat through the product. In a still retort, heat in the form of steam is transferred to the wall of the tin can. Heat is then transferred to the contents by conduction if the product is very viscous, or by convection if the product is suspended in either a brine or a syrup. It is important that, after heating, the cans are cooled as quickly as possible to a final temperature of 35-40°C. The cooling is achieved by gradually reducing the pressure and gradually introducing cold chlorinated water into the retort until level with the cans. It is necessary to avoid too much manual handling of wet cans, as post-process contamination can occur. Drying cans quickly after cooling and reducing manual handling of cans are very important for minimising post-processing contamination. It is also important to ensure the closure (seal) is airtight, thereby eliminating post-processing contamination. During the formation of a vacuum, extremely small quantities of water can enter the can; it is therefore important that the cooling water is chlorinated. Non-spore-forming bacteria can enter the can if it is not handled carefully after processing. Blanchers, fillers and equipment in which food residues may remain at optimum temperatures are a potential source of spoilage bacteria. It is therefore important that good sanitation is maintained. Correct labelling (which should include drained and filled weights) and storage of the cans is important; the cans must be kept cool during shipping and before they leave the factory. A target spoilage rate of < 1 in 10 The table was taken from Low-acid foods are considered improperly canned if any of the following are true: The food was not processed in a pressure canner. The canner's gauge was inaccurate. Up-to-date researched processing times and pressures were not used for the size of the jar, style of pack and kind of food being processed. 
Ingredients were added that were not in an approved recipe. Proportions of ingredients were changed from the original approved recipe. The processing time and pressure were not correct for the altitude at which the food was canned. The apples were prepared as necessary by washing in 1% HCl to remove wax, pesticides and herbicides; then washing in water, peeling, coring, and slicing into small, regular-shaped pieces, and placing in water. Any defective fruit was removed to prevent contamination. Meanwhile, 5 litres of 15% sugar solution was prepared from water and table sugar, and the concentration was tested and adjusted using the refractometer. The sugar syrup was boiled, using an appropriate piece of processing equipment. The processing equipment was prepared, containing soft water heated to 83°C. The apple slices were", "label": 1 }, { "main_document": "the idea of the truth about Nisa's account, and justified publishing it as a way of balancing the one-sided picture that has been painted by past researchers (1990:350). This action is political and is in line with the feminist proposal that the researcher's position in relation to those whom she is representing needs to be thoroughly explored, in terms of her own social, political and personal interests, and the assumptions she brings to her understanding of those she is researching (Gillies and Alldred, 2002:42). Seeking consent for publication is another aspect of the case study where ethical issues are reflected. The question is, what obligation does the researcher have to the women she interviewed, particularly when it comes to publication of her findings? Johnson, drawing from Fichter and Kolb (1953), outlined the responsibilities of the researcher to six entities: sponsors, funding agencies, publishers, other scientists, society, and participants. He adds a seventh, the researcher, as she/he could be faced with negative reactions as a result of the research (1982:84). 
In the case study, Shostak experienced some negative reactions from the community she researched. Members of the community misinterpreted her research, as women she interviewed were labelled as unfaithful to their partners. A man who accused his wife of infidelity reflected this in the statement: 'if you aren't having affairs, what does Marjorie (referring to Shostak) speak to you about all the time?' (Shostak, 1990:350). In seeking Nisa's consent for confidentiality and to publish, Shostak behaved responsibly towards her research participants (Shostak, 1990:350). According to Barnes, with publication, researchers run the risk of making public what participants would prefer to keep secret; he advised that one way of controlling the effect of publication is to make sure that those affected agree to what is being said about them (1970:246). I find Shostak's discussion with Nisa deceptive. Shostak stated that she would conceal identities, although it is unlikely that any repercussions from the book would reach Nisa (1990:251). This gives the impression that Shostak published the book with very private details of Nisa's life, knowing that Nisa would never know how she was represented in the book. Johnson advised that researchers, in preparing their manuscripts, should assume that both the identity of the location studied and the identities of the individuals will be discovered, and should reflect on the consequences before publishing, and think about what people will feel when they read about themselves (1982:87). This did not apply to Nisa, as she could neither read nor have access to books (Shostak, 1990:351). Shostak further displayed her sense of responsibility by promising to share the resources from the book with Nisa and with the Kalahari People's Fund (a foundation of the !Kung people). To what extent Nisa understood this remains questionable. Some feminists have proposed that researchers should link ethics, methods, methodologies and epistemology. 
For Code, there are ethical issues involved in research relationships, as well as in being accountable within the varied sets of relations that comprise any given research project (1987:10). From the beginning of a research project and after its completion, a researcher and", "label": 0 }, { "main_document": "(strength) these can be considered important requirements; below is a list of specification requirements for a pressure vessel. The CES software is basically a large materials database that contains thousands of materials and their associated properties. The software is designed to help the design engineer choose the most appropriate material by specifying design/requirement parameters. The selection process is broken down into stages; each stage reduces the list of materials produced by the previous stage. Performance indices are equations describing the performance of a material in terms of its material properties. The equation can contain either a single material property, or be a function of two or more properties. CES has tabulated all the various material indices, and arranged them according to the design requirement. An example of this is damage-tolerant design, where the fracture toughness is divided by the failure strength to give an index value. An example of the use of material indices would be strength against mass against cost. If a designer wanted to choose a material for an airplane floor, then the material index to use would be (failure strength)^1/2 / material density. This performance index could be plotted on the Y-axis of the graph and material cost plotted along the X-axis. Using this performance index the designer would be able to see the relationship of strength to density against cost. In this instance the designer is able to choose the cheapest material that meets both the high-strength and low-mass requirements. 
Since the specification for the gas cylinder states specific values for the yield and ultimate tensile stresses, this would seem the logical starting point. CES uses project files to group the selection stages together so that the selection process can be re-visited. The first stage is a limit stage where the specified stress values are entered; materials whose properties fall short of these requirements are filtered out of the selection. The first step in the selection process was to enter the actual maximum and minimum limit values. These values are specified in the initial design specification. The values used here are entered into the CES software. Graph 2 - Fracture toughness vs endurance limit. Graph 3 - Strength vs stiffness. Graph 4 - Density vs stiffness. These stage selections are used to identify a subset of materials which maximise fracture toughness, endurance limit, strength and stiffness, and minimise density and cost, while meeting the constraints for yield stress and UTS. Constraints can be imposed on the graphs to reduce the materials suitable for the function of the design. These constraints can be a performance index line, when plotting one material property against another, or a box selection of values from a desired value to either the maximum or minimum value, depending on the properties being plotted. Other factors that are important when choosing what material to use are joining, how the pressure vessel will be manufactured, surface finish, and whether it will be painted or coated to improve the chemical resistance. In each of the graph stages constraint boxes were used to select only the best materials in", "label": 1 }, { "main_document": "that individualists profess. Rather, he sees liberty in a community as best exercised at the highest level, thus enhancing liberty in an ordered way. 
As liberty exercised on the most individual, narrow level only furthers contradiction in interests, Rousseau proposes the pursuit of the community interest, which, if carried out fully, secures the welfare and safety of all its citizens. What Rousseau does not pay enough attention to, however, is the fact that 'there is never a perfect fit between moral requirements of our rational nature and requirements of citizenship'. Rousseau, on the contrary, believes in the stabilising role of the general will, and the inalienability of the will in general. The problem, nonetheless, lies with the radical solution Rousseau's general will poses. Asserting that '[t]o be governed by appetite alone is slavery, while obedience to a law one prescribes to oneself is freedom', Thus, a community taking its legislative task to the extreme might indeed force its citizens into abiding by the law. The question becomes, then, not to what extent Rousseau can be used for totalitarian ends, which he explicitly sets out to prevent, but rather how a feasible system may be envisaged in which particular interpretations of the general good can obtain a stable construct in which the individual is both more protected and better positioned to criticise the state's pursuit of the general good. Viroli, Christopher Bertram, Rousseau, Neal, Christopher Bertram,
It will further consider the standardisation of leisure choices and the McDonaldization thesis to explain the dissemination of class structure within society and the importance of other measures, such as lifestyle, to re-define class. Leisure has become an important aspect of life, regardless of the factors that influence choice. Haywood et al (1995) state that leisure may be what is done after all other duties have been completed, or a measure to achieve an aim within society, hence being seen as why it is applied rather than what actions define it. It can also be regarded as a goal, therefore emphasising independence and the ability to choose leisure as a form of personal development. They also analyse the relation between leisure and work, suggesting that in some cases they become intertwined, in others they are totally contradictory, and in some there is no relation at all. The definition of leisure is in no way linear, and many aspects such as age, gender, social class, ethnicity and disability, which are internally linked, may constrain our choices. Coffee has been a conventional beverage since the seventeenth century (Mintel, 2004), following the emergence of coffee houses, and according to Scarpa (2004), the aim now is to improve the taste of what is already a trendy drink. The sudden increase in coffee shops is a concept worth analysing because, although corporate brands are appealing to almost every market segment, there are underlying differences regarding the experience of going to a coffee shop. Clarke & Critcher (1985) define age as being socially constructed, therefore influencing other important factors such as time, money and social interaction. Haywood et al (1995) show stages in life which influence our leisure choices and claim that adolescence is a period of indecision, an identity crisis, where youth use leisure activities as a form of expressing themselves and stating their individuality. 
Equally, retirement and old age are also defined by society. Retirement refers to an end to paid employment, which is often socially associated with being old. However, nowadays society urges people to start employment later and retire sooner, which, combined with longer life expectancy, increases the period of retirement (Haywood et al, 1995). Naturally, this brings many concerns for the retired who, being rather young in comparison to the elderly, have different needs which are now being considered by leisure providers. Mintel (2004) shows that there are many differences in the consumption of coffee, varying by type of coffee, region and income. Generally, instant coffee is preferred by the over-20 age group, and is most popular amongst the over-34s, who are likely to be employed and have work status (appendix 1). With regards to
As a direct result of the 1998 economic crisis, various regions had to reduce payments to families with children, or limit them to families living below the poverty line only. The Committee on the Rights of the Child has heard criticism of the newly reformed education system from Russian NGOs, stating that approximately 2.5 million children still do not study anywhere. In Russia as a whole, something like 50,000 children run away from home each year. In 2003, 27% of the Russian population were living below the internationally recognised subsistence level of $73 per month. These figures are merely a sample of the damning evidence that NGOs are always ready to cite to 'prove' Russia's failings in the field of children's rights. As will be explained below, however, there have been some improvements, and there are some mechanisms in place to ensure other areas are developed also. CRC/C/SR/565, paragraph 3. Committee on the Rights of the Child, Day of General Discussion, "State Violence Against Children", Submission by Russian NGOs. 'World Health Forum', 1996. The Russian Federation recognised that the way it dealt with children left with no parental care was unacceptable. The number of children left like this is persistently increasing, and at the end of 2001 reached 685,132. The common estimate of the number of 'social orphans', that is, those with at least one living parent, is 95% of this number. The reasons for this ever-increasing number of abandoned children are numerous, but invariably lead back to the economic crisis facing their families. Wages are falling and prices are increasing, leaving families without enough money to feed and clothe their children. 200,000 children were placed in State orphanages and boarding institutions in 1999, the rest with either close relatives, or adoptive or foster families. CRC/C/125/Add.5, paragraph 146. 
Committee on the Rights of the Child, Day of General Discussion, \"State Violence Against Children\", Submission by Russian NGOs. The 1995 Family Code of the Russian Federation established arrangements for children left with no parental care that are in line with the UNCRC. This Code states that adoption", "label": 1 }, { "main_document": "for all. Gold represented just that for many. The mid-century responsibilities that had been woven into Northern society concerning family obligations were also largely ignored as the chance to make a quick fortune was seen as favourable. Rodman W. Paul, California Gold, (US, 1947) p23 Malcolm Rohrbough essay No Boy's Play in Kevin Starr and Richard J. Orsi eds. Rooted in Barbarous Soil, (US, 2000) p25 For those who chose to travel via the Isthmus of Panama, a journey which would take between 30 to 90 days, the wonders of the region gave them new experiences, showed them things the likes of which they had never seen before. Even though this route was seen as quicker than the alternative venture around Cape Horn, if there were a shortage of vessels on the Pacific coast, as there often were in 1849, then a stay in Panama would be prolonged. Most of the ships that left for San Francisco in that year were worm-rotten with awful living spaces and food supplies. Whilst crossing the Isthmus, disease dogged many travellers, as cholera, dysentery and yellow fever struck within the immensely unsanitary conditions. The region both intrigued and appalled the American migrants. Many viewed the natives as lazy, but also disrespectful toward the Sabbath. The recent evangelical revivals during the first half of the nineteenth century can partly assist explanation of such opinion, as the natives gambled exuberantly on Sunday's. Viewing the Fandango, 49er J. 
E. Clayton commented, 'they...have no sense of shame about them.' Such views indicate the conservative nature of the Americans during this period, most of whom would not have come into contact with diverse cultures before. Many journeymen also saw the natives of the region as heathens, having had no prior knowledge of Catholicism, although many viewed the Catholic buildings with awe. The journey across the isthmus involved canoe trips and mule rides through rich forests diverse in animal and plant life. Despite the beautiful nature on show, arrival in San Francisco Bay often brought great relief to the migrants. This highlights both the tedium of the journey and their discomfort in the presence of foreign cultures. Malcolm J. Rohrbough, Days of Gold, (US, 1997) p59 The journey around Cape Horn, for those who chose to undertake it (approximately two thirds of sea-going emigrants), was far longer and more laborious than the other sea-faring option, but that did not mean it was any safer. Passengers tried to amuse themselves with games on board, whilst cockfighting and bullfighting were common when ships anchored. The Strait of Magellan proved hazardous, with strong currents and dense fogs, which only served to prolong the journey. Life on board ship, whether around Cape Horn or across the Isthmus of Panama, proved tense at times, as the class and regional diversity of the passengers on board most ships highlighted growing social and cultural gaps within American society. Travellers had in many circumstances left their families and, as individuals, were in a position to observe other cultures from within their own country. The aforementioned discomfort felt by
The public sector plays an important role in providing leisure facilities in the UK. Both central and local government are involved. In central government, responsibility for leisure is spread across four major departments - the Departments of the Environment, Education and Science, and Employment, and the Ministry of Agriculture, Fisheries and Food. There are also government agencies, quangos and non-departmental public bodies, e.g. the Arts Council of England, the Sports Council and the National Park Authorities. The government works with and through these to provide for recreation. They assist with planning, provision and management. The local authority has a large input into leisure provision - in 1985/6 local government spent Land-extensive facilities (such as water recreation and parks) and very expensive buildings are also often provided by the local authority, due to the cost of building and the amount of land they require. They are therefore not attractive investments for the private sector. The public may not pay directly for the use of some facilities, but they may pay indirectly through rates and tax. For other facilities there may be a charge, e.g. for golf courses and theatres. This charge may be subsidised by the local authority. The local authority pays the salaries of those who work in its facilities, and this makes up a large proportion of its leisure spending. The local authority also provides funding indirectly, e.g. through financial support. They provide grants and support to youth and community services and organisations. The planning system has an important role in recreation. The council can assist with the availability of land and resources for leisure by giving or withholding planning consent. This will have an effect on leisure provision by other sectors. Planning guidance (specifically PPG 17, which applies to sport and recreation) guides the council as to what kind of leisure facilities should be encouraged and where they can be built. 
Provision of leisure opportunities by the public sector is led by central government policy. Policy filters down to the local level. The local authority provides opportunities for leisure and recreation through "planning, facilities, services, budget and support" [Torkildsen, 2005]. However, the role of the state in the provision of leisure is "complex and fragmented" [Haywood, 2002]. The voluntary sector comprises both volunteering as a leisure activity in itself, and not-for-profit providers, e.g. societies and clubs. Community involvement and volunteering are an important part of making this sector what it is. Voluntary organizations have a long history. In the early 18th century they existed as political coffee clubs. Voluntary sector organisations became increasingly important as rural society became industrialized; they provided "the medium for development and pursuit of common interests, which might previously have been met by the extended family or local community." [Haywood, 2002] The voluntary sector is made up of a huge range and diversity of organisations, some of which have primary goals other than relating
In fact, some infants show fast mapping skills as young as 13 months (Schafer and Plunkett, 1998, described earlier; see also Kay Raining Bird and Chapman, 1998, cited in Hoff, 2001). This suggests that the three-year-olds' poor performance reflects the learning conditions rather than an inability to fast map: a three-year-old is likely to benefit from joint referencing, pointing and other social factors involved in caregiver-child interaction. A study carried out by Grela, Krcmar and Lin (2004) examined exactly this. They were interested in comparing children's word learning abilities when novel words were presented in maximal and minimal learning conditions. Maximal conditions were those where the child and caregiver interacted. In these conditions the caregiver used child-directed language and non-verbal input such as pointing to help the child associate the novel word with the target object. Children in the minimal learning condition watched a cartoon. Television is a non-interactive medium where the speaker is unaware of the listener's behaviour or attention and where social-pragmatic clues to the novel word's denotation are minimal or absent. When the children were tested, it was found that those exposed to novel words in the maximal learning conditions performed much more successfully than those who encountered the words in the minimal learning conditions. Again this demonstrates the importance of child-caregiver interaction to vocabulary learning. The same study used a fast mapping task to investigate how internal factors such as age and vocabulary size influence vocabulary learning. It was discovered, as expected, that in general the older children get, the more readily they acquire new words, and that children with larger vocabularies tend to learn words faster than children of the same age with lower vocabulary levels. Carey and Bartlett also found that age has a significant influence on fast mapping skills, concluding that before 18 months the child is a slow mapper (Carey and Bartlett, 1978). 
Many investigators believe that prior to the vocabulary spurt, which happens around the age of 18 months, children's words are radically different from those of adults. They are learned in a slow, associative way, without the support of the word learning constraints discussed previously (Bloom, 2000). However, several researchers have proposed that it is vocabulary level, not age, which is the best predictor of fast mapping skills. They suggest that a critical number of words has to be reached, in order to obtain some knowledge of how the lexicon works, before the child
The first, the 'artefact' explanation, states that apparent differentials in health status between social groups are created by the process of measurement and data analysis rather than existing in their own right, often referred to as numerator/denominator bias. This explanation has been largely disregarded; Whitehead (1988) shows that evidence from "longitudinal studies such as the Whitehall study has provided fresh evidence of a social class gradient in mortality between different grades of the civil service". The second, social selection, accepts that health and social position are linked but suggests we have a situation of social mobility in which the direction of causation runs from health to social position. To an extent this is a real phenomenon, because the disabled or chronically ill cannot work and so move down the social hierarchy, but longitudinal studies have shown that at the very most this is a very minor contributor to socio-economic differences in health. The third, the behavioural/cultural explanation, states that social class determines health through social differences in health-damaging or health-promoting behaviours. Graham (1999) "estimated that health related behaviours like smoking, diet and recreational exercise account for between 10-30 per cent of the socio-economic gradient in mortality". The problem with the behavioural/cultural explanation is that it assumes individuals have the capacity to choose which behaviours they adopt. Blackburn (1999) highlights that this isn't always the case: "lower income families have poorer diets than high income families not because of poor knowledge about what constitutes a healthy diet or unhealthy attitudes, but because they cannot afford to buy the foods considered important for good health". The fourth, the materialist explanation, suggests that social class determines health because "there are hazards to which some people have no choice but to be exposed given the present distribution of income and opportunity" (Shaw et al 2000). 
Housing would be an example; Blackburn (1999) states that "low-income families are more likely to
Six Weeks To Strategic Excellence, Hodder and Stoughton. As mentioned earlier in this chapter, project definition should be decided by a group of key individuals, such as the board of directors, senior management, the project manager and work package managers. Therefore, the definition provided in this document should only be seen as a recommendation, and by no means should it be used as a basis for project planning decisions until it has been reviewed, challenged, modified if necessary, understood and accepted by the key individuals concerned. After thorough understanding of the proposal, I propose the following project aim. Aim: To enhance the quality of customer service and maintain a much closer working relationship with high street agents at a reduced cost, by restructuring CFS's operations department through creating 10 multi-purpose geographical teams and increasing individuals' flexibility to deal with all types of insurance policy except Life. Control: Suitable changes to comply with the plan can then be made by the project group. Making the right decisions requires monitoring and evaluation to measure, examine and appraise how the project progresses. Defined project objectives can be used as a measure of how well the project is progressing towards the set goals. This can be done by deciding how each objective should be evaluated. The table below shows the proposed ORANGE objectives and how they can be evaluated. Project Management, Fourth Edition, Cleland and Ireland. Stakeholders are those people who have an interest in a project. According to Obeng One way to manage stakeholders
\"Wolf-Man\" as Pankejeff was later referred to in Freud's case studies, was diagnosed by Freud as having an anxiety neurosis, an animal phobia in early childhood, compulsive feelings complete with obsessive rituals, attacks of furious rage and neurotic sexual conduct. Overall Freud described him as having \"slipped beyond neurosis into a tangle of crippling symptoms\" (Gay, 1989, p.285). To treat Wolf-Man's condition Freud employed psychoanalysis to interpret and treat his symptoms. Freud started Wolf-Man's psychoanalysis by taking his emotional history, from what he found Freud believed he could shed light on Wolf-Man's current neurotic state. Early on in Wolf-Man's life when he was only three his older sister had initiated him into sexual games and he went on from this to exhibit sexual behaviour towards his much loved nurse Nanya. Nanya subsequently warned him that if children did such things they would get a \"wound in that place\" (Freud, 1973, p.187). Having established that some people don't have penises, from watching his sister, and after Nanya's warning, he subsequently became obsessed with the thought of castration. He began to torture butterflies and to have fantasies about masturbation and beating, which Freud interpreted as a retreat to the earlier anal phase of sexual development. He also began to fantasise about his father beating him and provoked his father in order to make these fantasies become real. Freud interpreted this as showing that after his rejection by Nanya, Wolf-Man had chosen his father as his sexual object. By the age of four and a half Freud reports that Wolf-Man's anxiety neurosis was in place, at this age he would obsess about religious conundrums and compulsively carry out a number of rituals, he also struggled with his sensuality and had fits of rage. These childhood episodes also seemed to lead the way for Wolf-Man's obsessional and compulsive behaviours in later life (Gay, 1989). 
However, Freud believed that for Wolf-Man's neurosis to have become so severe, an even more traumatic and distressing event, which he could not presently remember, must have occurred in his childhood beyond the episodes with his sister and Nanya. After years of unproductive therapy, Freud came to believe that the key to Wolf-Man's condition lay in a dream he had had just before he was four. In this dream Wolf-Man was in bed, facing a window, when suddenly the window opened by itself, revealing six or seven white wolves sitting on the branches of a tree. The wolves looked rather like foxes and were perfectly still and silent, with large tails and alert ears. Wolf-Man reported that at this point he had felt great anxiety at the thought of being eaten by these wolves and, screaming, woke up in a state of anxiety (Freud, 1973). Freud, working along the lines that in dreams we often convert experiences and desires into their opposites, believed that the still and silent wolves meant that Wolf-Man had
On the other hand, inventories provide a level of product or service availability which, when located in the proximity of the customer, can meet a high customer service requirement. [5] Forecast demand can be classified as either dependent or independent. Dependent demand is represented by the vertical sequence characteristic of purchasing and manufacturing situations. The company manufactures plastic components that will be assembled to form finished goods in automobiles. In this dependent demand situation, plastic component requirements depend on the automotive assembly schedule. [4] Orders received from original equipment manufacturers (OEMs) are quite stable, so a simple moving averages technique can be used to predict future market demand. This technique is the simplest way of smoothing past data for forecasting. The most recent data is the most relevant in forecasting short-term demand because it reveals the latest trends better than data several years old. [6] Since demand from the first customer is stable, EOQ is suitable for raw material supply, as it minimizes the total cost of ordering and carrying inventory. Due to an increase in demand in the past few months a new shift has also been introduced, so producing regularly can fulfil this customer's demand without any need to stock. Orders from other customers vary and have unstable demand; therefore, to satisfy these customers, a make-to-stock policy is better. The company shouldn't wait for demand to emerge and then react to it. Instead, the company must anticipate and plan for future demand so that it can react immediately to customer orders as they occur. In other words, the company should adopt a strategy of "make to stock" rather than "make to order" and deploy inventories of finished goods. The moving average is a good forecasting technique, but its biggest disadvantage is that it gives equal weight to old and recent data. 
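As a sketch (the weekly demand figures below are invented, not taken from the case), a simple moving average of order k forecasts the next period as the unweighted mean of the last k observations, which is exactly why old and recent data carry equal weight:

```python
# Simple moving-average forecast: next period = mean of the last k observations.
# Every one of the k points gets the same weight, 1/k, old or new.
def moving_average_forecast(demand, k):
    if len(demand) < k:
        raise ValueError("need at least k observations")
    return sum(demand[-k:]) / k

weekly_demand = [3000, 3100, 2950, 3050, 3000, 3200]  # invented figures
forecast = moving_average_forecast(weekly_demand, k=3)  # (3050 + 3000 + 3200) / 3
```

A spike three weeks ago pulls the forecast exactly as hard as one last week, which is the equal-weighting drawback noted above.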
This problem is solved by the 'exponential smoothing' technique, which gives more weight to the most recent observations and so reflects the latest trends. By using this technique more accurate forecasts can be made. Therefore, it is recommended that this technique be used in future to satisfy the second customer. The time series
The APT retains the distinction between systematic and unsystematic risk; furthermore, it extends the CAPM framework to many risk factors, such as fluctuations in interest rates and sensitivity to fuel prices, and therefore allows a security market line to be derived in the absence of the unobservable market portfolio. The third drawback is that the CAPM is a forward-looking model but is estimated from historical data. For instance, a company's cost of capital changes with the daily price of its equity, along with the prices of its bonds, options and other financial instruments. Conversely, beta tends to be a "static" estimate based on past information. Because beta is an unreliable measure, many firms may decide how much their investments should earn based purely on experience or instinct. Companies that do use beta often have to adjust their estimates to reflect factors such as a company's size, financial condition and the specific type of investment involved. After many years of academic debate over the value of the CAPM, many studies have tested its empirical performance, and some of them provide strong evidence of its inability to explain (and therefore to predict) the behaviour of financial markets. Nevertheless, from my point of view, the CAPM is generally considered to be a good first step in understanding what type of risk requires a premium and hence what excess return we should expect from various assets. Besides, there is no perfect model in this world. Despite its weaknesses, the CAPM is still generally considered a useful tool in evaluating the profitability of a project.
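To make the "static estimate" point concrete, here is a minimal sketch of how beta is typically estimated from past returns and plugged into the CAPM relation E[Ri] = Rf + beta(E[Rm] - Rf); the return series, risk-free rate and expected market return are all invented for illustration, not data from any real asset:

```python
# Beta = Cov(asset, market) / Var(market), computed from historical returns.
def estimate_beta(asset_returns, market_returns):
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

# Invented monthly returns -- this beta says nothing about any real security.
asset = [0.04, -0.02, 0.05, 0.01]
market = [0.03, -0.01, 0.04, 0.02]
beta = estimate_beta(asset, market)

rf, expected_market = 0.02, 0.08  # assumed figures, not from the essay
expected_return = rf + beta * (expected_market - rf)
```

Note that beta is fixed the moment the historical window is chosen; unless the window is rolled forward, the estimate does not react to new prices, which is the staleness criticised above.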
Once the cream is warmed to convert the milk fat into the liquid phase, homogenization can be carried out in a three-piston homogenizer, which breaks down the fat globules under pressure and disperses them in the liquid milk phase. Water was first pumped through the pasteurisation system and the various temperatures recorded, as for the milk. The raw milk was then pumped through the system and separated by the connecting centrifuge. The cream was collected in a bucket. The skim milk continued through the pasteurizing system. 10 kg of cream was then mixed with 15 kg of skim milk and warmed to 50 °C. A sample was also taken for analysis, along with raw milk, raw skim milk and pasteurized skim milk samples. The freezing point depression was determined for raw milk and pasteurized skim milk. The flow time was also determined using a flow cup for the homogenized cream/skim milk mixture.
Results (see attached sheets):
- Thermal efficiency (TE) of the HTST process: sheet A
- Flow properties of liquids in the heat exchanger: sheet B
- Reynolds number and residence times for water and milk: sheet C
- Analysis in the control laboratory of fat, protein, lactose and alkaline phosphatase: sheet D, table 1
- Viscosity of cream homogenized at different pressures using the Brookfield viscometer: sheet D, table 2 and following graphs
- Freezing point depression of raw milk and pasteurized skim milk: sheet D, table 3
- Determination of flow time by a flow cup: a 3 mm orifice was used, so kinematic viscosity is calculated (see graph 2 above for the plot of apparent viscosity versus homogenisation pressure)
The thermal efficiency values for water and milk were calculated. Thermal efficiency is also known as the regeneration efficiency and measures how efficiently the machine regenerates energy. For water the TE value was 80.8%, much higher than the TE for milk, which was 65.3%. 
This shows that for water, 80.8% of the energy needed will be supplied by regeneration, and for milk 65.3%. For milk, the value ( The cream is then removed, so the total volume is reduced and there is increased heat transfer. For water the values for ( There is only a difference of 1.4 The volumetric flow rate is calculated to be larger for water, at 1.10 x 10 This is expected, as milk has components which restrict the volumetric flow rate compared with pure water. The Reynolds number for water is calculated as 16000 and for skim milk as 8147. The Reynolds number determines whether the flow is laminar or turbulent. Laminar flow is dominated by viscous forces and has a parabolic velocity profile. Turbulent flow is dominated by inertial forces and produces random eddies. For both liquids, the flow is said to be turbulent as
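The Reynolds-number check described above can be sketched as a short Python function. The fluid properties in the example are illustrative placeholders, not the measured values for this experiment's water or milk.

```python
# Sketch of the Reynolds-number classification used above; the fluid
# properties below are illustrative, not this experiment's measured values.

def reynolds(density, velocity, diameter, viscosity):
    """Reynolds number for pipe flow: Re = rho * v * D / mu."""
    return density * velocity * diameter / viscosity

def flow_regime(re):
    """Classify pipe flow with the conventional (approximate) thresholds."""
    if re < 2100:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "turbulent"

# Water-like fluid in a 20 mm pipe at 1 m/s: Re is about 20000,
# well above the ~4000 threshold, so the flow is turbulent.
print(flow_regime(reynolds(1000.0, 1.0, 0.02, 0.001)))  # turbulent
```

Both Reynolds numbers reported above (16000 for water, 8147 for skim milk) sit well above the ~4000 threshold, consistent with the conclusion that both flows are turbulent.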
Like sound, light waves have frequencies and wavelengths, corresponding to positions in the electromagnetic spectrum. Visible light has frequencies ranging from roughly four to seven hundred trillion hertz (i.e. 4-7 x 10^14 Hz). Different frequencies in this range are what the naked eye sees as different colours. The 'bluer' end of the spectrum has the highest frequencies whereas the 'redder' end of the spectrum has the lowest frequencies. This brings me back to the reasoning behind the terms redshift and blueshift. In a blueshift, the frequency of light is increased, and the light is therefore shifted towards the bluer end of the spectrum. The opposite happens in a redshift: the light's frequency decreases and it is shifted towards the red end of the spectrum. The way light behaves when travelling through a glass medium is now very important to Slipher's findings. As Isaac Newton discovered, light from the sun breaks down into its individual spectral colours when it passes through a glass prism. When using a telescope, the lens acts as the prism, and when individual stars or galaxies are viewed, as with the prism, the light's spectrum can be observed. Atoms emit or absorb light of certain definite frequencies in this spectrum. The frequencies, known as 'lines', of each and every atom form a characteristic pattern (as shown in Figure 2) that we can use to work out exactly which elements are present in a star's atmosphere. Now we know that the patterns of lines produced by the spectra of light from remote galaxies are the same as those produced by the elements on Earth. What Hubble observed, just like Slipher, was that the spectral lines which would have had a certain wavelength and frequency on Earth in fact had longer wavelengths, hence the shift towards the red end of the spectrum. Not only did Hubble observe this characteristic redshift from a single galaxy, but from the vast majority of galaxies he was measuring.
Only very rarely did he observe a blueshift from a galaxy.", "label": 1 }, { "main_document": "refused and McDonald's knew better than to pursue it. In March 1999 the Court of Appeal made further rulings against McDonald's in relation to heart disease and employment. SITUATION NOW: As a result of the court case, the Anti-McDonald's campaign mushroomed, the press coverage increased exponentially, this website was born and a 60-minute documentary was produced. The legal controversy continues. It aroused many anti-McDonald's activities all over the world. This not only cost McDonald's financial loss but reputation declined. McDonald's has to pay great deal of money to settle many lawsuits all over the world. For example, McDonald's has agreed to pay $8.5 million to settle a lawsuit over artery-clogging trans fats in its cooking oils in California. McDonald's also so a lot of charities to rebuild its reputation. For example, McDonald's said it will donate $7 million to the American Heart Association and spend another $1.5 million to inform the public of its trans fat plans.", "label": 0 }, { "main_document": "Sue 48 years old is single and lives alone in a bed-sit. First diagnosed with schizophrenia when 23, she was admitted to hospital numerous times but has been maintained on antipsychotic medication in recent years. Schizophrenia is a splitting of the normal links in the mind between perception, thinking, mood, behaviour and contact with reality. Antipsychotic medication is used to control the positive symptoms of schizophrenia; the psychotic behaviour defined as thought delusions and hallucinations generally in the form of voices but not always. This medication can induce negative symptoms, consequently known as secondary negative symptoms. Sue has negative symptoms of schizophrenia, these can be summarised as: Symptoms such as these often persist long after the positive symptoms have ceased, and lead to social withdrawal and isolation (Creek, 2002). 
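The shift Hubble measured can be quantified. The sketch below uses the standard definition of redshift, z = (λ_obs − λ_rest)/λ_rest, with invented wavelengths for illustration; the essay itself quotes no numbers.

```python
# Standard redshift definition; the wavelengths below are invented
# purely for illustration.

def redshift(lambda_observed, lambda_rest):
    """z = (lambda_obs - lambda_rest) / lambda_rest.
    Positive z: wavelength stretched (redshift, source receding).
    Negative z: wavelength compressed (blueshift, source approaching)."""
    return (lambda_observed - lambda_rest) / lambda_rest

# Hypothetical example: a spectral line with rest wavelength 500.0 nm
# observed at 505.0 nm has been shifted toward the red end.
z = redshift(505.0, 500.0)
print(z > 0)  # True -> redshift, galaxy receding
```

A line observed at a shorter wavelength than its rest value would give a negative z, i.e. the rare blueshift mentioned above.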
Currently Sue attends a day centre 3 days a week, where she participates in crochet and bingo and has lunch; on the other 2 days she has lunch at the MIND club. She often sits in the library to read newspapers or wanders the streets. Socially isolated, Sue has had no contact with her family since she was 30; she has few friends and recently split up with her boyfriend Terry (who also attends the day centre). In Sue's view her medication makes her tired and fat, and often she does not have the motivation to cook meals or change her clothes for bed; this shows the impact her illness has had on her functioning and self-esteem. Sue's main occupational needs are to maintain self-care routines and productive activity; this will lead to Sue feeling better about life in general. Long term aim: to independently attend a voluntary work placement 2 days a week and interact with co-workers. Short term goals: to ensure ADL and IADL activities are maintained every day with the use of prepared task checklists, within one week; to participate in a befriending scheme one afternoon a week, within two weeks; to independently take books out of the library, read them and discuss them at the day centre, within three weeks; to independently attend interviews for voluntary part-time work, within five weeks. Each goal can be achieved as follows: activities will be analysed and broken down into smaller steps, working with Sue to aid her in structuring tasks that need to be completed on a day-to-day or weekly basis. For example: preparing meals - a list of what to cook for each day of the week can be prepared in advance to reduce the amount of planning that needs to take place each day; cleaning the kitchen - complex tasks can be broken down into smaller steps, such as wash the dishes, wipe the surfaces, clean the cooker top. By crossing off these smaller steps it should give a sense of achievement and aid the structuring of long tasks that seem unreachable at first.
A meeting will be arranged so that Sue and the volunteer can meet on neutral ground. At the first meeting Sue will be accompanied by the OT. When she feels at ease on her own she will
The lack of consensus regarding the function of EH II corridor houses (Dickinson, 1994: 59; Kalogerakou, 1999: 91-92) may have actually inflated their relevance to the appearance of the early state. The massive size and dense habitation of EH Thebes, the speculation that it stood at the top of a regional settlement hierarchy (Konsola, 1981: 160-161, 167, 179-181) and the continuity of occupation down to LHIIIA-B (Aravantinos, 1995: 615) make it tempting to argue for an EH Theban state. However, given the collapse of the House of Tiles and other EH elaborate structures it would be problematic to assume a linear evolutionary path (Whitley, 2004: 193, 195) claiming that continuity affected the appearance of the first state. In 2000, a small EHIII jewellery hoard with trinkets of gold, silver, carnelian and crystal was found at Kolonna in Aegina. The ornamentation of one of the beads shows an etched technique known in Mesopotamia. This unique necklace has been considered as evidence for the accumulation of wealth within settlements and far-reaching trade links for prestige items desired by local elites (Reinholdt, 2003: 260-261; Reinholdt, 2004: 1114-1115, 1118). However, can the presence of these EH social inequalities be interpreted accurately when it is not possible to pinpoint to the exact nature of contemporary societies (Chapman, 2005: 92)? Recently, it has been suggested that Kolonna in Aegina represents a MH state because of monumental fortifications, a very rich Shaft Grave, centralised storage and specialised pottery production (Niemeier, 1995: 80; Rutter, 2001: 154). It would appear that the EHIII-MHIII fortifications at Kolonna with their intricate gate layouts exemplify a unique style different from the later Cyclopean technique (Field,", "label": 0 }, { "main_document": "is \"men who fight, men who lead troops or guerrilla movements, men who negotiate peace, men who wear blue helmets, and men who head UN agencies\". 
Therefore, women have been left out of history because we have often conceived of history in male-centred terms. Reinforcing societal stereotypes, the media often puts women's suffering at the forefront of its coverage of conflicts, which becomes a self-fulfilling prophecy. Lori Handrahan, "Conflict, Gender, Ethnicity and Post-Conflict Reconstruction", In all, there are many traditional theories that pose methodological barriers to recognising women's full agency in ethnic conflicts - silencing them as a result. As a consequence, women are more likely to suffer as victims of cultural norms, since they have to collectively "shoulder a burden of being (inevitable) victims of the violence the system has created". Vidyamali Samarasinghe, "Soldiers, Housewives and Peace makers: Ethnic Conflict and Gender in Sri Lanka", Moving away from essentialist conceptions of women's agency in conflicts, it is crucial to begin by acknowledging the widespread victimhood of women in ethnic conflicts. Following on, we can then transcend this stereotype and explore the opportunities and possibilities present in conflicts for women to take on new and unconventional roles. This is epitomised in Sri Lanka, where Tamil women fleeing conflict realised that the "spatial exigencies of camp life produced an erosion of the caste hierarchies" In Bosnia-Herzegovina, Medica Zenica became a centre for traumatised displaced women; it was run by Bosnian women physicians who provided therapy to more than 2,000 women. It is important to trace such patterns of empowerment in order to recognise the positive changes to women's roles and lives wrought in conflict. Meintjes, "War and Post War Shifts in Gender Relations", p. 68. Nancy Farwell, "War Rape New conceptualizations and Responses", "Meintjes, "War and Post War Shifts in Gender Relations", p. 68. Darini Rajasingham-Senanayake, "Ambivalent Empowerment: The Tragedy of Women in Conflict", p. 107.
Undeniably, facts and figures from countless reports have illustrated the extensive suffering of women during ethnic conflicts. The nature of victimization and the identities of the perpetrators and victims are complex and diverse; the most documented form is sexual violence and abuse. Caroline O.N. Moser and Fiona C. Clark, "Introduction", in Caroline O.N. Moser and Fiona C. Clark, eds., Gender, Armed Conflict and Political Violence. London: Zed Books Ltd, 2001, p. 8. According to a European Union investigation, approximately 20,000 girls and women suffered rape in 1992 in Bosnia alone, while the United Nations Commission of Experts on Yugoslavia affirmed that the "vast majority of the victims are Bosnian Muslims and the great majority of the alleged perpetrators are Bosnian Serbs". Elisabeth Jean Wood, "Sexual Violence during War: Explaining Variation", New York University: Santa Fe Institute, April 2004 (Article presented at Conflict and Violence Conference at Yale University, April 30-May 1, 2004), p. 5. Meredeth Turshen, "The Political Economy of Rape", in Caroline O.N. Moser and Fiona C. Clark, eds., Victims, Perpetrators or Actors? Gender, Armed Conflict and Political Violence (London, Zed Books Ltd, 2001), p. 62. Lisa Sharlach, "Gender and
can be used to freeze any type of food product, but moisture from the food is transferred to the air and builds up as ice on the refrigeration coils, from which it must periodically be removed by defrosting; this can be uneconomical and time-consuming in large-scale industries. Liquid nitrogen is expensive and mostly reserved for high-value products and for products that can be frozen in less than 1 minute. As the size of foods increases, conduction within the food becomes the limiting resistance.
It stated two definitions of happiness and used five models to explore the argument that "a happy worker is a productive worker". When happiness refers to job satisfaction, it can be concluded that a happy worker can be a productive worker, provided that he is capable of accomplishing the task in the given context. These conditions are associated with, for example, a well-established reward system, a self-perceived interesting job and a relatively stable economic environment. When happiness is defined as dispositional well-being, it is suggested that people with positive traits have a tendency to perform better than pessimists, owing to their operational and interpersonal strengths. A number of researchers do not consider any positive relationship to exist between happiness and productivity, and some even hold that there is no relationship at all. However, this does not mean that maintaining a high level of worker satisfaction is not desirable. Nor does it justify the claim that profitability cannot be reached without happy workers (Silvestro, 2002). Employee satisfaction is vital for an organisation's success in the long run, since it matters to company reputation, long-run strategic planning, corporate social responsibility and so on. Since the effect of disposition on productivity is only moderately supported, it is worthwhile to develop research in the psychological area of personal attributes. Whether disposition is a personal trait, an attitude, a cognition, or just a measure of the state of mental health is uncertain (Ledford, 1999). More investigation needs to be done in this area of psychological management. However, that is beyond the discussion of this paper.
Two classic models of corporate governance are the shareholder model and the stakeholder model. The former is linked to external market control exercised by shareholders, and the latter refers to internal control exercised by various stakeholders such as banks, employees and public institutions. Due to historical reasons, there are large differences between international corporate governance systems (Witt, 2004). However, much attention has surrounded the issues of whether national corporate governance systems will converge and, if so, in which direction. Without neglecting those matters, this essay focuses on the factors which might be considered to undermine nationally distinctive systems of corporate governance, with particular reference to Germany. German corporate governance is a typical example of the stakeholder-value orientation, so it is worth analysing the dynamic process of change in Germany and identifying its driving forces. The essay consists of four sections. Section one provides general arguments about the convergence of corporate governance and its possible reasons. Based on the German case, the second section identifies five factors driving those changes, in terms of the challenges of the existing German corporate governance situation, pressure from the global capital market and the rise of government regulation. Section three takes up the issues of ownership and control of management caused by the changing system of corporate governance in Germany. The final section offers a summary and further implications. Many authors (e.g. O'Sullivan, 2003; Witt, 2004; Jeffers, 2005; Ponssard et al., 2005) have suggested that the convergence of national corporate governance, driven by economic globalisation, does occur and is likely to continue. What is in disagreement is the direction of this convergence. On one side, a number of scholars (e.g.
Berger and Dore, 1996; Streeck and Hopner, 2003) claimed that international diversity will be replaced by a unified, Anglo-American model of corporate governance. Corporate governance reform around the world, Jackson and Moerke (2005) suggest, is influenced by the principles of the US model, such as shareholder rights, transparency and external independent directors. On the other side, it is argued that a new hybrid model with the right mix of market rule, corporate regulation and stakeholder power will emerge. It seems reasonable that the dominant governance model will feature strong minority shareholders and multi-dimensional corporate objectives (Ponssard et al., 2005). Meanwhile, Ponssard et al. (2005) also pointed out that the dominant model would present variations because of path dependency in nationally distinctive contexts. Towards which model national corporate governance may converge is a global question, and the factors that contribute to this emergence are significant. Three main factors, indicated by Ponssard et al. (2005) and Jeffers (2005), should be taken into consideration: the internationalisation of financial markets, the globalisation of corporate strategies and the rise of international regulation. The first factor has been cited widely, since "the deregulation of financial investment is a reality in
As an example, in developed countries an export subsidy is given in order to protect domestic producers in the international market from competition from poorer countries, where factors of production such as labour or land are much cheaper and, as a consequence, the final price of the good is lower. Other important purposes include supporting national income, stabilising prices and equalising the balance of payments (the record of a country's transactions with the rest of the world). In order to understand how a subsidy works, we have to analyse its effects on the RD and RS curves. Before this, it is important to say that subsidies, like tariffs, make a difference between This means that we have to be careful to define terms of trade (which measure the ratio according to which countries exchange goods). We have to consider that terms of trade correspond to external prices, so we will analyse how a subsidy affects RS and RD as a function of external prices. Suppose that country The effects of this subsidy on the internal price of As a consequence the country's producers will increase their production of good This means, as shown on the graph, that the RS curve will move from RS1 to RS2, increasing the world's relative supply of good This will shift the equilibrium from 1 to 2, changing the prices: the relative price of Krugman, P.R. and Obstfeld, M., ed. International economics. The effect of paying an export subsidy is that country This means that domestic prices are forced upwards; output will then increase, domestic demand will decrease and the volume of exports will expand. Therefore domestic consumers suffer higher prices and taxpayers have to bear the subsidy cost, while foreign producers face additional subsidised competition in international markets.
If the government of the USA, as a hypothetical example, offers an export subsidy on goods that are also exported by India, India's terms of trade will be worsened. If the USA offers a subsidy on a good of which India is an importer, India's terms of trade will be improved. Grant, R.M. and Shaw, E.K., ed. This is what happens in the international system. Now we can more easily understand the debate about agricultural subsidies that has arisen between developed and developing countries. To do this I will give some examples. The current level of Canadian dairy exports is $415 million, but it is expected to fall following a decision of the WTO (World Trade Organization) Appellate Body, which found that Canada's government has been subsidising milk exports in a strongly trade-distorting way. This verdict goes against an
Metaphorically one may see him as not only shrinking physically but also diminishing as a character. He appears less and less in the later stages of the novel, despite gaining the title of senator and the growth of his businesses. This is probably due to the development of politics; ultimately his wealth puts him in conflict with the rise of Marxism in his country. Towards the end of the novel we hear little of his large estate in the country. However, one may still claim that even towards the end he is effective as a character, as he provides a stark contrast between the new politics and the old conservative way of life. Esteban's temper makes him a prominent character, as he is forever banging his cane on the furniture, lashing out at people, and having fits of rage where he foams at the mouth. He smashed his wife's teeth in and chopped off Pedro Garcia's fingers. These acts of violence bring a very austere quality to the novel, which would otherwise be lacking. He is the main male chauvinist in the novel, and it would probably be hard to write a novel quite like the one Allende chose to write without a male character of his stature. Also, as Alba suggests, Esteban began the chain of events that resulted in the culmination of the novel as a whole. He may not contribute to the narration in large quantities, but what he does contribute is important to the progression of the plot. In addition, he actually created the world in which events take place. Yet his place in this world is reduced by Clara, Blanca and events outside his control, like the rise of Marxism or the military coup that brings the 'modest general' into power. However,
Although the subject is omitted, the intended meaning is often clear, with utterances such as: , being understood due to the context in which the utterance was said and the child's use of directed speech towards her conversational partner. Templin (1957) devised the type-token ratio (TTR) to look at the lexical diversity of a subject's vocabulary by assessing 250-word samples of speech. The total number of words within the sample (tokens) and the number of different words within the sample (types) are counted, and the types are then divided by the tokens. L's mean TTR was 0.31, slightly lower than Sophie's at 2;4 and 3;0, where she achieved 0.36. Both L's and Sophie's TTRs would be said to be low, as Templin suggested that a TTR lower than 0.45 was cause for concern. However, the research on which TTR is based was conducted on children aged 3;0-8;0 (Fletcher, 1985: 47), and therefore the same expectations may not apply. L uses a lot of repetition throughout the transcripts, sometimes repeating the same utterance four or five times, as demonstrated at the end of transcript 1. This repetition will be a major contributor to L's low TTR score; however, because the investigation is a case study and the data are the product of a limited time span, the score may not be a true representation of L's language, as given contexts and tasks often dictate the vocabulary used. Again this can be demonstrated through the data collected, with transcript 2 seeming to be the clearest example of a topic dictating language choice. The transcript shows L repeating the words L has little option but to reply either Analysis of bound morphemes uses 250-word samples. The samples were analysed and the results of the analysis set (C&I utterances) will be used. L was found to be using a mean of four bound morphemes per 250-word sample, which is slightly fewer than Sophie at 2;4, where she was found to be using five bound morphemes.
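The TTR calculation described above is simple enough to sketch in code. This is an illustrative implementation of Templin's measure as described in the text (types divided by tokens over a 250-word sample); the word lists are invented.

```python
# Illustrative type-token ratio (TTR) as described in the text:
# distinct word forms (types) divided by total words (tokens).

def type_token_ratio(words, sample_size=250):
    """TTR over the first sample_size words of a transcript."""
    sample = [w.lower() for w in words[:sample_size]]
    return len(set(sample)) / len(sample)

# Repetition drives the ratio down, as with L's repeated utterances:
varied = ["the", "cat", "sat", "on", "a", "mat"]
repetitive = ["no", "no", "no", "more", "no", "no"]
print(type_token_ratio(varied))      # 1.0
print(type_token_ratio(repetitive))  # 2 types / 6 tokens, about 0.33
```

A transcript full of repeated utterances therefore yields a low TTR regardless of the child's underlying vocabulary, which is exactly the caveat raised above.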
However, the types of bound morphemes used are more significant than how many are used. Table 3 shows the bound morphemes that Sophie was using at 2;4 and 3;0, but because L's bound morphemes were calculated as a mean, examples of her bound morphemes were not previously given. Table 4 shows bound morpheme types and their distribution across L's transcripts: L demonstrates a range of bound morphemes across her transcripts, with greater bound morpheme diversity than Sophie at 2;4 and 3;0. L's use of different types of bound morphemes is relatively sophisticated for her age, as by 2;6 the expectation is use of -ing and plural -s only. Therefore, despite not being able to compare L's bound morphemes to the data on Sophie, it appears that L is acquiring bound morphemes successfully. VTTR looks at the lexical diversity of verbs within a
There are several possible therapies for Parkinson's disease. These approaches include L-Dopa, antioxidants, drugs which stimulate dopamine receptors, drugs which block glutamate, drugs which decrease apoptosis, neurotrophins, high-frequency electrical stimulation of the globus pallidus and transplants of neurons from a fetus, but such treatments have some limitations [3]. Patients usually have internal globus pallidus Deep Brain Stimulation (DBS) for surgical treatment [2]. Some patients are implanted a deep brain electrode into thalamus or sub-thalamus. Such treatments are less dangerous than lesioning, particularly subthalamic nucleus stimulation [4]. The cost of Deep Brain Stimulation (DBS), however, limits the number of patients using this to stimulate deep brain structure because of the frequent replacement of batteries [4]. Demand-driven Deep Brain Stimulation may be a solution to this problem. To achieve this, the occurrence of tremor must be predicted. Artificial neural networks (ANNs), the field of artificial intelligence, emerge from the development of the first neural model by McCulloch and Pitts in the 1940s. Since then many researchers have become increasingly interested in artificial neural networks, commonly referred to as \"neural networks\" in [5]. There are many advantages of neural networks such as machine learning, generalization, adaptation and fault tolerance. Artificial neural network can be applied to medicine or biomedical engineering domain. The medical applications fall into four basic fields: modeling, bioelectric signal processing, classification for diagnosis and prediction for prognostics [6]. A lot of research shows that neural networks have been used successfully in many areas in prediction such as the prediction of ovarian cancer and also several types of cancer [7]. It is likely that neural networks might be utilised in the prediction of the occurrences of tremor in Parkinson's disease from sub-thalamic nucleus (STN) LFPs signal. 
Figure 1 shows the local field potentials recorded from sub-thalamus with a sampling rate of 800 Hz. This data will be used for an input vector. However, a large number of the input of a neural network may increase the size of a neural network. The reduction in input dimensionality is achieved by pre-processing. To do this, the data should be analysed. Figure 2 presents the envelop signal of", "label": 0 }, { "main_document": "work in the characters' lives. The main protagonists, Stephen and Louisa are both defined by the factory: they are often connected with images of smoke or fire and they are metonymically presented as the fuel and eventually waste of the factory. Stephen is presented as fuel for the system of production while Louisa is fuel for marriage and reproduction. Explores the characteristics of Dickens' style and stresses his linguistic creativity. Focuses on the writer's grammar: underlines his tendency to reclassify countable nouns into uncountable or vice versa, his alternations between indirect speech and free indirect speech and his use of collective nouns or \"string-compound\". Presents Dickens as a lexical innovator: many words which have now entered English language were first coined in his novels. Stresses the writer's style to turn nouns into verbs or vice versa and reviews numerous terms created by the novelist. States that Dickens' contribution to language history should be better recognized. Underlines the unique scope of Dickens' repertoire. His characters' linguistic behaviours often deviate from the norm which allowed the writer to present linguistic oddities. Points out idiosyncratic pronunciations and the writer's skill to play with the sound of words. Stresses Dickens' treatment of strings of words. Focuses on the putting together of prestige terms with colloquial or slangy words. States that the novelist offered his readers extensive but realistic and convincing specimens of substandard English. 
Finally asserts that Dickens -commented his own style in his work through some remarks of his characters. Stresses the unprecedented style of Dickens' prose. Argues that the writer transformed and reshaped language. Presents examples of \"grammatical mimesis\" - passages where external descriptions are filtered through the consciousness of a character. Shows how the writer managed to blur the boundaries between people and things. He resorted to metaphors to bestow animation on objects. Points out the Dickensian approach to double meanings, to play with the literal and the metaphorical senses of a same word. Highlights his skill to create \"syllepsis\"-to join together unlike things- and his fondness for adjectival triads, the use of three successive adjectives. d", "label": 0 }, { "main_document": "This report provides an analysis of the broader business environment of the UK and France from a human resource perspective. The findings are taken as a basis to development an appropriate approach to people management of a British hospitality organisation entering the French market. Research has shown that similarities prevail, however, that there are also dissimilarities which should not be underestimated. In order to adjust best and perform well a polycentric orientation is chosen. Furthermore, resource, reward, training and development schemes are included suggesting a possible design for a luxury hotel. Another point discussed refers to equality approaches and their advantages and disadvantages for both employer and employee. It is concluded that similarities between the UK and France dominate but it should be beard in mind that there are also dissimilarities to which should be paid major attention in order to select the best human resource approach for the French market. With three hotels Each unit has its own style and character. Prime locations are in the countryside. 
These five star hotels house between 25 and 40 guest rooms, four to eight suites, food and beverage facilities and provide various leisure amenities and cultural programmes. Due to France's location in Europe and its constantly high number of tourist arrivals (WTO 2004) The country is situated in Western Europe and borders on Belgium, Luxembourg, Germany, Switzerland, Italy, Monaco, Andorra and Spain. France offers a great diversity of landscapes including rivers, lakes, coastlines and sea sides, mountain ranges, rural areas and cities. France is a democratic country and has got an economy which is shaped by private enterprises and substantial intervention by the government in key sectors such as transportation and communication that, however, is declining (Whyte 2004). The French hotel industry comprises nearly 30,000 establishments and can list 1.5 billion overnight stays in 2003. It is the most visited country (Minist Wine tourism is just one area of tourism opportunities in France. The economic potential of this market sector has not been realised for a long time. It has changed, however, since the last years. Wine-related attractions such as wine museums, wine routes or wine tasting have increased. It can be expected that this market sector is growing in future times (Frochot 2000). The literature provides different approaches that help to identify the macro environment of an organisation. One of them is the PEST-framework which focuses on political, economic, social-cultural and technological influences. Political factors that may affect managing people include all actions a state is involved like legislation, regulation or transfer of power. Whereas economy comprises all fields that concern business including labour market socio-cultural influences look at values, attitudes, demography or education. 
The technological environment refers to changes and developments in this field that may have impacts on the industry and an organisation and its people management in particular (Campbell 2002). Those external factors are crucial for a company entering a new market. However, they become even more important when entering a market abroad as the type of international strategy is affected by them including the human resource strategy (Boella and", "label": 0 }, { "main_document": "moral problems that affect more than just one or another nation-state. It is precisely because the nature of the nation-state is changing so the communitarian approach to states morally prior to the individual should be subject to re-examination. Moreover, there is no need to be a globalist to admit that terrorism (in its new transnational form) may affect you irrespective of your home country, religion, and culture. State closure and exclusion through drawing boundaries is no longer achievable as before. The fate of peoples can no longer be simply confined and understood in national terms. Globalisation does not mean the end of territory, state or the 'exclusion' associated with them but the nature and content of these elements is changing. The proper home of politics and morality becomes a puzzling matter that should be discussed from a wider perspective combining the virtues of both communitarianism and cosmopolitanism: \"Taken together, they constitute the best starting point for developing a concrete universalism capable of meeting the challenge of post-modern scepticism\". Consequently, moral \"spaces need to be continually negotiated rather than physically or symbolically secured.\" Ferrara, \"Universalism: Procedural, Contextualist and Prudential\", p. 14. 
Campbell and Shapiro,", "label": 0 }, { "main_document": "Despite many assumptions that Locke and Hobbes share similar if not identical opinions on the state of nature views what a closer examination unearths are differences that, ultimately, shape both thinkers later views and do much to explain their diverging ideas. Even in their very conception of what reality the state of nature is referring to the thinkers seem to differ. However it is perhaps telling as well as intriguing that both Hobbes and Locke engaged in this thought experiment to imagine how society would exist hypothetically without governance and that both saw such theorising as necessary foundations upon which to build later thought. In Leviathan Hobbes meticulously forms and builds logical arguments progressing, step by step, towards an ultimately logical and therefore for Hobbes immutable conclusion. For Hobbes human nature is the result of varying urges or appetites which ultimately dictate behaviour and, when there exists a scarcity of resources, leads to a man who \"naturally endeavours, as far as he dares (which amongst them that have no common power to keep them in quiet, is far enough to make them destroy each other,) to extort a greater value from his contemners,\" All are equally vulnerable because, in a David and Goliath mentality, Hobbes depicts all as equally susceptible to being killed by another regardless of strength. Hobbes' picture is bleak; continual war \"every man against every man\" and constant \"fearre\" make life for man in the state of nature \"solitary poore nasty brutish and short\". In this state of nature the conflict between men and the acrimony that exists makes society and its products impossible. That is to say there \"is no place for industry\" , \"no arts;...no society;...\" In order to escape this state of nature which is ultimately dangerous and futile individuals transfer their rights, mutually, to the Leviathan. 
This submitting or transferring of rights is, for Hobbes, the lesser of two evils where the state of nature is a negative reality from which one seeks, naturally, to escape as soon as possible. This process is arrived at using the faculty of reason which seeks peace fearing the consequence of war for their self preservation which is of ultimate importance. Hobbes, T. Leviathan. Tuck, R. (ed) Cambridge Texts in the History of Political Thought (1996) Cambridge: Cambridge University Press p.88 Hobbes, T. Leviathan. Tuck, R. (ed) (1996) Cambridge: Cambridge University Press p.89 Equally Lockes account of the state of nature depends on a reasoned deduction. His starting premise is to dispute Filmer by proposing that the world belongs to all individuals collectively. He does not account for a scarcity of resources of any sort within the state of nature or otherwise which is in sharp comparison to Hobbes who assumes individuals will have to fight each other for goods and, indeed, to survive. Locke finds all humans equal in the state of nature but it takes a different meaning, man is equal so has no power over another resulting in all men being in a \" This liberty of man is conducive to a healthy society as well", "label": 1 }, { "main_document": "Georg Simmel separates social form from social content in order to provide an explicit distinction that allows for the constructive study of sociology. The analysis of social form is built on Simmel's premise that 'a collection of human beings does not become a society because each of them has an objectively determined or subjectively impelling life content' (Levine 1971 24). The importance of the social form aims to provide a firm basis to the study of society and the discipline of sociology thus allowing objective and concrete analysis to be made. 
This separation attempts to allow one to formulise and structure social theory and society respectively and the complexities of social form in themselves are arguably to such an extent that this explicit separation is warranted. The ideal of sociation therefore is built on the basis of social form and how individual's come together to form a society. However, problems in this division can be argued for. Social form and social content can be seen as inseparable and the standing and usefulness of social form can be jeopardised through manipulation. They can additionally be seen as too constraining on the individual and the discipline of sociology. The separation is useful as arguably it upholds the basis of sociology and allows for social science disciplines to argue their position as not purely subjective. Social forms thus instigate 'sociology to be a science that describes social phenomena and classifies them into homogeneous categories' (Moscovici 1993 237). The externality and empirical character of social forms, allows the discipline to classify and formulise forms and establish social theory. Social forms can be seen as stable and concrete and allow for the sociologist's deeper understanding of society because they can be assessed objectively. Moscovici for instance formulise social forms into anonymous This formulisation provides a firm basis for the discipline and is noted by Nisbet, 'to move from Simmel's pages to current work in the study of small groups, motivation, roles, status and human interaction is not as difficult' (Nisbet 1959 480). The use of social forms allows sociology to remain a discipline in its own right, the explicit division distinguishes it particularly with psychology, which Simmel argues focuses purely on social content. Simmel remarks upon this distinction as focussing the discipline on, society, and that 'there is a science, that is peculiar to society, because certain specific forms [...] 
derive directly from the reciprocal actions of the individuals and groups and from social content' (Simmel cited in Moscovici 1993 237) The ability to do this sets sociology apart and instigates its presence within the field of the sciences, Simmel remarks on its ability as ' a science that describes the facts that are produced Forms thus justify sociology as having a clear and embedded purpose, and the discipline sharing features that attribute it to the field of science. Moscovici in ' The Invention of Society': Definition of Anonymous Action: little or insignificant acion, 'we collect them by a scrupulous obsination of physical and spiritual states' to 'become an object of our attention or reflection, it must", "label": 1 }, { "main_document": "of operation for So, a sequential solution \"will stay a long time in a loop operation\" and so this would take a lot of computational resources. That's why a parallel solution will be done. However, more details will be given in another part of this document. In this part, we will explain what a system of linear equations is mathematically; afterwards we will present the Jacobi method which is used to solve such systems. A system of linear equations is a set of linear equation containing several unknown variables. In the general case, such a system can be mathematically expressed as follows, As it is easy to think, we can write such a system using matrices as below, Where A is the matrix containing all the coefficients, B is a column matrix containing the results of each equations and eventually X is also a column matrix which contains all the unknown variables. Hence, using the matrices notation, the problem of system of linear equation is to solve the following equation and then to find the unknown variable (vector X), When Furthermore, such resolution without formal method is very complex and not efficient to compute. 
However some more efficient methods have been found such as Gauss-Jordan method, the Gaussian elimination method In the next part is explained one of them which is called the Jacobi method The Gaussian method is explained in Bib[D] The Jacobi method [ This method is iterative, that means the solution would be approached to a final solution after some iterations. The system will converge forward a final solution but will never reach it (actually, a determined tolerance value has to be used as a threshold to stop the iteration). Hence a solution would be found after each step ( Before explaining mathematically how this method works, some little definitions has to be defined, So, this kind of matrices can be defined as follows, So, this kind of matrices can be defined as follows, So, this kind of matrices can be defined as follows, As it is natural to think, we can write M = L + D + U where M is a matrix, D its diagonal, U its upper matrix and L its lower matrix. Knowing these definitions, we can define the Jacobi method. We want to solve AX=B with X the vector of unknown variables, Hence, we have the definition of the Jacobi method which is, And so, for one row (of the result matrix), As we can see, this method is a good one to be computed, because it is iterative. When the system converges, we will determine if the convergence is sufficient, that means, whether we are enough closed from the solution or not. The initials values of In others words, The algorithm will be stopped when the last solution will be enough closed from the previous solution. That would mean we are much closed from the \"true\" solution because it is a convergent iteration. So, we can define When The Jacobi method uses a convergence state to find the solution. Thus, the system", "label": 0 }, { "main_document": "It is difficult not to assimilate this purported experience of self back to the deeply felt unity of all things which Simmel charges is Schopenhauer's motivation. 
Schopenhauer is himself ambiguous - at least - about the knowledge we have of the will. If it is mediated by the world as representation then it has to be taken as inferential knowledge. Obviously, this knowledge would then be achieved according to the fourfold principle mentioned above and thus would not represent a different method of knowing. Let's examine the argument on this account. If we take Schopenhauer's move inferentially, he believes that we can move from the premise that individuation occurs in the world as representation to the conclusion that outside the world of individuation there is no individuation. This is clearly warranted only on the assumption that conditions of individuation within the world as representation are the only possible conditions of individuation, which is not logically required. We may not be able to make sense of such a type of individuation, but beyond the conditions of the possibility of our experience there may be such an individuation. Logically then, Schopenhauer's inference is unsound. Schopenhauer also talks of our knowledge of the will as unmediated - direct. We can discriminate between two questions: what is the ground of any individual act of willing - why am I willing x at t? - and what is the ground of all my willing - why do I will at all? The former can admit of an answer on the level of representation, the latter requires a different ground. The situation is the same regarding gravity: any falling object can be referred to gravity as its ground, but gravity itself cannot be so grounded. Here is the key to the intuition at the heart of the concept of the will: it is movement. What is left when one has abstracted ends from willing? Movement. What is to be explained when gravity is referred to its ground? Movement. 
The intuition Schopenhauer believes that we can grasp from our experience of ourselves as embodied subjects is an intuition of movement within a multiplicituos unity, a differentiating unity; there is a process of individuation immanent to the will. However, we must be careful; the will does not cause its objectification into individuals, but the objectification comes from its own nature. Mankind's understanding is objectified will. The subject/object split is objectified will. That is to say that there is nothing in the world which is not will, including the mechanism by which one comes to take the world as representation. This is a necessary consequence of Schopenhauer's adaptation of Kant: the thing-in-itself grounds the subject in Kant. The subject finds its own spontaneity springing from the thing-in-itself. The thing-in-itself is will for Schopenhauer, thus it has to ground the subject. On the account that I have given, the problem then arises: how can one renounce will? If the subject is to be taken as immanent to the will, as an objectification of it, any act of renunciation would The alternative would", "label": 1 }, { "main_document": "of \"100\" in January 1921 rose to \"1298.2\" in December 1922, While land reform was the way out for a country too poor to introduce any significantly contributing industrialisation, the National Democrats, with many landowning members, were reluctant to land redistribution of any real importance, This, in all, caused land reforms to be rather ineffective and certainly insufficient. Ibid, p. 99. Stachura, 'The Battle of Warsaw', p. 52. Stachura, 'Historiographical Outline', p. 5. Polonsky, Roos, Crampton, Roos, Additional to internal problems, came the issue of Poland's national security. Weimar Germany continued a revisionist policy of regaining its former Prussian areas and southeastern Silesia from Poland. Rothschild, Crampton, Stachura, 'National Identity and the Ethnic Minorities', p. 71. 
Another form of support to the new independent state which was pressingly lacking was financial aid. Having half of its roads, railways and public buildings destroyed during the Great War, The absence of foreign loans and apparent disinterest in creating political stability from the western powers might have done much to help Poland establish a parliamentary democratic routine. Roos, Ibid, p. 105. Polonsky, The exigent situation that Poland was in did not go unnoticed. Pi Much of the PPS-cadre, frustrated with years of ineffective government and Roos, Ibid, p. 111. Polonsky, Not just the opposition of the left had their second thoughts about the functioning of the state. In December 1925, Dmowski wrote 'if we could create even half the organization like the Fascists... I would willingly agree to a dictatorship in Poland'. University students radicalised under the prospect of unemployment and resented the apparent greater success of their Jewish peers; they formed the leadership of fascist movements that were to increase their power-base throughout the 1930s. Antony Polonsky, Seton-Watson, Overwhelmed by problems of such dimensions, historians have argued, 'some action [...] seems in retrospect to have been almost inevitable' within the Polish political landscape. An internal conflict in Pi He claimed he 'would never be a cause of trouble and dissention in the State'. Already earlier in the twenties had Pi By declining a bid for the presidency, he proudly refused 'to be enclosed in a 'gilded cage'', Rather than acknowledging his objections to National Democratic domination, he slid further back into the trappings of \"moral dictatorship\", hereby only united the body politic in their opposition against his rule. 
Thus, the peasant parties united in the Sejm to resist political persecution, While open confrontation with the initially supportive socialists became ever more discernible, Pi When, in the end of 1929, he personally came to reopen the Sejm accompanied by a hundred armed officers, Daszy The failure of Pi 'It is the basis of democracy, for which there is always place in Poland', he still argued in 1926. When eventually a number of politicians chose exile rather than imprisonment, Pi Polonsky, Pi Roos, Seton-Watson, Joshua Cohen, 'Protection for Obedience', MIT, Roos, Polonsky, Crampton, Roos, Ibid, p. 114. Pi Seton-Watson, Crampton, By carrying out a While initially hesitant, the This, however, had detrimental effects on the democratic institutions and increasingly alienated many parts of society from his", "label": 0 }, { "main_document": "We want to test the hypothesis that the regression coefficients for the first half of the sample (20 observations) are significantly different from that for the second half (20 observations). For this purpose we will conduct a Chow test, which is based on an F-test. Initially, we have the regression model: We introduce a dummy variable (D The resulting regression model is now given by: The dummy variable takes the value 1 for the first half of the sample (20 observations) and the value 0 for the second half of the sample (20 observations). So, the regression functions for the dependent variable in the two half of the sample as resulting from (2) are given by: The assumption that we have made here is that the regressions for the two half of the sample are completely different, since we have allowed the intercept and slope to differ. We now want to test whether the regression coefficients for the first half of the sample (20 observations) are significantly different from that for the second half (20 observations). 
In other words, we test the equivalence of the two regressions and therefore we specify the null hypothesis (H To test the H The restricted model is represented by equation (1) and it assumes no difference in the intercept and slope coefficients across the two half of the sample. The unrestricted model is represented by equation (2) and it allows the intercept and slope coefficients to differ for the two half of the sample. By running two regressions, one for each model we find the SSE for each one, which we then input in Table 1 together with the values for J, T, K and alpha (number of restrictions, number of observations, number of coefficients and significance level respectively). By replacing these values in Table 1 we get the computed values required to reach a decision. Since F=5.638488309>F Thus, there is difference in the intercept and slope coefficients of the two regressions which means that we cannot pool the data into one sample and describe them with one common regression model that has the form (Table 1): Instead of that, the equation describing the regression for the first half of the sample is given by (Table 2): This regression model implies that if x increases by 1 unit, y will be increased by about 0.41 units. The equation describing the regression for the second half of the sample is given by (Table 2): This regression model implies that if x increases by 1 unit, y will be increased by about 4.8 units. The same decision can be reached if we use the p-value. Thus, we reject the H We want to examine whether heteroskedasticity is present. One way for doing that is by plotting the residuals, however, a formal way of testing for heteroskedasticity is the Goldfeld-Quandt test, which is the method applied in the current problem. We specify the null hypothesis (H The data were split in half and ordered based on variance. 
Then, two regressions were run, using the first half of the data", "label": 0 }, { "main_document": "speed and high performance cars competing against each other. At present, the most famous Formula race is Formula-1 (F1) racing. In this assignment, most research was done on Formula-1 engines. Formula-1 car engines have excellent performance as they are about ten times more powerful than normal car engines. To ensure all races are reasonably fair, FIA (F For instance, the engine must be four strokes and consist of 10 circular cylinders with less than 5 valves on each cylinder. Supercharging is not allowed in Formula-1 cars. At present, the 10 cylinders of the engines are usually arranged in V configuration. They often have capacities of about 3000 cc capacity and can generate more than 800 bhp. High torque of the engines means high power output (i.e. high horsepower) as the number of engine cycles per unit time is depend on the torque. Special materials (such as Aluminium alloys, ceramic etc.) were used to manufacture the engine's components for Formula-1 for weight reduction purpose and also to reduce the chances of overheating of the engines. The cooling of the engines was very important in Formula racing cars especially in Endurance racing since the car was required to run for a long period of time (typically 6 hours, 12 hours and 24 hours) over the runway course in order to test the engine's durability. Formula-1 car engines are all air cooled. Less dense ceramic was used in making the internal components of the engines so that it was easier to accelerate and also reduce the engine's fuel consumption. The overall size of the engines should be reasonable light and small, so that they could be easily fitted into the chassis. The numerical data for the engines' torques and compression ratios (CR) are not available in all of the sources. This may because these data have to remain confidential. 
Following are some examples Formula-1 racing engines: Figures 5a and 5b above are Ferrari F2003-GA single-seater racing car and Type 052 engine respectively. Ferrari F2003-GA was Ferrari's 49 Similar to the previous generations of engines, the 3000 Ferrari (type 052) engine is load-bearing and is fitted longitudinally into the chassis. It has a V configuration and 10 cylinders arrangement with 4 valves per cylinder, i.e. 40 valves. It is a Spark Ignition (SI) engine with Magneti Marelli static electronic ignition. Fuel will be fed into the engine by Magneti Marelli digital electronic injection. Type 052 is an evolution of the former 051 engine with several improvements that increase the engine's performance and usability. Size and weight of the engine were reduced as new materials have been used for manufacturing it. The centre of gravity of the car is lowered and therefore improving the overall weight distribution. Maximum engine revolutions of type 052 engine would be about 200 rpm higher than that of the type 051. Further development of the engine would be carried on in order to improve the horsepower and its performance. The types of engines that Ferrari would develop in the future will tend to be more reliable and can integrate well with the", "label": 0 }, { "main_document": "lot of data The larger a corpus is the more detail may be discovered about less frequent items. This principle is based on what Sinclair (1991:100) says about the data is systematically organised Data needs to be presented in a way that allows observation while maintaining objectivity as much as is possible where the nature of the system requires categorisation. the data is not annotated in terms of existing theories While annotation of corpora may be helpful in searching categories such annotation will necessarily reflect any theory on which it is based. 
These principles can now be investigated with Kennedy's (1998:10) observation in mind; ' How the use of corpora is made possible then is in the methodology of first finding and then interpreting concordance lines. A concordancer is not a corpus - it is a computer programme used to search a corpus which displays information in different ways for interpretation. The following shows a randomly selected single word-form search for The following shows the same concordance lines sorted alphabetically according to the first word to the right of By displaying the data differently it is possible to begin to see patterns emerging. In the example The same data can also be displayed sorted to the left. It is now easy to see the frequencies with which It is also possible to search for a phrase which may help to determine patterns within phrases. The following displays 15 lines contained in the result of searching the word string In this limited search it seems A further example of using concordancing software to determine patterns is the use of fixed phrases whose use cannot be explained in terms of rules only as patterns. For example; in the Bank of English the phrase However, four of those occurrences are in sequences which include These types of search make it possible to use a word, a lemma (the different forms a word may take such as its base and - Limiting this type of search is the ability of the human observer to extract information. The lines themselves present information but it is the analyst who must make interpretations and this will require a certain amount of intuition. Hunston (2002) refers to this type of search as 'word-based'. Sinclair (1991) concentrated on the association of sense and syntax and continued to emphasise this with statements like; '... Hunston (2002) makes a distinction here saying that a word-based approach and a category-based approach answer different questions so must be applied at different times. 
The issue is whether to work from a lexical basis to identify patterns, as has been done with the concordance lines above, or from a pattern basis to discover which lexical items share the pattern. The methodology used in a category-based approach looks beyond the concordance line to frequency lists and to collocation. Sinclair (2004) describes collocation as the choice of one word being conditioned by the next. By using concordancing software as an analytical tool it is possible to investigate the presence of patterns in those choices. Frequency word lists show which", "label": 1 }, { "main_document": "be seen to complicate them as he constantly tries to get at the hidden impulses underlying them, often by implementing hypothetical and abstract ideas. Cognitive-behaviourists would therefore see this type of therapy as unhelpful to an anxious person who is already confused and unsure of themselves. Psychoanalysis is carried out by allowing the client to say whatever comes to mind, and it is also a very long and drawn-out procedure, often taking years. This would be criticised by cognitive-behaviourists, as they see anxious people as being in a state of disorder and as such needing a highly structured format in which to approach their problems (Beck Other critics of Freud have also criticised his interpretation of Wolf-Man's condition, as they see it as arbitrary and as assuming many things about Wolf-Man's past for which there is simply no tangible evidence. Critics such as Fish believe that the analysis Freud carried out was not so much an interpretation of Wolf-Man's condition as a persuasion (Fish, 1998). Also, professional bodies such as the Department of Health tend to advocate the use of cognitive-behaviour therapy in anxiety disorders, as can be seen in their clinical practice guidelines, due to the experimental evidence supporting this type of therapy (Department of Health, 2001). 
However, the lack of evidence supporting psychoanalysis as a treatment for anxiety disorders does not necessarily mean it is ineffective. Although the form of psychoanalysis used by Freud is rarely used today, many therapists still use similar techniques and ideas, reconstituted to form what is now called psychodynamic therapy. Furthermore, evidence also suggests that brief psychodynamic therapies can be of use in certain conditions, and although this evidence does not suggest that it is better than other therapies, it does show that it is more effective than no therapy at all (Joseph, 2001). In light of our current understanding of anxiety disorders it is easy to criticise Freud's interpretation and handling of the Wolf-Man case, as he does not take into account the biological mechanisms involved in the creation and maintenance of anxiety, and there is now some evidence suggesting that a cognitive-behavioural approach is preferable in treating anxiety disorders. However, both cognitive-behavioural therapy and effective drug therapies had yet to be realised when the treatment of Wolf-Man was carried out. Therefore the therapy provided by Freud may indeed have been the best option for Wolf-Man, as other therapies around at the time can be seen as much less helpful than psychoanalysis, such as the treatment of taking baths that Wolf-Man underwent in "Dr N.'s institute" in Frankfurt (Gardiner, 1973a, p.87).", "label": 1 }, { "main_document": "one of the key factors to the wide success of Nokia. Its main competitors in the early 2000s produced a variety of platforms. Based on Nokia's case, it can be argued that discipline in executing strategic plans in the long term provided a considerable advantage for a limited time by manufacturing standard platforms instead of various platforms like its competitors. Furthermore, their long-term strategy allowed for innovations, in other words, emergent change. 
The failure to respond to an emerging change in consumer moods caused a loss of market share, as consumer preferences were changing in Nokia's business environment. The duality of Nokia's case shows that aspects of long-term discipline and short-term flexibility are useful and may overlap. But the main indication is that the advantages are more likely to arise from flexibility rather than from discipline (McCaffery, 2000; BBC, 2004; Reuters, 2004). In conclusion, the flexibility to address emergent changes makes a crucial contribution to the process of strategy making and its merits are undeniable, but the discipline in long-term plans is most certainly needed to some extent. The strategic planning point of view in its old sense of formalisation has become ineffective, but its views give a manager somewhat better means to prepare for future changes, and new approaches to planning are developed. Even if the flexibility to address emergent changes is more crucial, there is no absolute right way of strategy formation. The choice rests inevitably with the organization as it considers its future and the best way to reach feasible outcomes for its operations. It is possible to argue that the general guideline is to rely on a combination of methods of strategy formation, as the benefits of different methods can be employed to create successful strategies. Nevertheless, a good strategy maker needs flair and insight for his profession, because our world is not one of ideal models and all situations are unique.
Every irreducible root system is isomorphic to one of: The algebras formed by the operation defined by Tits, above, are the exceptional Lie algebras and a few corresponding to the classical root systems, see [V74, H2003, B89]. Thus concludes our brief foray into the world of normed division algebras. An interesting fact to close here is that quaternion representations of rotation groups may be fundamental to the underlying structure of our universe. The standard model insists on an infinite Euclidean space created by inflation. The plasma that filled space shortly after the big bang fluctuated in density, causing temperature fluctuations in the Cosmic Microwave Background radiation. The standard model predicts these fluctuations, but on larger scales they are not detected. Consider the 4-manifold There are a few explanations for the missing fluctuations, but the simplest is that a spherical universe will not support large-scale fluctuations, analogous to the way a unit circle will only support wavelengths The observable evidence is inconclusive with To recognise a spherical universe we must find which spaces are possible. The operation is similar to our construction of the 24-Cell. We find that the possible symmetry groups are the cyclic group The corresponding spaces S We quickly tour the area of Lie groups and algebras in greater detail. The collection of classical Lie groups is made up of the general and special linear groups, the orthogonal groups, the unitary groups and the symplectic groups. They are all examples of Theorem 19. Every matrix Lie group G is a Lie group. The proof is given in most good Lie theory texts. Example 4. Lie groups. This can be shown by using methods similar to the above. iii) G has an identity and an inverse for each element. The product is smooth, as is the inverse element map, hence G is a Lie group. A matrix Lie group is Example 5. U(n) is a If A is unitary, the column vectors are orthonormal. 
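The closing claim of Example 5, that a unitary matrix has orthonormal columns, is easy to check numerically. A small sketch in Python; the particular matrix (a rotation combined with a phase) is just an illustrative element of U(2):

```python
import cmath

# A sample unitary matrix in U(2): a rotation through theta with a phase phi
# applied to the second column.
theta, phi = 0.7, 1.2
A = [[cmath.cos(theta), -cmath.sin(theta) * cmath.exp(1j * phi)],
     [cmath.sin(theta),  cmath.cos(theta) * cmath.exp(1j * phi)]]

def inner(u, v):
    """Hermitian inner product <u, v> = sum of conj(u_i) * v_i."""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

cols = list(zip(*A))           # the columns of A
g00 = inner(cols[0], cols[0])  # expect 1 (unit length)
g11 = inner(cols[1], cols[1])  # expect 1 (unit length)
g01 = inner(cols[0], cols[1])  # expect 0 (orthogonal)
```

The same check applied to all column pairs is exactly the statement that A*A equals the identity.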
A similar idea works for the other classical Lie groups. A matrix Lie group We say G is simply connected if every loop in G can be shrunk continuously to a point in G, as well as G being connected. Now a Lie algebra of a Lie group Theorem 20. Then Proof: We are familiar enough with differential topology to recall that Now, we let The map So Then But We also have But This gives a neat way to calculate the Lie algebra of a given Lie group since we see that Example 6. The The Lie algebra of a matrix Lie group The classical Lie groups have their own Lie algebras; we have already stated which are the classical Lie algebras and which Lie groups they correspond to. It can be", "label": 0 }, { "main_document": "I do not agree with Anscombe's idea that we should abandon moral obligation and duty. Anscombe does point out some problems with the two major and conflicting ethical views of consequentialism and deontology, but her alternatives are unsound and there are contradictions in her argument. I shall argue that though her principal thesis and ideas seem sound, when Anscombe tries to explain differences between what is intrinsically 'just' and what can be 'just' depending on circumstances, she misses the point. I also think that her link between the 'just' and the 'right' is flawed (or, as she says, that we as yet do not have a link). I shall argue that moral obligation will occur with any theory and that by waiting for her "adequate philosophy of psychology", she is postponing what is inevitable. I do not think that moral obligation is a "survival", but something that is a necessity for any ethical theory. The key terms I shall be using are ones used by Anscombe herself and will require explanation, as their meanings are similar. 'Just' and 'unjust' are terms relating to justice and are not indications of right or wrong (though I shall comment on the connection). 'Right' and 'wrong' themselves mean morally or ethically right or wrong. 
I will not differentiate the terms ethics and morals (as has been done) for this discussion. 'Ought' will refer only to 'moral ought', where the word implies a moral obligation, something that Anscombe does not believe in. Elizabeth Anscombe begins her critique of modern moral philosophy with the thesis that we should not attempt philosophy of ethics until we have "an adequate philosophy of psychology" (Anscombe p.26 XXX). Despite claiming that ethical philosophy should not be done, she then goes on to say that modern ethical philosophy is fundamentally wrong as it is based on the idea of moral duty and "ought", and that these concepts are "survivals" from an earlier conception of ethics that is no longer applicable. Anscombe argues that this law-based conception of ethics has come from Christianity, which "derived its ethical notions from the Torah." (Anscombe p.30 1981). Modern philosophy has been most interested in ethics that does not require religion: Anscombe argues that it is being done by people who live in a Christian/post-Christian society and that they have been influenced by the society that accepts moral obligation. "for Anscombe, the law conception of morality is untenable without God." (D. Jamieson p.479, 1993). Anscombe says that as we have abandoned the notion of God in moral philosophy, we should also now abandon the law-based conception of moral obligation, as it is pointless without a superior legislator. She disagrees with Kant that people are able to regulate and legislate for themselves. Anscombe attempts to show that some acts that we generally perceive to be wrong with our rule-based conception of ethics actually do not have a moral value. She uses the example of owing a grocer money to illustrate that there is nothing intrinsically wrong or immoral about withholding payment. 
'X owes Y money' is", "label": 1 }, { "main_document": "to the neurodevelopmental effects such as nausea, headaches, diarrhea, and irritation of the mucous membranes, tremors and convulsions, and nervous system abnormalities. A single or repeated dose of DDT (5 mg/kg) in rats causes liver damage, tremors, a decrease in thyroid function and impaired neurological function. Chronic exposure to DDT in birds and mammals affects estrogenic properties, antiandrogenic sexual development and the feminization of males (alligators and Florida panthers). Because of all these severe environmental impairments brought by DDT, a global ban on DDT is necessary. DDT is still being used in many poor developing regions, such as Africa and India, for controlling malaria. Although resistance has been shown in many of these areas, it is very difficult for them to stop the use of DDT. DDT is a cheap man-made chemical, which can be manufactured easily by the reaction of chlorine with a double benzene ring at optimum temperature and pressure. In addition, DDT can be stored and carried easily. The application of DDT is also simple. Therefore, DDT is still very popular in those poor developing countries. The invention of a new chemical to replace DDT is needed before a global ban on DDT. The new chemical should be easy to make, store, handle and apply. More importantly, it should be cheap enough for those poor developing countries to afford. On the other hand, education is also very important. Besides the application of insecticides, keeping hygiene to a high standard can also reduce the incidence of malaria. According to the graph in the previous results section showing the change in eggshell index of the peregrine falcon (Falco peregrinus) and the sparrowhawk (Accipiter nisus) during 1900-2000, the changing trends of both species are very similar. This fact supports the idea that the contamination of DDT affects both species in a very similar way. 
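The eggshell index discussed here is presumably Ratcliffe's (1967) index, shell weight divided by the product of egg length and breadth. A minimal sketch of the calculation; the sample measurements below are invented purely for illustration:

```python
def eggshell_index(shell_weight_mg: float, length_mm: float, breadth_mm: float) -> float:
    """Ratcliffe eggshell index: shell weight (mg) / (length (mm) * breadth (mm)).

    A thinner shell at the same egg size gives a lower index, which is why
    the index tracks DDT-induced shell thinning.
    """
    return shell_weight_mg / (length_mm * breadth_mm)

# Invented illustrative measurements for a single egg:
idx = eggshell_index(4700.0, 52.0, 41.0)
```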
The eggshell index of the Peregrine falcon, The eggshell index decreased rapidly to 1.8 between 1905 and 1910. After that, the eggshell index remained very stable until 1930; there was a slight decrease during 1930-1940. A marked decrease started in the late 1940s, dropping to its lowest point, 1.4, in 1960. This is due to the introduction of DDT into agricultural use at that time. Although the problem was not noticed before 1950, laboratory work shows that the eggshell index started to decrease very shortly after the introduction of DDT. The eggshell index started to rise between 1960 and 1970, owing to the ban on DDT in Scandinavia and Holland. The eggshell index increased very rapidly after 1970, reaching its peak, 1.8, in 1990. This is due to the various reintroductions, breeding programmes and the banning of DDT in many European countries. According to the graph, there was a slight decrease in eggshell index over 1900-2000. This may be because the eggs were collected from polluted areas; the population of the Peregrine had in fact recovered to its pre-war level in 1985. The pattern of the eggshell index change in the Sparrowhawk, However, the eggshell index of the Sparrowhawk started at a lower level. The eggshell index of the Sparrowhawk was 1.3 in 1905. There was a general
The suspension of beliefs, assumptions and biases about the phenomena under investigation is known as 'phenomenological reduction' (Streubert and Rinaldi Carpenter, 1995). Interviews and meetings with participants to look at diaries will take place in the particular hand clinic the participant attends, so that the environment is familiar to them (Mays and Pope, 2000; Sim and Wright, 2000). Participants will be invited to take part in one-to-one interviews with the researcher. The interviewer's appearance should be similar to the interviewee's so as not to come across as too authoritative, and the interviewer should try to make the interviewee comfortable in expressing their true experiences (Robson, 2002; Streubert and Rinaldi Carpenter, 1995). Semi-structured interviews are appropriate where a study focuses on the meaning of particular phenomena to the participants (King, 1994, cited in Grbich, 1999); they are therefore the most appropriate interview design for this study. Semi-structured interviews have predetermined questions, but the order in which they are asked can be modified based on what the interviewer feels is most appropriate. The wording, order and explanations of the questions can vary, and questions which seem inappropriate with a particular interviewee can be omitted, or additional ones added (Robson, 2002). The design of semi-structured interviews enables interviewees to elaborate on their answers, and the interviewer to ask probing questions, so comprehensive data can be obtained (see Table 2, 'The 4 Components of Asking'). In the initial interview, participants will be asked open questions about their attendance at appointments and whether they follow the exercise and splinting regime, and in what ways they think it helps. Further and probing questions will be asked, depending on the answers given. Probes are neutral statements which do not bias the subject to respond in any particular way. 
They are used to encourage more information, or elaborations, for example, "tell me more about it" (Depoy and Gitlin, 1998). Summary questions are also useful at the end of an interview, to enable the interviewer to check they have understood the interviewee correctly (Grbich, 1999). The second interview will provide an opportunity to expand, verify and add descriptions to the phenomena being researched. The researcher can refer to limited or inadequately described information to gain further descriptions. Often, the participants will also have additional thoughts about the phenomena, following the initial interview (Streubert and Rinaldi Carpenter, 1995). The second interview will take place after the 4 weeks that the participant has kept a diary, and will also be an opportunity to discuss what is written in", "label": 1 }, { "main_document": "order to manage it, the bidder/seller sends an instruction which is called This instruction means "Are you available?" If the server is available, that means it can receive this instruction and so can answer. Hence, if the seller/bidder receives an answer from this server, then this server is available and the seller/bidder will communicate with this one. If the server is not available, then it won't answer, and after 1 second the seller/bidder will send the same request to another server, because that means the first one is not available. If the second server is not available, the seller/bidder will ask the last one. If all three are not available, then the seller/bidder informs the user. So, if a response is not received from a server after 1 second, that means the server is not available and we have to try another server. The instruction The schema below explains this process of finding an available server. The bidder/seller needs to communicate with the server in order to receive or send some information (for example, new bidder, new seller etc.). Some of the possible communications are explained here. 
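The probe-and-fall-back procedure described above (send an "Are you available?" instruction, wait one second, then move on to the next server) can be sketched as follows. The host names, port and wire format are illustrative assumptions, not the actual assignment code:

```python
import socket

# Illustrative server list; the real deployment names are assumptions.
SERVERS = [("beowulf4", 5000), ("beowulf5", 5000), ("beowulf6", 5000)]

def find_available_server(servers, timeout=1.0):
    """Return the first server that answers an availability probe.

    Each server gets `timeout` seconds to reply; on silence or a connection
    error we move on to the next one. Returns None when every server is
    unavailable, in which case the caller should inform the user.
    """
    for host, port in servers:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                s.sendall(b"AVAILABLE?\n")    # assumed probe message
                if s.recv(64):                # any reply counts as "available"
                    return (host, port)
        except OSError:                       # timeout, refused, unknown host...
            continue
    return None
```

The 1-second timeout is exactly the rule in the text: no reply within a second is treated as "not available" and the next server is tried.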
New Bidder: Here is the sequence diagram for the creation of a new bidder. This sequence diagram is the same for the creation of a new seller. However, the message tag used for the bidder is the number 18 (used for every transaction that creates a bidder) and the number used for the seller is 16 (for all the transactions that create a seller). In this sequence diagram, we can see the User. We show it only in this sequence diagram because it is not very important for the explanation. Bid for an auction: Here is the diagram used when a bidder wants to bid in an auction. Add and start an auction (a seller does that): Here is the sequence diagram used when a seller wants to add and start a new auction (please see next page). In these diagrams, we can see some internal actions like "Add auction". Of course, these internal actions are complex. They use mutual exclusion to access the different data, synchronise the data list with the other servers, etc. The other sequence diagrams (show auctions, show time etc.) are not shown here. This program doesn't manage the case where a server dies. Indeed, if a server dies, the system still works (with only 2 servers) but the dead server doesn't become alive again. If the root server dies, the child processes (clock synchronisation and accepting connections from outside) would still work, so the death of the root server is not a problem. However, if one of the two other servers (the second one, on Beowulf6) dies, the system can no longer work, because this server has to manage the auction's duration (see above). So, I think the system depends too much on the reliability of the servers. If one dies, problems can occur, and if the one on Beowulf6 dies, the system dies as well. So, that's a problem in this program. A", "label": 0 }, { "main_document": "tip. The selection of the sensing device depends on the minimum change in displacement that it must be able to detect. 
We must be careful to find a reasonable resonant frequency for the pick-up mechanism. We are given that the displacement sensor must be able to sense a change in height of 0.01 The radius of the stylus tip is 2 The smallest measurable sinusoidal surface can be taken as 0.01 In order to work out the wavelength, and hence the corresponding frequency, let us consider figure 3, which shows the tip on a sinusoidal surface with the same radius of curvature. We can use Pythagoras' theorem and the numbers shown in the diagram to calculate one half of the wavelength and therefore one whole wavelength: We can work out the frequency using equation C and taking the horizontal speed as 0.0005ms A suitable resonant frequency for the system is therefore about 7900 rads The key feature of the sensing device we are looking for is its sensitivity. The device must be able to measure a change in displacement of 0.01 Inductive and capacitive devices are not suitable because neither of these has the sensitivity required. A typical range for these displacement sensors is Another drawback of inductive sensors such as LVDTs is their inertia. The magnetic core has too much momentum to allow quick changes of direction in response to the minute changes in displacement that need to be measured. Optical encoders are binary-type devices. The output is in digital form and expressed as either "on" or "off". Ideally we are looking for a sensor that will give a continuous reading as the stylus moves horizontally across the surface. A digital device is not really suitable for this purpose. We would prefer a device that gives a continuous output that can be transferred to a computer via an analogue-to-digital converter for data presentation. The obvious choice for the displacement sensor is a laser interferometer, because it is the most accurate of the devices available. These are very precise measuring devices that rely on the phenomenon of interference to provide a measure of displacement. 
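The chord geometry behind the wavelength calculation above can be reproduced numerically: taking the tip radius as R = 2 µm and the surface height as h = 0.01 µm (the units are an assumption consistent with the quoted figures), half the wavelength is √(R² − (R − h)²) by Pythagoras, and the angular frequency follows from the 0.0005 m/s traverse speed, landing at the quoted "about 7900" rad/s:

```python
import math

R = 2.0        # stylus tip radius, um (assumed unit)
h = 0.01       # smallest measurable surface height, um (assumed unit)
v = 0.0005     # horizontal traverse speed, m/s

# Pythagoras on the chord between tip and surface: about 0.2 um.
half_wavelength = math.sqrt(R**2 - (R - h)**2)
wavelength = 2.0 * half_wavelength * 1e-6   # convert um -> m, about 0.4 um
f = v / wavelength                           # about 1250 Hz
omega = 2.0 * math.pi * f                    # about 7860 rad/s, i.e. "about 7900"
```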
Figure 4 shows the layout of an interferometer whose operation is described below. A beam of monochromatic light from a laser, of single frequency One beam travels along a path of length x The other travels a distance of x Both beams are reflected back off their respective mirrors to a detector. On reaching the detector the beams can be described by the following wave equations: Equation taken from X Ping Liu. Lecture Notes " The resultant light beam incident on the detector is given by: This describes a wave with amplitude a where (which is the optical path difference) The intensity I The intensity is therefore a function of the optical path difference. The distance that the movable mirror moves through is related to the number of fringes that are detected. Maxima occur when x = 0, In these cases the two beams are interfering constructively because they are in phase. A bright fringe", "label": 1 }, { "main_document": "these to help interpret what is meant, yet the meaning is then normally taken from the metacommunication rather than the words, especially if what we're saying conflicts with what we're doing. As well as health professionals communicating with other people, they also need to listen effectively. By doing this, Stanton (2004) believes that it results in: Encouragement to others - results in others losing some or all of their defensiveness, and then they usually try to understand you better by listening more effectively Possession of all the information - the more information obtained, the easier it is to solve problems and make decisions more effectively. 
Careful listening usually motivates the individual to continue talking, and with as much information as possible health professionals are usually then in a position to make accurate decisions Improved relationships - as you listen, you understand them better; they appreciate your interest in them, resulting in friendship maybe deepening Resolution of problems - everyone wants understanding, and the best way of expressing this is through sensitive listening. Disagreements and problems can best be solved when individuals listen carefully to each other Better understanding of people - listening carefully gives clues as to how others think, what they feel is important and why they're saying what they're saying. This enables professionals to work better with clients Unfortunately, health professionals are faced with situations where they will have to deliver bad news. When it's this form of communication it is best that the health professional listens to the individual before delivering the information and, when it's time to do so, delivers it in an effective manner that doesn't bring about complications to the situation. They should also make sure they include all the relevant information and don't try to break the news gently by not making it sound exactly how it is, just to ensure their own emotions are satisfied. Therefore, the health professional should try to put aside their emotions in order to be there for their client and also to enable effective communication to take place. Therefore, in order to respond to others effectively, health professionals need to possess a number of skills and values to enable this to take place. 
As a subset we discussed this and believe a few of these values include: By carrying out an interview in our subset it was believed that we would get to see an example of the wide range of communication and interpersonal skills that are needed in order to become a competent practitioner, enabling us to work with a variety of people in a range of settings. We therefore felt that this would help us when we work within our placements, as we would hopefully have built upon different skills through this activity. Within our subset each of us was studying something different within health care, and so this gave us an insight into what working with different health professionals might be like. In order to come up with reasonable interview questions we all went away and came up with five questions which we thought would", "label": 1 }, { "main_document": "exams, and overall work effort seem to have a significantly positive effect on exam performance; however, when including it in the multivariate regression of my preferred model it is not significant. Regressing QTmark against number of siblings and number of times attending Top Banana or similar did, rather surprisingly, not return any significant coefficients. Expenditure on alcohol in pounds sterling per week. After screening the variables, I ran a series of multivariate regressions on the variables I found to be individually significant. The regression outputs for the different multivariate regression analyses are included in the appendix. Being a UK student does not seem to have a large negative effect on the exam results compared to non-UK students when running a simple regression analysis; however, when including it in the multivariate regression it has a negative and significant coefficient of over 4. 
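A regression of the kind described, exam mark on attendance and attainment dummies, can be sketched with ordinary least squares via the normal equations. The data below are synthetic (coefficients 50, 9.7 and 5.0 are planted without noise, so OLS recovers them exactly); the variable names merely echo the assignment's QTmark, DAttCL and MathA and carry no real data:

```python
import random

random.seed(0)

# Synthetic student records: QTmark = 50 + 9.7*DAtt80 + 5.0*MathA, no noise.
n = 200
rows = []
for _ in range(n):
    datt80 = random.random() < 0.4   # 80-100% attendance dummy
    matha = random.random() < 0.5    # A in A-level Maths dummy
    mark = 50.0 + 9.7 * datt80 + 5.0 * matha
    rows.append((1.0, float(datt80), float(matha), mark))

# Normal equations (X'X) b = X'y for the 3 regressors (intercept + 2 dummies).
k = 3
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
xty = [sum(r[i] * r[3] for r in rows) for i in range(k)]

# Solve by Gaussian elimination with partial pivoting.
for col in range(k):
    piv = max(range(col, k), key=lambda rr: abs(xtx[rr][col]))
    xtx[col], xtx[piv] = xtx[piv], xtx[col]
    xty[col], xty[piv] = xty[piv], xty[col]
    for rr in range(col + 1, k):
        f = xtx[rr][col] / xtx[col][col]
        for c in range(col, k):
            xtx[rr][c] -= f * xtx[col][c]
        xty[rr] -= f * xty[col]
beta = [0.0] * k
for rr in range(k - 1, -1, -1):
    beta[rr] = (xty[rr] - sum(xtx[rr][c] * beta[c] for c in range(rr + 1, k))) / xtx[rr][rr]
# beta recovers [50.0, 9.7, 5.0]: intercept, attendance effect, Maths-A effect.
```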
My preferred model of exam performance is: And the regression results are: The justification for these variables to be included in my model is given throughout the assignment and is based on statistical evidence, empirical work done by others and my own experience. The variables included are both jointly and independently significant, except for ParUni, which has a borderline t-stat of 1.958. The appendix includes tests of the joint significance of both the year dummies and the attendance dummies. The results above show that compared to between 0 and 20 % attendance, attendance of 80-100 % improves exam results by 9.73 percentage points. We can also see that having an A in Maths from A-level boosts exam performance by 4.97 %-points. Furthermore, UK students receive on average 4.68 %-points lower exam marks than international students, and people who are studying a single-honours Economics degree perform 4.31 %-points better than those on other degrees. If one or more parents attended university this enhances the exam result by 1.73 %-points. My preferred model of exam performance is accordingly determined by prior performance (AlevelsA and MathA), heritage (ParUni and UK), the course undertaken (Econ), class and lecture attendance (the DAttCL dummies) and the year of survey (the DYear dummies).", "label": 0 }, { "main_document": "or inclusions formed during manufacture. Any region with a high stress concentration is susceptible to failure due to fatigue. If the time-varying loads are high enough, often due to the stress raiser, then local yielding may occur, even though the nominal stresses in the structure are well within the material's yield limit. "The localised plastic yielding causes distortion and creates slip bands along the crystal boundaries. As the stress cycles, additional slip bands occur and coalesce into microscopic cracks. 
Even in the absence of a notch, such as in a specimen, this mechanism still operates as long as the yield strength is exceeded somewhere in the material\" Norton, Robert L., \"Machine Design: An Integrated Approach\", Prentice Hall, 1996, p. 349. The size of a crack in a specimen or component could be anything from microscopic (<0.2mm) to macroscopic. Sudden failure occurs when a crack has grown to such an extent that the stress intensity factor, K, at the crack tip reaches the material's fracture toughness, Kc. Crack growth occurs when the cyclic tensile stresses in the component are high enough to cause crack propagation. There are in general three types of time-varying stress; Figure 4.1 shows these conditions. The first case is The second case, and the one which will be used in the testing of the samples, is This condition is given a stress ratio of R=0. The final loading condition is The stress ratio for this loading condition is R = 1 Fluctuating , R = 1 The most common way of displaying fatigue data is by means of a stress-life relationship graph. In these the stress amplitude, S, is plotted against cycles to failure, N. These are known as S-N graphs, and when both axes are plotted on a logarithmic scale the data produce an almost straight line. The data for these graphs are usually obtained from the results of a number of actual tests. When S-N curves aren't available for fatigue life predictions, the following assumptions are made in order to form a prediction for the S-N curve. For steel For Aluminium Sa(10 Sa(10 The S-N curve can then be defined by the following equation Equation 4.6 - S-N Curve Equation 4.7 - Determining Constants a & b When considering the fatigue life of a component rather than a test specimen it is necessary to include other factors that influence its life. These can be seen in Equation 4.8. Note the dash after the stress amplitude denotes the component. 
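Equations 4.6 and 4.7 are not reproduced above, but the two-point construction they describe can be sketched numerically. This is a hedged illustration only: it assumes the common textbook form S = a·N^b fitted through two anchor points, and the 0.9·Sut and 0.5·Sut anchors and the Sut value below are standard steel approximations (after Norton), not data from this report.

```python
import math

def sn_constants(s1, n1, s2, n2):
    """Fit S = a * N**b through two (N, S) anchor points on log-log axes."""
    b = math.log10(s1 / s2) / math.log10(n1 / n2)
    a = s1 / n1 ** b
    return a, b

# Illustrative values only: the common textbook estimates for steel
# (S = 0.9*Sut at 1e3 cycles, S = 0.5*Sut at 1e6 cycles), with an
# assumed ultimate tensile strength Sut of 400 MPa.
sut = 400.0
a, b = sn_constants(0.9 * sut, 1e3, 0.5 * sut, 1e6)

def stress_amplitude(n):
    """Predicted stress amplitude (MPa) at n cycles."""
    return a * n ** b

# The fitted line reproduces the two anchor points.
print(round(stress_amplitude(1e3), 1))  # 360.0
print(round(stress_amplitude(1e6), 1))  # 200.0
```

On log-log axes this is exactly the "almost straight line" described in the text, which is why two anchor points are enough to define the whole curve.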
Equation 4.8 - Component Life There were two different types of beam tested for this report, a plain rectangular beam shown in Figure 5.1, and a beam with a hole through the centre shown in Figure 5.2. The material for both beams was Aluminium alloy 6082-T6, a ductile material possessing isotropic properties. \"Aluminium alloy 6082 is a medium strength alloy with excellent corrosion resistance. It has the highest strength of the 6000 series alloys. The addition of a large amount of manganese controls the grain structure which in turn results in a stronger alloy\" The A to Z of Journals of Materials", "label": 1 }, { "main_document": "some of the proteins involved in membrane translocation. Endocytosis of a bacterium by an ancestral eukaryotic cell would create an organelle with a double membrane. According to this theory the inner mitochondrial membrane would be derived from the original bacterial membrane, and the outer membrane would be derived from the eukaryotic plasma membrane. Recent research by Martin and M In the current picture of the origin of the eukaryotic cell, the mitochondrion was a \"lucky accident\" (Vogel 1998). The ancestral host cell simply engulfed the mitochondrion ancestor, did not fully ingest it, and an even more successful cell resulted. According to the hydrogen hypothesis, however, the first eukaryotic cell did not form simply by accident. Instead, it was the result of a purposeful union between an archaebacterial host cell, a methanogen that consumed hydrogen and carbon dioxide to produce methane, and a future mitochondrion symbiont that made hydrogen and carbon dioxide as waste products of anaerobic metabolism. Thus, although the symbiont was probably capable of aerobic respiration, the symbiosis began as a result of the products of anaerobic metabolism. 
The host's dependence upon hydrogen produced by the symbiont is identified as the selective principle that consolidated the common ancestor of eukaryotic cells (Martin and M The hydrogen hypothesis has some important implications that contradict the current view of the relationship between eukaryotes and archaebacteria. In the current view, the eukaryotes branched off from the archaebacteria long before the archaebacteria had divided into their present-day groups. The hydrogen hypothesis implies that the first eukaryotes appeared much later in the evolutionary picture, meaning they are more closely tied to the archaebacteria. In order for the hydrogen hypothesis to be confirmed, an analysis of the complete sequences of eukaryotic and archaebacterial genomes must be completed (Vogel 1998). Another recent explanation of the origin of eukaryotes called the \"syntrophic hypothesis\" was presented by L Though they were independently proposed, the syntrophic hypothesis is complementary in several aspects to the hydrogen hypothesis. Both hypotheses agree in the suggestion of an anaerobic metabolism for the origin of mitochondrial symbiosis. The theories are also similar in some metabolic details of the symbiosis and molecular features of archaea (L The major difference between the two hypotheses is in the nature of the original bacterial partnership. As previously stated, in the hydrogen hypothesis, the original symbiosis is thought to have taken place between a methanogenic archaebacterium and a eubacterial ancestor to the mitochondrion. In the syntrophic hypothesis, the original symbiosis is conceived to have taken place between a methanogenic archaebacterium and an ancestral sulfate-respiring delta-proteobacterium. The former provided the central genetic material and nucleic acid metabolism while the latter provided most metabolic characteristics (L Mitochondria are thought to have derived from a later, independent symbiotic event. 
As with the hydrogen hypothesis, further genetic sequencing analyses are necessary in order for the claims of the syntrophic hypothesis to be upheld. It is generally agreed by the scientific community that eukaryotic cells originated from some prokaryote-like ancestor. There is mounting evidence which supports the theory that \"the modern, organelle containing eukaryotic cell evolved", "label": 1 }, { "main_document": "humid Britain however is slowly washed out. This leaves a non-calcareous clayey material, the brown earths. Nevertheless, a large part of the British solid chalk outcrop was covered by superficial deposits such as loess, clay-with-flints, Plateau drift and Coombe deposits and therefore does not carry brown earth. The present Chalk grassland is created and maintained by sheep grazing. Evans (1993) summarizes the history of the English chalklands under human influence. The prehistoric development happened in the following way: Woodland clearance started in the early Neolithic. In the later Neolithic land abandonment period the vegetation regenerated in different ways: steep slopes developed into scrub woodland and woodland, remaining calcareous due to unstable conditions and soil creep; river valley bottoms showed a development towards dry grassland, decalcification, and subsequent paludification and alluviation; forest regeneration on flat, level areas did not happen, and stable, short-tufted, impoverished grassland developed there. The soils became decalcified in these sites. This phenomenon is probably related to a decrease in edaphic activity due to covering of the surface with a dense thatch of grass under extensive grazing conditions, as shown in Figure 5. Soil stripping was used to a large extent when huge earthworks were constructed. This opening of the landscape brought new elements of vegetation into its current place, e.g. terricolous lichen colonies (Gilbert 2000). In the Bronze and Iron Age grassland was widespread. 
Woodland regeneration did not take place on any larger scale. The landscape we know from the region did not undergo major changes after that (Evans 1993). A band of Jurassic limestone extends from north Somerset to Yorkshire. Soil formation here is fairly similar to that of the chalk downlands, with the difference that limestone is harder and weathering thus happens more slowly. Soils from weathered limestone are made up of calcareous and non-calcareous clays, and oolitic and sandy limestones. In regard to the impact of early Man on soils and vegetation, processes similar to those in the chalk downlands have happened here. For these reasons it is not dealt with in any further detail. In the late Quaternary some parts of southern England were covered by loess and coversands (Antoine, Catt et al. 2003). On the European continent this was and still is the preferred soil type for agriculture; e.g. the Neolithic Bandkeramik culture was the first culture to practice agriculture in the deep loess soils of Germany (Ellenberg 1982; K The loess cover of England has weathered and transferred to the non-calcareous fraction of most soils on the chalklands (Courtney, Curtis et al. 1976). Large parts of it were eroded after woodland clearance and agricultural practices in pre-historical and historical times. In a computer simulation, Favis-Mortlock, Boardman et al. (1997) have elaborated an initial loess cover thickness of c. 1.2 m at around 7000 BP which became gradually eroded in the period between 4000 BP and 1800 BP due to intensifying agricultural practice. This typically yellowish-red, clay- and large-flint-containing material covers south-central and south-eastern English upland slopes and plateaus. 
This heavy soil was hardly suitable for pre-historic agriculture and therefore was used as grazing", "label": 0 }, { "main_document": "However, although RTAs with the North may increase policy credibility, investment and growth, this is unlikely to happen in the absence of complementary domestic policy reforms (Hoekman and Schiff, 2002). The adjustment to differentiated products such as fruits and vegetables will require these issues to be dealt with on an RTA level. Hence, it may be more efficient to negotiate the topics regarding FDI, such as new rules that deal with investment, copyright law, and the harmonisation of health and safety regulations, on the multilateral level (Josling, 1993). Generally it is assumed that DCs, having small domestic markets, benefit from economies of scale and product differentiation that arise from increased market size (Langhammer and Hiemenz, 1990). However, agriculture tends not to enjoy these economies; sourcing out is rarely possible; and innovation rents are less common (Josling, 1993). Differences in resource endowments usually ensure that agricultural trade will be profitable between areas which have ample arable land in relation to population and those that have less. These differences in endowments will often occur across regions, not within regions. Similarly, trade in crops requiring particular climatic conditions will tend to be among rather than within regions (Josling, 1997, p. 30). Accordingly, DCs trade agricultural goods predominantly with a single large trading country such as the EU or the US, not with each other. This circumstance led to a group of African countries lobbying the EC against tariff cuts on tropical products in the Uruguay Round negotiations (Hine, 1992). North-South RTAs are popular among DCs because they offer some insurance against possible disruptions to access to the market of these large member countries. 
However, this insurance comes at the cost of becoming more dependent on a larger partner, possibly making the DC more vulnerable to a re-imposition of protection and to economic fluctuations in the larger country. DCs may even start competing for insured market access to become the favoured low-cost supplier which then becomes more attractive to FDI (Fern As to South-South RTAs, agricultural trade is increasingly taking on the nature of industrial trade patterns, including two-way trade within the same sector. In the case of intra-industry specialisation, trade grows among countries with rather similar resource endowments and at similar stages of development. Trade in processed foodstuffs already moves among countries each of which produces similar products (Josling, 1997, p. 30). Although DCs have a strong interest in RTAs, in particular with the North, it is clear that the gains from increased market access are potentially greater the more countries are involved and the broader the products and issues covered, making multilateral trade liberalisation potentially more beneficial to DCs (Anderson, 2005). Multilateral trade negotiations in the WTO with its 140 members are often considered to be slow and cumbersome, while RTA negotiations involving only a small number of countries are considered to be comparatively easy (Anderson, 2005). In practice, the serious and definitive multilateral negotiations are concentrated within a relatively short period of time, and most of the actual negotiations involve a limited number of the major trading countries or blocs (Stern, 1999). However", "label": 0 }, { "main_document": "this approach stemmed from the immense weight of the deck. As it was pushed further out, it began to point downwards. Again, a functional analysis approach was used to decide on a method to combat this. 
The end result was that, \"the launching system also included an additional independent nose recovery device placed on the end of the deck, allowing it to be pulled upwards or pivoted during operations.\" [Reed, (2005)] The choice to use steel for the superstructure and pylons also provided a number of benefits, especially in terms of human resource management. By using steel, \"of the total number of man hours required for fabrication and assembly, only about 4% were expended in the air.\" [DYWIDAG, (2006)]. Prefabrication of pylons offsite, and assembly behind the abutments, also helped to shield workers from \"inclement weather and potentially dangerous heights,\" [Kren, (2002)] according to Project Director, Mark Buonomo. Buonomo also claims that effective use of computer technology was a key factor influencing the overall success of the project - particularly in terms of creating and raising the deck. \"We needed perfect synchronisation of all the traversers, those indispensable machines that lift and push the steel \"ocean liner\" from the pier to the pile work until it reaches its final position, above the Tarn.\" [Buonomo, (2006)] The joining of the two parts of the decking was completed in May 2004, and thanks to this computing technology, was accurate to the nearest centimetre and well within the given timescale. The use of GPS was also praised by Eiffage's director of works, Thomas Tieberghien, who claims to be delighted with the 4-mm accuracy recorded. He goes on to state, \"I think it's the beginning of this kind of precision. Ten years ago, it was 10 cm.\" [Reina, (2004)] It is obvious that a great deal of research and planning during project specification was the primary enabling factor behind such effective analysis and selection of tools. Buonomo states that, while Eiffage engineers found both steel and concrete possibilities for the bridge to be of a similar cost, \"the steel option was more slender and allowed faster construction. 
A concrete deck would have been greatly heavier, over 0.5m deeper and needing three times as many cables.\" [Reina, (2004)] Other key strengths of the project included the Environmental Protection Plan developed at the beginning of the project and remaining in place during the entirety of the bridge's construction. \"It identifies the project's different pollution risks, lays down preventative arrangements, organises checks and provides measures to handle any pollution that may arise. A specially dedicated environmental protection team has drawn up a rigorous plan to ensure that nature is respected in all events.\" [Gimmig, (2005)] Over the course of the 39 month construction period, water, air and noise issues were all checked on a regular basis. Unfortunately, despite enormous success on the whole, there is evidence of poor planning and management oversight in some areas of the project. One unforeseen problem was the increase of prestressing at the pier tops as a result of the concrete casting process.", "label": 1 }, { "main_document": "both centrosomes of the mitotic spindle. However, as the mitotic spindle rotates 90 The rotation brings the centrosome into a different cellular environment, this affects PIE-1 stability and/or binding. MEX-5 and MEX-6 are also necessary for partitioning PIE-1 into germline destined cytoplasm just before cell division as well as localising it to P granules, and this localisation somehow involves the second zinc finger of PIE-1. The localization and the genetic properties of PIE-1 suggest that it represses the establishment of somatic cell fate and preserves the totipotency of the germ cell lineage. Thus PIE-1 protects the germline cells from somatic differentiation pathways, by preventing transcription of factors such as SKN-1 and PAL-1. PIE-1 is thought to prevent transcription via interfering with a CTD kinase, which phosphorylates the CTD domain of RNA polymerase II, a process normally necessary for transcription. 
Furthermore, the stability and expression of maternal effect genes depend on PIE-1 activity. One example is MEX-1, which is essential for maintenance of the germline but does not directly control gene transcription. The mex-1 mutant phenotype is similar to par mutant phenotypes, but somewhat weaker, and mex-1 mutants exhibit cell fate changes, defects in the localisation of P granules and of the SKN-1 and PIE-1 proteins, as well as defects in asymmetrical cell division. The MEX-1 protein has two zinc finger motifs similar to the ones found in PIE-1 and is localised in a similar pattern to that of PIE-1. MEX-1 is localised to the cytoplasm of germline blastomeres and P granules until the birth of the P4 daughters, but in contrast to PIE-1, it is not found in the nucleus. The AB blastomere divides into two developmentally equivalent daughter cells, ABa and ABp, which have asymmetrical contacts with the neighbouring blastomeres EMS and P2. This enables a cell-cell interaction between ABp and P2 by the late 4-cell stage of embryogenesis that is essential for formation of a dorsoventral axis and causes ABp to adopt a different developmental fate from ABa. Translation of maternal When As many other maternal mRNAs, Two regions within the The balance between POS-1 and SPN-4 is thought to control the translation of maternal POS-1 is a protein that binds to the SCR of glp-1 and is thought to negatively regulate the translation of glp-1 in P2 by shortening its poly(A) tail. SPN-4 is an RNP-type RNA-binding protein whose interaction with the TCR seems required for the translation of glp-1. The translation of maternal glp-1 mRNA is suppressed in posterior parts of the embryo where POS-1 is very abundant, but it is turned on in the anterior half, where the concentration of SPN-4, which is abundant in both the posterior and anterior parts of the embryo, predominates over the abundance of POS-1. 
Interestingly, GLP-1 is not translated in oocytes or one-cell embryos, which do have abundant SPN-4 but not POS-1; thus there must be yet another type of temporal regulation preventing SPN-4 from activating translation at this early stage. APX-1 is a membrane-bound protein similar to the Drosophila Delta protein and the While the maternal apx-1 mRNA is present in nearly every blastomere", "label": 0 }, { "main_document": "type of the output. This immediately solved the problem: the translate() function was now fully operational. A random sequence of numbers was crucial to ensure longevity of the game. One option was to store a large array of values which we \"hard coded\" and access them sequentially. A better option was to import the random library into the code. After numerous attempts at importing the library it became evident that this was not possible; this was later explained to be due to an incompatibility with the SWET board's hardware. To overcome this setback we decided to utilize user input: users would input a sequence of ten numbers that would be run through an algorithm to give some pseudo-random numbers. The algorithm decided upon was our own simple implementation of a prime-number-driven, modulus-based calculation: To test this equation we passed in each possible value that could be input via the keypad. The results proved that there was an even spread of numbers. These numbers could be passed into an array which would then be accessed sequentially based on the round of the game. To get these values a simple method called readStart() was created. It read from Keypad A continuously until ten numbers were input. To store the values in an array the translate() method was called. To test the inputting of random numbers we simply inputted ten numbers and wrote a simple method to output the contents of the array. As the output of the array was random after each of the five rounds of testing, it was decided that the readStart() method was a success. 
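The report does not reproduce the actual prime-modulus formula, so the sketch below is a hypothetical stand-in: a simple affine map with an assumed prime constant, shown only to illustrate how ten keypad digits could be turned into one pseudo-random 0-9 value per round.

```python
# Hypothetical stand-in for the "prime number driven, modulus based
# calculation": the prime 31 and the multiplier 7 are illustrative
# assumptions, not the constants actually used on the SWET board.
PRIME = 31

def scramble(seed_digit, round_no):
    """Map one keypad digit (0-9) and its round number to a 0-9 value."""
    return (seed_digit * 7 + round_no * PRIME) % 10

def build_sequence(digits):
    """Ten user-entered digits seed one pseudo-random value per round,
    mirroring what readStart() and translate() are described as doing."""
    return [scramble(d, i) for i, d in enumerate(digits)]

seq = build_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
print(seq)  # ten values, each in the range 0-9
```

Because the scheme is deterministic, the same ten keypad digits always reproduce the same sequence, which is what makes the printArray()-style check described below a meaningful test.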
The method also functioned when a non-numeric key was pressed (e.g. A, B, C or D). An output of one of the testing rounds is detailed below. A test program, printArray(), was used to output the values stored in the array once they had been entered. The detailed brief suggested \"lighting up squares\" on a four-by-four grid, but we believed the usage of boxes was too basic. We wanted to make the user actually try and engage with the program. So by writing a random number of lines to the LCD screen we could make the user \"work\" and hopefully enjoy the challenge. The final aspect of the system was adding a user-friendly interface; we decided upon the following features, all implemented to display on the terminal window: To test the calculation of the winner we simply played the game. In one round Keypad A would win by a score of 6-4; in the next round Keypad B would win by the same score. In the case of a draw, the final test was to have a score of 5-5. All tests passed first time. The final solution worked well and met our specification. It allowed two players to pit themselves against each other and would reward the faster player. However, there were a handful of minor issues that arose. During the time between two rounds of a match being played, the LCD screen did not completely clear instantly. We are unsure", "label": 1 }, { "main_document": "the historical text of the film may instruct and become the accepted historical understanding of those who watched it. The film text acts as the literary story that Becker discusses and, as far as Becker is concerned, is a completely valid history. This final section offers a retrospective view of Becker, a look at secondary source accounts which detail Becker's work and complete the final piece in our quest to find out 'why' he is so significant in American historical writing. Milton M. 
Klein wrote specifically in 1985 about the impact of 'Everyman his own historian', which was given as a Presidential address at the Minneapolis meeting of the American Historical Association in 1932. Klein emphasized the impact that Becker's work had within the historical profession and names those in the profession who remarked favourably upon his theories, namely Charles Beard, Frederick Jackson Turner and J. Franklin Jameson. Klein understands that the reason for the great response he received from fellow historians was that it rejected the scientific forms we have discussed that had been in place by the 1880s. Klein concludes his positive evaluation of Becker by saying \"Becker's resounding repudiation of objective history ushered in an era of relativist historiography that has not yet run its course. If he himself was not the founder of the school, he nevertheless provided it with its fullest theoretical expression.\" Milton M. Klein, Everyman his own Historian: Carl Becker as Historiographer (The History Teacher, 1985) p. 101; ibid., p. 103; ibid., p. 106 Peter Novick is not as kind to Becker, most likely because his book is a history of the 'objectivity question,' and not a critique of it, and he questions the impact that Becker had on post-war historians, despite Becker's 1910 'detachment' essay. He does, however, label Becker, along with Beard, as the most \"influential interwar relativist,\" Novick, p. 106; ibid., p. 107; ibid., p. 104; ibid., p. 107 Both Klein and Novick admit that Becker was crucially important for the development of a new kind of history, primarily because of the positive effect his work had on others, and there are examples of this effect. 
A disgruntled secondary school teacher wrote 'Carl Becker's Modern History: New Roads Barely Trodden,' demanding that the \"flat formulaic prose\" This teacher, Mildred Alpern, believed that the classroom was exactly where the skills Becker espoused should be taught, concepts such as interpretation, which were valuable for high school students. Mildred Alpern, Carl Becker's Modern History: New Roads Barely Trodden (The History Teacher, 1985) p. 111; ibid., p. 112 Becker's concepts also affected the work of his contemporaries. Allan Nevins, for example, writes about the best form of history being one that fuses \"facts, ideas and literary grace in a single whole,\" Allan Nevins, What's the Matter with History (The Saturday Review of Literature, 1939) p. 123 Historian Cushing Strout writes of Becker's theories as controversial. He was controversial because he espoused a form of historical writing far removed from the existing rigid format, and anything that challenges a rigid format can only be controversial, but it is also vitally important. In response to 'Everyman his", "label": 1 }, { "main_document": "that more sodium hydroxide was needed to neutralise its citric acid. This may be attributed to an experimental error if the pH indicator is taken as a more accurate measurement. It is interesting to note that they all had the same amount of sugar but the varying acidities determined their tastes to a large extent. This means that it may be possible to alter tastes by altering the acidity levels of various juices. Lemon juice has a very high proportion of hydrogen ions in solution; almost 10 times as much sodium hydroxide was required to neutralise the lemon solution. Observations were made to see if any 'string like' suspension was formed after a few minutes. 
This part of the experiment was performed mainly to confirm the effectiveness of the measures undertaken to ensure the juices made were safe. The cloudy nature of juices is due to the presence of collected cell tissues and constituents. Most of these are colloidal and are removed from solution by destroying or hydrolysing the pectin, e.g. using the pectic enzyme pectolase. This allowed the finely divided, insoluble particles (cloud) to settle out, leaving a clear juice. (Pectin stabilises other components and its presence causes haze or browning.) Later attempts were made to stabilise the pectin. Pasteurisation stabilises the pectin, but not for very long, as was observed in the control. The most effective clarifier was pectic enzyme, which gave the 'clearest' juice after 1 week. Gelatin was correspondingly the best clarifier in terms of its ability to precipitate polyphenols, which prevents the formation of brown insoluble compounds and off-flavours during storage. However, the enzyme and other clarifiers were not tested together. It can however be deduced from the tests carried out that a combination of pectic enzyme and 4% gelatine solution would yield the best-quality product. Despite the fact that the apple juices were pasteurised, they all had varying degrees of mould after one week, indicating that the heat treatment carried out was insufficient to reduce numbers of micro-organisms to an acceptable level. To ensure a safe product is formed it is essential to: - wash hands and fruit thoroughly prior to juice expression. In this experiment we only used water and not the recommended 1% HCl solution, which may have contributed to the later formation of mould. - carry out adequate heat treatment of the juice. Pasteurisation may not have been carried out for an adequate amount of time or at a high enough temperature in the experiment.", "label": 1 }, { "main_document": "realistically, could only be implemented if cost effectiveness was not an issue. 
The bridge output voltage has already been defined by the following expression: Based on this expression, a shaft with a smaller Young's Modulus will create a greater voltage difference for a given torque. Similarly, if the supply voltage to the bridge (currently set at 9V) were increased, the sensitivity of the system would also increase. Conversely, any increase in bridge voltage may produce more heat, and hence the bridge will become more susceptible to temperature. Therefore the sensitivity of the system may be increased at the expense of accuracy. The accuracy of the system depends primarily on the sensor's ability to be insensitive to other loading effects, such as axial forces and bending forces. Due to the method of applying the torque to the shaft, there will be a resultant bending force caused by the hanging weights. Similarly, temperature fluctuations affecting the gauge resistance will also act as a source of errors in the measurement. However, in the four-gauge bridge, axial and bending forces are rejected because the effect of these loads is cancelled out by the gauges in adjacent arms of the bridge. Similarly, in terms of temperature fluctuation, since all four gauges will experience the same temperature, the bridge remains balanced and the error is eliminated. The overall system provides a linear relationship between the applied torque and the bridge output voltage, expressed by the following equation: For every newton-metre of torque applied, a bridge output voltage of 0.205mV is produced. Therefore, there is an overall sensitivity of 0.205mV/Nm in the designed system. The torque measurement system could be improved by upgrading some of the elements of the system; however, upgrading these parts results in a decrease in cost efficiency. The gain of the amplifier circuit could be increased further to improve the signal-to-noise ratio, which will produce more reliable results. 
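The bridge equations themselves are not reproduced above; the sketch below illustrates the linear torque-to-voltage relationship using the standard relations for a full bridge with gauges at 45 degrees on a solid circular shaft. Every numerical parameter here is an assumption for illustration except the 9V supply mentioned in the text, so only the order of magnitude of the resulting sensitivity should be compared with the quoted 0.205mV/Nm.

```python
import math

# Assumed shaft and gauge parameters (illustrative, not the report's values).
E = 70e9    # Young's modulus, Pa (typical aluminium)
NU = 0.33   # Poisson's ratio
D = 0.02    # shaft diameter, m
GF = 2.1    # gauge factor (typical foil gauge)
V_S = 9.0   # bridge supply voltage, V (as stated in the text)

def principal_strain(torque):
    """45-degree principal strain on the surface of a solid circular
    shaft in pure torsion: eps = 16*T*(1+nu) / (pi*E*d^3)."""
    return 16 * torque * (1 + NU) / (math.pi * E * D**3)

def bridge_output(torque):
    """Linearised output of a four-active-arm bridge:
    V_out = V_s * GF * eps."""
    return V_S * GF * principal_strain(torque)

# The relationship is linear: doubling the torque doubles the output.
sens = bridge_output(1.0)
print(round(sens * 1000, 3))  # sensitivity in mV/Nm, roughly 0.2 here
```

With these assumed values the sensitivity comes out at a couple of tenths of a millivolt per newton-metre, the same order of magnitude as the 0.205mV/Nm quoted, which also shows why the expression argues for a smaller Young's Modulus or a higher supply voltage when more sensitivity is wanted.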
The type of strain gauge could be upgraded to a type offering a greater gauge factor, which would increase the sensitivity of the measurement system. In addition, the measurable strain of the gauges could be increased with different gauges, and hence the range of operation could be improved. The bridge output is related to the input torque by the following equations: The latter indicates the system sensitivity with respect to the design parameters which have been chosen. Therefore, it is possible to increase the sensitivity by reducing the Young's Modulus of the material, and increasing the Poisson ratio. However, the principal strain is also related to the torque by the following: Therefore, if the Young's Modulus were reduced in order to increase the sensitivity, it would have a negative effect on the range of the torque cell, as the strain produced would be greater for a given torque, and the maximum torque would be limited. Depending on the requirements of the system, it is possible for the shaft properties to be adjusted to allow for a greater measuring range, or for greater sensitivity.", "label": 1 }, { "main_document": "such as OFSTED inspections and the national curriculum; and have generally endorsed the Conservatives' impregnation of the welfare state with markets. The imminent introduction of foundation hospitals and variable top-up fees within tertiary education in Blair's probable third term in office is just the latest example of this. Lastly, Blair's espousal of New Right welfare reforms can be seen in the contraction of the social insurance system. 
Whilst the latest example has been New Labour's narrowing of the eligibility for incapacity benefit in January this year, their near-continuation of the Conservative policies in pension schemes has been a theme running throughout Blair's tenure, with the further encouragement of private provision and simplified means-testing and the refusal to revive the link between basic pensions and the rise in earnings. The opinion of Frank Field (the first minister of welfare reform) that success requires the \"reduction of the welfare roll, not redistribution\" (Driver & Martell, 1998, p.179) is as such quintessentially Third Wayist. Having said this, it seems foolish to argue that the Third Way is merely the latest manifestation of Thatcherism. True, New Labour has come to embrace some of the key assumptions of the New Right; however, it has retained a communitarian focus and has been mildly redistributive (Hills & Stewart, 2005), with Brown producing policies such as free pensioner transport in this year's budget and levying a windfall tax on public utilities in his first. An illuminating example of the different emphases in approach of Labour and its New Right predecessors can be seen in a comparison between Nigel Lawson's policies to alleviate unemployment in the 1980s and Gordon Brown's New Deal today. While the former aggressively tried to reduce wages in order to lure in businesses, the latter has adopted a more positive approach in attracting businesses and workers alike, through creating a more educated workforce and making \"work pay\" through policies like the minimum wage and the working family tax credit. On the other hand, Giddens appears too sanguine in heralding the Third Way as a renewal of social democracy. 
The welfare policies proposed by the Third Way seem to be a near futile attempt to marry, if not dichotomies or polar opposites, then concepts that do not seem by any stretch of the imagination mutually supportive: rights and responsibilities, social cohesion and economic dynamism, social justice and fiscal prudence, fairness and enterprise etc. When faced with the impossibility of this task, New Labour have opted to make \"hard choices\" (Driver & Martell, 1998, p.179) - which invariably translates into producing welfare policy with a Thatcherite complexion, as shown above. Thus, while the Third Way does not merely constitute substantive neo-liberal policies doused in communitarian rhetoric, it does represent a dubious and only partial synergy of New Right assumptions and social democratic sympathies. New Liberalism was particularly prominent in the early twentieth century- a period where One can identify three main areas where New Liberalism as expounded by L. T. Hobhouse in his famous book \"Liberalism\" looks remarkably like the ideas of the Third Way. Firstly, Hobhouse argues that while economic competition", "label": 1 }, { "main_document": "Problems. In this paper I have critically addressed the two principal frameworks which incorporate the Imperfect Information assumption to explain CR. While Jaffee and Russell (1976), the first model to introduce this hypothesis, emphasised Moral Hazard problems, Stiglitz and Weiss (1981) developed a model which relies on Adverse Selection considerations. Once these models were introduced, I answered the question as to how the imperfect information assumption explains CR in a way which synthesised both frameworks. As lenders are unable to distinguish from an In this situation, the interest rate which maximises the expected lender profits may differ from the market-clearing rate. If there is an excess demand for loans at the former interest rate, a situation of CR may exist. 
Nevertheless, the CR hypothesis as a consequence of imperfect information was criticised by different researchers, particularly from an empirical point of view. It has also been emphasised that the existence of contractual mechanisms such as collateralised loans may help to differentiate borrower characteristics in a way which avoids asymmetric information problems. I have argued that differences among borrowers are not unidimensional, and a limited set of contracts may not generate a symmetric information scenario, since some degree of uncertainty even after collateral is a probable outcome in credit market contracts. I also indicated that empirical CR measurements face important difficulties, since they involve To conclude the analysis, two empirical tests on CR were introduced, adapting the Berger and Udell (1992) framework to the Argentine case. The evidence, although not conclusive, suggests the existence of CR among those less-collateralised borrowers in Argentina between 1996 and 2001. Further analysis on the evidence of CR in Argentina incorporating different data for testing CR via econometric techniques, the empirical study of the effects of collateral on Asymmetric Information problems, and the analysis of sticky prices (e.g. interest rates) as a consequence of imperfect information are interesting extensions of this paper. * I would like to thank This particular case occurs when income is associated with a stochastic variable, and the realized income is less than the capital plus interest agreed in the This situation could occur to honest agents if they are \"unlucky,\" since they did not Nevertheless, this consideration does not change the main results of the model. While honest agents have a high It is also assumed that lenders do not have any other costs. 
It is clear that as the value of the term Furthermore, it is possible to assume that there is no default at all for certain values of It means that two projects with different probabilities of success and different outcomes will have the same mean return. Formally, for two projects Nevertheless, the mean expected value of both projects is assumed to be the same (i.e. For further discussion, see Jaffee and Stiglitz (1990, p.857). As the firm's return variance increases (i.e. the project risk), an imaginary convex combination of points in the This situation is observable through the short dash lines in Figure 4. In this scenario, it is also worth noting that the firm's expected benefit is also increasing. In a competitive", "label": 0 }, { "main_document": "the goddess Athena, rather than Apollo or even Zeus himself, who leads everything into the perfect harmony, itself rather dashes cold water on this assumption. The relation between virginity and marriage also remains ambiguous, in that marriage is the way to produce offspring, and without this reproduction there is none of the future prosperity in Athens that is wished for at the very end of the trilogy. Moreover, the virtue of marriage is supported by Apollo, who declares that 'marriage is... Fate itself, stronger than oaths, and Justice guards its life' (215-6). As we have seen, tracing the imagery in the trilogy we can get a vivid picture of how things change as the story moves forward, and what ideology and implication may possibly lie hidden behind the scenes. But the important thing to remember here is that the argument I have developed in this essay is not necessarily the ultimate answer to how we should read these plays and what we should gain from them, since the biggest advantage of imagery may be that it allows numerous different interpretations for readers and audiences with different perspectives. There is no way to determine if it is right or wrong. 
We have examined how the Aeschylean imagery helps us to grasp the storyline as the movement from the savage and uncivilised circle of revenge to a more intelligent, sophisticated and humane society, and indicates the possibility that the trilogy embraces a kind of anti-feminism, but at the same time it is entirely possible that one might reach the totally opposite conclusion. In one way or another, nevertheless, this lively, graphic language of Aeschylus, which we might even be able to call 'the art of ambiguity', definitely has a huge influence on our reception and perspective of his plays, and it contributes to making his stories more attractive by greatly deepening the structure of his drama.", "label": 0 }, { "main_document": "bonds, in effect of high liquidity of money. People want to buy Japanese bonds, and save instead of keeping liquid money. There is a high amount of liquid money in the economy because consumer goods are very cheap, so less income is needed to cover the basic consumer spending of an individual; on top of that, the Japanese culture of saving rather than spending makes things even worse. In consequence, to encourage people to spend more, the Bank of Japan is keeping interest rates as low as possible, so that the cost of borrowing and the reward of saving are low. The sites were quite useful in providing information. Some websites, like BBC Other websites lacked this feature. BBC has an exquisite feature as it allows users to view other articles written about the same issue in different intervals of time. Thus, the user can find information about the issue released in the past. There weren't significant grammatical errors and all websites were up to date. One feature that caught my attention was that the data in Ludwig Von Mises Institute website In contrast, only on the Ludwig Von Mises Institute website was there information about who was providing the information: Frank Shostak, a Ph.D. scholar at the Mises Institute. 
All four websites had search engines and help links which were very useful. There was just one thing that disturbed me on one of the websites, which was the layout and the colour scheme of the CBS website. The information was too tightly packed and the page layout badly organized; also, the choice of colours was poor, because a bright blue with a black background disturbs the eye quite a lot.", "label": 0 }, { "main_document": "is depleted, making its life expectancy high during the Nordic winter (Nilsson, 2004; Nilsson, 2001). However, the unfortunate drawback of this system for the organism is the loss of high energy carbohydrate (Nilsson and Lutz, 2004). This apparatus appears to be one of the most important survival strategies of the Carp but what happens at the neuronal level? During anoxic conditions, the animal's activity is reduced as mentioned above but not shut down. The reduction of the electrical activity (which accounts for more than 50% of neural energy consumption in vertebrates) contributing to a reduction in ATP consumption is believed to rely on 3 principal neurotransmitters and neuromodulators: Microdialysis measurements have shown that This neurotransmitter inhibits transmission in the brain and it is released in the Crucian carp at moderate levels to reduce neuronal activity such that ATP consumption is reduced but the organism maintains a certain level of activity. The second neurotransmitter that takes part is glutamate. It appears that this major neurotransmitter in mammals and fish during normoxia is highly toxic in the anoxic brain. The Carp avoids this anoxic excitotoxin by reducing its release (Nilsson and Lutz, 1997; Nilsson and Renshaw, 2004). Finally, adenosine appears to play an important role in neuromodulation of the Carp brain. This has been shown by blocking adenosine receptors, which leads to an increase in the rate of ethanol release to the water (Nilsson, 2001). 
Being the result of the net breakdown of phosphorylated adenylates (such as ATP, AMP and ADP), it acts to produce an increase in cerebral blood flow and to inhibit neuronal activity and the release of excitatory neurotransmitters (Nilsson and Lutz, 1997). As a consequence of metabolic depression, Na+/K+ ATPase activity (which is used to restore ion-gradients constantly disrupted by electrical activity) is also reduced (Johansson The Crucian carp also maintains low extra-cellular K+ levels and any harmful rises in intracellular Ca2+ are avoided. It is also believed that the voltage gated channels of those in have reduced permeability but experiments have so far given negative results for this supposition (Nilsson, 2001; Nilsson and Lutz, 2004). These reductions appear to be a second line of defence (Nilsson, 2001). Figure 3 summarizes the reductions occurring. These reductions result in vision and hearing being (Johansson However, responses to anoxic environments are moderate in the carp and they have much more severe effects in turtles. In contrast with the Crucian carp, and appear to survive anoxia for about 3 months at 3 The turtles employ very similar strategies to survive anoxia to that of but the outcome of those strategies differs greatly. The major difference is that they become comatose (Nilsson and Lutz, 1997; Nilsson and Lutz, 2004). This is achieved through brain metabolic depression and glycolytic depression. As mentioned above, the freshwater turtles clearly differ from the Crucian carp in the drastic suppression of the use of ATP (Jackson, 1999). First, the turtle is an ectotherm (cold blooded animal) and it therefore has a metabolic rate that is low compared to an endotherm of similar size (five to ten times lower even at the same", "label": 0 }, { "main_document": "Watching television has become an increasingly popular leisure activity in recent times. 
It has become especially popular among children with more children's programmes available, and the introduction of entire channels dedicated to them. As well as violent and aggressive images that some children's programmes contain, children are also exposed to adult programmes that are violent and aggressive in nature. As a result of this viewing, there are increasing concerns about how these aggressive images affect children's development and whether they encourage children to be more aggressive. Psychological research provides an attempt to address these concerns. One area of relevant research is the proposition of theories to explain ways that children may learn to be more aggressive through watching aggression in films and on television. One theory about the learning of aggression was proposed by Huesmann (as cited in Josephson, 1987) and was concerned with the formation of social scripts. Social scripts are ways of solving problems and are learnt by observing the behaviour of others. The following of a script is almost automatic and tends to be spontaneous. In the case of watching aggressive television, children who watch it frequently will acquire social scripts about aggressive behaviour, and with the right retrieval cues in a particular situation will exhibit aggressive behaviour themselves. Berkowitz (as cited in Josephson, 1987) also provided a theory that could explain how aggression could be learnt. He hypothesised that networks are produced with associative pathways linking feelings, emotions and action tendencies and these pathways can be strengthened through activation. This strengthening of pathways is called priming, and he proposed that television violence can prime pathways so that associations can be made between objects or situations and aggressive feelings and actions, even if the objects or situations are completely neutral. 
These theories are only useful if they can be applied to experimental research to answer the question of whether television and film violence is harmful to children. Josephson (1987) attempted to test these theories by investigating the effects of television violence on the aggression of second and third grade boys. The boys' classroom teacher was asked to give measures of characteristic aggressiveness for each boy and the boys were then shown a television programme with violent content. The programme included images of people using walkie-talkies because these would be used as a neutral retrieval cue later on. The children were then asked to take part in a game of floor hockey, but before the game they had to give a pre-game interview that was recorded on either a walkie-talkie or a radio. The boys' subsequent aggression level in the hockey game was then recorded. It was found that the violent television content only increased aggression levels in those boys that had been given high scores of characteristic aggressiveness by their teacher. Their increased aggression levels were immediate and were only observed in the pre-game interview and the first three minutes of the hockey game. Also, these boys were more aggressive if they were given the retrieval cue in the form of a walkie-talkie in the interview. The results support", "label": 1 }, { "main_document": "Many students are having to accept living with large amounts of debt despite recent research showing that this debt may be having a detrimental impact on their mental health. With the forthcoming increase in tuition fees, student debt reaching an all time high and reports indicating that mental health problems are significantly more prevalent in the student community, psychologists are beginning to look at possible links between heavy financial burdens and students health. 
The uptake of student loans is at its highest since their introduction and the use of student overdrafts and credit cards is constantly on the rise. As a result, student debt has become one of the biggest concerns facing students in higher education. With increased student fees arriving in September, psychologists are eager to assess what impact debt may have on students' mental health. Psychologists at Middlesex University and the University of Sussex have been researching whether students' worries about their finances and debt have an impact on their mental health. In their study, students were asked to rate statements such as \"I worry about my financial situation\" on a scale indicating how much they agreed with the statement. This allowed the researchers to assess attitudes towards finances. They were also asked questions that looked at various aspects of mental health such as happiness, depression and vitality. The results of the study, published in the British Journal of Health Psychology, demonstrated that financial worries do have a significant negative impact on mental health. These results confirm that students who are concerned about their financial situation are more likely to suffer from mental health problems. The researchers also compared these results with data from the same questions asked of students studying in Finland. This allowed them to examine whether there were differences in attitudes as a result of differing funding policies. British students pay tuition fees and don't receive a grant, whereas Finnish students do not pay fees and receive both a grant and housing supplements. Results showed that there were significant differences between the British and the Finnish students. It was seen that the British students reported greater levels of financial concern and that they reported significantly more mental health problems. 
Research by MORI found that almost 40% of British University students feel that being in debt is the worst aspect of student life. This is no wonder, as the latest figures suggest that in 2004, the average student debt rose to a staggering More worrying still is that 56% of students report that they are unable to handle their finances and one in 20 students report that they have serious financial problems. Highlighted in the published paper is the fact that government policies appear to contradict themselves, on the one hand promising to reduce inequity in health and on the other creating policy that may seriously affect students' mental health. Recommendations made by the psychologists suggest that the government needs to take greater responsibility to ensure students' positive mental health and well-being. With student debts estimated to reach an average of The past few months have", "label": 1 }, { "main_document": "be expected to trust his son and this is what leads to their downfall. The lack of relationship between the two is emphasised by Theseus' description of his confrontation with Sinis, a bandit he claims to have killed, and the idea of the rocks being ravaged by the sea See The final scene of the play, when Hippolytus returns, bruised and battered, is an exceptionally poignant scene, as it shows the relationship between the father and the son as it should be. There are two strong Aristotelian concepts in this final scene between the two. The first is the idea of 'recognition', which comes when Theseus realises his 'hamartia' in cursing Hippolytus with the curse of Poseidon and also shows the tragedy of how 'the prayers that become reality are the deadly ones' The second is the concept of a 'peripeteia', or a reversal, which means that when something happens that should have one effect, it has the opposite effect. 
In this case, Theseus was supposed to feel that justice had been done when his curse worked, when in fact, once he realises the extremity of what has happened, he feels guilt and pain. For his part, Hippolytus is able to absolve Theseus of blame before his death, as he tells his father: Like Theseus, Hippolytus has also had a recognition, as he 'rediscovers Theseus as a father' Their character difference, which made their relationship so disastrous, will still be there, but they have now finally learned to accept the other for who he is, even though it is too late. One of the most intriguing aspects of the opening play of Aeschylus' 'Oresteia' trilogy is that concerning the figure of fatherhood. This is because, even without such a figure performing an active role in the play, the theme of fatherhood is extremely important and it comes across in several moments of the play in different ways. What is most interesting about this is the idea that a father figure is someone so powerful and influential that he does not have to be onstage constantly to draw the attention of the audience and, in some ways, dominate the action of the play. Indeed, the only male figure who could be regarded as being close to a father figure who is actually physically seen in the play is Agamemnon, yet even he does not appear until late into the action, and even then he manages to dominate much of the action on the stage. In the 'Agamemnon', one of the greatest conflicts for a father who, like Agamemnon, is also a king and a warrior comes to the forefront of the play. 
This is a conflict in which the two different roles are set against each other and the character has different loyalties to the 'polis' The Chorus, when discussing this, initially show Agamemnon's unwillingness to commit such an act with these words: City Family Agamemnon's instinct as a father with love for his daughter is clearly illustrated by the Chorus as they relate that he does not want to sacrifice his daughter, but that he", "label": 1 }, { "main_document": "have an impact on the reported profit for the entity. In the case of the Radstone Technology PLC annual report, the depreciation values are clearly detailed in the Notes to the Financial Statements [24], and the method used (a While this method will clearly not be absolutely accurate, it is a reasonable approximation and the estimated useful economic life values appear to be reasonable, suggesting that the depreciation adjustments made are fair. These will help to provide a valid and accurate impression of the entity's financial position by reflecting the fact that fixed assets are not likely to be worth the same amount from year to year, and hence the reported financial value of the company is altered accordingly. Another useful adjustment takes account of the fact that some services provided during the accounting period will not yet have been paid for. Some money will therefore need to be allowed (accrued) for these services, leading to an entry in the financial statements as \"accruals\" for the year. In the Radstone Technology PLC annual report for 2005, this can be found on page 47 [25] and is listed under Creditors: amounts falling due within one year. 
This shows that account has been taken of these values and the profit has been adjusted accordingly; however, there is no explanation of where the figure for accruals has come from or how it has been calculated, so it is not possible to say how accurate an estimate it is, and therefore it is not really possible to conclude how fair the value is. The Notes to the Financial Statements state that applicable accounting standards in the UK have been adhered to when preparing the statements [26], suggesting that they will provide a valid comparison when related to those of one of its peers. This is valuable as many financial measures included in the statements are far more useful when used for peer comparison than as stand-alone figures. The group finance director acknowledges the requirement to adopt IFRS (International Financial Reporting Standards) in the financial review and states that Not only have relevant standards been adopted, suggesting that there is at least some degree of regulation in the way the accounts were constructed, but they have also been examined by an external auditor, who has certified that the accounts present a This alone does not necessarily indicate that the accounts are indeed \"true and fair\"; however, external auditing is good practice and provides some reassurance that the accounts have been satisfactorily prepared and, combined with the adjustments and adoption of standards detailed above, suggests that the financial report should indeed present a reasonably true and fair view of the entity's performance. Given the results from the ratio analysis, and the assessment that the values are reasonably reliable, it seems that at the financial year ending 31 The share price is increasing over time, as are the investment ratios in general. 
There are no indicators evident at this stage which would suggest the entity's performance is going to seriously decline in the near future, implying that", "label": 1 }, { "main_document": "standards due to staff changes. This would reduce the number of returning customers if they've had a bad meal. Increased use of the internet. The club's website is poor, and might affect a customer's perception of the restaurant. Competitor (Bicester) is opening more widely to the public. KGC's commitment to members may attract ex-Bicester members. Grey market growing and influencing eating trends. KGC can capitalise on its senior members, and target the restaurant towards them. Environmental issues are important to the consumer. Commitment to the environment can reap long term rewards in the way of additional customers, and can cut costs. Smoking ban. Members currently smoking in the Spike bar will now have to go outside. This may reduce use of the bar. Proposed meal tax. This will either increase prices or reduce margins, both of which would affect trade. High \"actual\" inflation. This reduces people's disposable income, particularly that of the middle classes. This may reduce the amount people spend in the bar and restaurant. Eating out market dominated by pubs. This means the competitor set is growing. The following are recommendations for Kirtlington Golf Club to consider. The recommendations consider each element of the marketing mix. These elements are tools used by marketers to develop objectives for a company (Bowie and Buttle, 2004). The 7 elements of the mix are product, price, place, promotion, people, physical evidence and process. The elements are used in conjunction with each other to develop the hospitality offer for the target markets. Afternoon tea should be offered for the summer period. The afternoons are a period of excess supply (Appendix 6) for the restaurant. 
An offer such as this would encourage both members of the club and members of the public to use the restaurant at this time, and generate additional revenue. Barbeque evenings. At present, the restaurant is not open in the evenings, but a newly built terrace is the perfect venue for a barbeque in the summer and the club should make full use of it. These evenings would develop more of a social side to the golf club, but also attract the public, looking for a good value, casual evening out. This would also make people more aware of the restaurant itself, and hopefully generate further revenue due to this. Christmas lunches. The period of time running up to Christmas tends to be fairly quiet for the restaurant as \"fair weather\" golfers no longer frequent the club. For this reason, a Christmas lunch would be a suitable option. It would attract the members of the club who may already dine there for lunch, and small local businesses looking to celebrate. \"Balls and brunch\" price bundle. This discount would offer weekend golfers the opportunity to eat brunch in the restaurant during the quieter period of the morning (9-11am), with a bucket of balls for the driving range. As the bucket of balls only costs \"Societies and wine\" price bundle. At present, societies have the opportunity to order a 1, 2 or 3 course meal, and all drinks are charged additionally. We propose that any", "label": 1 }, { "main_document": "birth of a new nation, the United States of America.\" Therefore, it is important to understand that when the primary objective of the war changed from the defence of rights to that of fighting for independence, a huge step towards realizing independence had been taken. Pauline Maier, John Richard Alden, Ibid, p. 9 Countryman, p. 109 Ibid, p. 73 Ibid, p. 73 However, there was a vast difference in declaring independence and actually achieving it - 'It was one thing to assert independence. 
It was another matter to attain it.\" The Americans still had to defeat the might of the British army - the best trained and equipped in the world. The first few years of the war were characterized by indecisive battles in which neither side made progress. The Americans certainly did not look like achieving their independence in the near future. However, the turning point of the war and consequently the moment when independence became inevitable was undoubtedly at the battle of Saratoga in autumn 1777 in which the Americans defeated the British army and forced the surrender of almost 5,000 troops. Although the war did continue for another five years, American victory was now almost totally assured and independence became inevitable. It was the battle of Saratoga which 'rescued the American war effort from what looked... to be an inevitable and humiliating disaster, without Saratoga the Americans might well have sued for peace.\" Its importance cannot be overstated for a large number of reasons. It was the first significant American military victory and showed that Britain could be defeated in open battle. This further advanced the independence cause among the people because it proved to be a huge propaganda boost. It also damaged the English appetite for war and meant that from here on, the British were less willing to commit so many troops and resources to the war. Finally, and most importantly 'it propelled the French into a long-contemplated declaration of global war on Britain.\" This meant that America now had the military strength to match its ideological fervour and therefore could over-power the British army in the war and finally win independence. Ibid, p. 90 Robert Harvey, Ibid, p. 282 Therefore, in conclusion, independence was a long and complicated process that took decades to achieve, only becoming inevitable with the military victory at Saratoga. 
It started with a gradual change in cultural identity as the Americans started to feel less attached to a Britain which exerted less influence over the colonies. The Seven Years' War and its effects were the next step as Britain began to reassert itself over the colonies, firstly with the deployment of troops, which greatly angered the Americans, and secondly and more importantly with the constant attempts to tax the colonists. This led to resistance and eventually to war. However, the fact that the Americans were at war with Britain did not signify that independence was inevitable. Far from it - as they were merely fighting to defend their rights. The objective of the war did gradually change and independence was", "label": 1 }, { "main_document": "fire which occurred on the asteroid would not have happened in the real world as an asteroid's gravitational field is too weak to hold an atmosphere that might contain gases capable of causing a prolonged fire. Consequently the viewing public would leave believing they know what an asteroid is like when in truth they have only a false picture. It accurately depicts and explains the effects of asteroid collision with the Earth, especially the effect asteroid size has on the impact. Books tend to do a better job of science communication: they contain a broader spread of information and give the public a full picture. An interesting case is \"The Hammer of God\" by Arthur C. Clarke; this story is set in the future and follows the fortunes of one Robert Singh, who captains a spacecraft assigned the mission of altering the flight path of a large asteroid in order that it will avoid collision with the Earth. Although this is a work of fiction, it contains much about the history of astronomy and the specific branch of the study of asteroids and consequently is a mine of knowledge. (1) Books presume very little on the part of the public they are writing for as any assumptions made are usually explained within the text. 
However, in this case it is \"Thirty thousand tonnes of TNT...more than the total energy of all nuclear weapons on Earth\"; this phrase from the book Rain of Iron and Ice by J. S. Lewis describes the potential destructive force of a 250m diameter asteroid moving at orbital speed. (3) This indicates the authors' presumption of public knowledge of the strength of explosives and also of an understanding of prefixes such as 'mega' or 'giga' which occur throughout the text. The BBC website has a question and answer information page on the subject of asteroids; the questions are posed by the public and the answers by Duncan Steel, the Vice President of Spaceguard. Here the information is clearly and accurately presented. The sentences are short and the jargon minimal, so it is open to all types of audience. This page accurately portrays the risk for what it is and makes an excellent case for the expansion of the Spaceguard project or others like it. By its nature it has minimal presumptions about public knowledge of asteroids. Many of the internet sites examined presumed public knowledge of explosives, as the effects of meteor impacts are described in this way. For example the Dinosaur Extinction page at The consequences of a collision are directly related to the size, density and composition of the impacting object, as well as impact velocity and location of collision e.g. land or sea. If the asteroid is particularly small, it will burn up in the atmosphere before it reaches the ground. If an asteroid 1 or 2 km in diameter travelling 30 km per second were to impact, it would destroy an area the size of California. The resulting dust particles would enter the atmosphere and block out the sun for possibly up to a", "label": 1 }, { "main_document": "the last reason. To be of any real use to Neuroscience, this altruism would need to be a purely emergent behaviour from an ANN similar in science and architecture to a mammalian brain. 
While this is almost certainly possible, it is something completely unlike anything developed in any higher life form. In fact, the only examples in biology known to act so altruistically are social insects - leading us back to Swarm Intelligence and away from the top-down approach to robotics. So the chances of us being able to do anything useful with top-down research in robotics in the foreseeable future are very low. This is the second reason why I propose that the Grand Challenge is flawed, and it leads directly from the flaws set down in the first. It is simply unrealistic to expect the two strands of research to progress in such a way that they are mutually beneficial to both sides of the project in the foreseeable future. As criticism alone is worthless unless it is constructive, I shall not end this argument here. I shall instead propose two new Grand Challenges to take the place of this existing one - one doing computational modeling of brains and of related systems for purely neuroscientific reasons, the other working towards robotics which would be useful in the home, for instance. Using this method, reasonable and attainable goals could be set and a realistic roadmap for each challenge's progression could be laid down, and there would be no pretense of them assisting each other. Some people may say that the first challenge I propose, being purely for the purpose of Neuroscience, is not really a challenge in computing. But I deny this outright. When we look at Neuroscience, what is to be seen but the greatest reverse engineering project ever to be undertaken? The brain was nature's answer to computers, and thus those who are trying to work out how it works are but Computer Scientists of a different persuasion. The subject of Biology covers everything from the smallest bacterium through to Gaia in her entirety. 
Maybe it is time that the two sides of computing united towards a common purpose, without crossed-purposes muddling the relationship.", "label": 1 }, { "main_document": "The Black Death of 1347 to 1352 and the subsequent outbreaks of the plague were 'the most devastating natural disasters ever to strike Europe' Although it is impossible to calculate the exact death toll of the Black Death, as there is limited evidence, 'to maintain that one European in three died... cannot be wildly far from the truth' As there were a number of social, economic and political changes that occurred around that time, the Black Death can be viewed as a turning point in the history of Europe; F. A. Gasquet believed it to have formed 'the real close of the Medieval period and the beginning of our Modern age' On the other hand, there are worries that too great a significance has been placed on the impact of the Black Death. Philip Ziegler, for example, argues that 'in the long run things would have followed the same course, even though there had never been a plague' Herlihy, David, Samuel K. Cohn, Jr. (Cambridge, Mass., 1997) p.17. Ziegler, Philip, (Harmondsworth, 1998) p.231. Gasquet, F. A, Ziegler, There were a number of economic changes that occurred as a result of the Black Death. As Ziegler observes ' one third of a country's population cannot be eliminated... without considerable dislocation to its economy and its social structure' The initial response of many was to flee from the plague. Others preferred to indulge in life's pleasures rather than work. This, along with the high death toll, meant that posts went unfilled and services unperformed. On the other hand, more jobs were created, both in towns and in agriculture. For example, gravediggers and physicians were in greater demand than before. Also, there were more jobs available on the land; where before there had been an excess of labour, there was now a shortage. 
This had a major effect on the established system as 'labour began... to understand its value and assert its power' Workers could demand higher wages and better conditions because, if the landlord refused, they could easily find work elsewhere. Ziegler, Horrox, Rosemary (ed and trans.), Thorold Rogers suggests that the ' Black Death was a stimulus towards... the disintegration of the manorial system' Villeins began to feel that their positions were unfair as they could see the high wages of those not bound to their lord by obligation. The 'scales were... tipped against the land owner' Laws such as the Statute of Labourers were passed to protect the landlord by checking increased wages and the free movement of labour. These laws were largely ignored. Landlords also tried to maintain power by stepping up the level of fines in their courts. As a result, 'relations... became more confrontational' In this way, it can be argued that the Black Death caused the Peasants' Revolt of 1381. As Ziegler maintains, 'if there had been no Black Death, tension and bitterness would never have risen by 1381 to the level that it did' The situation after the outbreak of the plague highlighted existing grievances of the peasants and showed up the flaws in the existing system.", "label": 1 }, { "main_document": "the creation of enhanced welfare (Mansfield and Milner 1999, 600), and finally turned out to be a 'stumbling block' rather than a 'building block' to achieve global liberalisation (Panagariya 1999, 493). The second question raised from this phase is something to do with the role of the United States in promoting multilateralism. Although the successful conclusion of the Kennedy Round (1967) ostensibly signified the golden age of free trade after the War, world trade has intermittently revealed controversial problems due mainly to the decreasing economic power of the United States. 
Indeed, many scholars have stressed the hegemonic role in shaping the multilateral international economic order, since Kindleberger demonstrated the importance of unchallenged leadership in the name of 'hegemonic stability theory' (Kindleberger 1973). Krasner (1976) elaborated that a hegemonic state in an economically ascending position is likely to engender global openness. Crone (1993) also principally took into account the power differential between a hegemonic state and a non-hegemonic one in forming the international co-operative regime. More recently, Gilpin emphasised again that one or more core political entities should play a decisive role in pursuing political integration (Gilpin 2001, 356). Historical evidence is likely to enhance the explanatory power of the assumptions that these 'hegemony' theorists have made. In particular, despite the overall trend of export increase caused by a series of tariff reductions, the export share of the United States in the world total has continuously decreased. While the exports of the European Economic Community (EEC) and of Japan increased from 22% and 2% in 1955 to 39% and 10% in 1988 respectively, the US share of world trade strikingly decreased from 19% in 1955 to 12% in 1988 (Walters and Blake 1992, 17). This indicates that world trade can no longer be maintained by the unilateral leadership of the United States, and that no alternative superpower replacing it has yet emerged. In other words, the increasing trend of regional grouping shows the attempts of developing countries to frame their own solutions to international economic problems such as gaining greater export markets. In particular, the endeavours to pursue regional autarky with no dependence upon an external power were decisively facilitated by the breakdown of the Bretton Woods system (Gilpin 2001, 359). 
In a different vein, liberal multilateralists later joined the criticism of the United States for its lack of vision and of a programme for returning to its initial commitment to multilateral trade liberalisation (Bhagwati 1993, 45). For example, as his own alternative programme, Bhagwati suggests a 'grand finale' scenario envisaging trilateral integration comprising the NAFTA including the Latin American economic area, the Asian bloc including even South Asia, and the EU. It was with the arrangement of the NAFTA in 1992, examined below, that the United States incurred severe censure from both hegemony theorists and multilateralists. The establishment of the NAFTA was perceived as a proclamation by the advocate of multilateralism manifesting the re-orientation of US trade policy. It was, at the same time, a critical turning point signifying its willingness to pursue bilateral free trade agreements unless the lingering multilateral trade negotiation in", "label": 0 }, { "main_document": "to browse it in Internet Explorer or Firefox. That is why I decided not to attach it on a CD. The prototype is usability and functionality driven, though it has not been designed professionally. I have designed the left-hand side menu, which is visible all the time and helps in page navigation. In the menu design I used Gestalt laws to group the links. On the home page I decided against placing the photo of the MIS building (I did not find this picture attractive, and as the user is in the building he already knows what it looks like). All the information has been divided into sub-pages and removed from the main page. Instead, a general search engine has been added. Maps of each floor can be displayed. In the final system it should be possible to click on a room and get information (which will be displayed below) about whose room it is and what modules take place in it. This is a map of the ground floor. This should help in more detailed searches. 
This is a backup in case the user is not sure about the name spelling etc. Information on what the rules are and where to submit coursework and assignments. In case of any problems with using the system, the user should get some instructions.", "label": 0 }, { "main_document": "Licence has been part of the law since the Dangerous Wild Animals Act of 1976. For Wild Boar produced in an extensive, free-range system, the only requirements are that they have enough land to roam and that it gives them nutritional satisfaction. This can be a very large area, as in their natural wild habitat they enjoy moving over a km per day (British Wild Boar, 2006). The area given over to the Boar must contain wood and scrubland, as the Boar, in particular the females, enjoy the security that thick cover gives them. Also, during the farrowing period, the females form nests for themselves that they line with leaves. A large benefit of employing the extensive, free-range system is that the amount of labour required will be lowered significantly. Indeed, 1 man per 70 sows is needed in an extensive system, compared to 1 man per 60 sows in an intensive system (Scottish Agricultural College, 2006). Employing an intensive, semi-housed system for the production of Wild Boar gives much faster rates of production. The Wild Boar are not given the same area of land to cover as those in an extensive system, and normally they have a lot less tree cover; however, they are given access to indoor housing and can therefore choose between spending time inside or outside. It is possible to keep Wild Boar completely indoors, but at the moment there is very little consumer interest in this product (Harmony Herd, 2006). When kept in an intensive system, Wild Boar sows are often fed on a mix of cereals and ad lib vegetables. The young are weaned onto ad lib vegetables with a specialist grower ration. 
When keeping Wild Boar in an intensive system, care has to be taken in the formation of each group of Boar kept together. A breeding group can comprise 5-20 sows and 1 mature male, but the addition of new sows can cause fighting. Also, it can sometimes be wise to castrate mature males if they are to be kept together, because they can fight and cause fatal injuries. One aspect of the production of Wild Boar that applies equally to both systems is the provision of adequate fencing. In order for a farmer to acquire a Wild Animal Licence, the fencing of his land must reach a certain standard (Basildon Council, 2006). Extra high tensile fencing must be used and must stand 1.8m high and extend 0.5m below ground level. It is important for the fencing to stretch below ground, as the Boar like to root and can dig fencing up. The fencing must be electrified, and gates and access areas padlocked. In terms of the financial returns made from the production of Wild Boar, these are often similar between intensive and extensive production methods. This is because although the intensive system will have a higher rate of production per year, its inputs are higher. The extensive system will have a lower rate of production per year, but its finished product should reach a greater price due to", "label": 1 }, { "main_document": "The doctrine of consideration plays a vital part in English contract law. It states that for a contract to be legally binding, detriment or benefit must be conferred on one of the parties involved. Without consideration, a contract is declared legally void. It is therefore commonplace for the courts to have to deal with cases in which parties are seeking to prove the presence, or lack, of consideration in their contract. Due to the common law nature of contracts, it is entirely up to the court to determine whether consideration is present or not, and the judges are only bound by the precedent set by the decisions of other courts in similar cases. 
One of these pieces of precedent that is cited as causing ambiguity was that which was set in the case of Roffey Bros. (1991). The judges who sat on this case, through their decision, went against previous doctrine set in This has obviously created uncertainty within English contract law, because it is now hazy as to which of the rules to use; that from Stilk v. Myrick [(1829) 6 Esp 129, 2 Camp 317] In This therefore, is another point of precedent that Foakes v. Beer [(1884) 9 App Cas 605, House of Lords] In The guidelines which Glidewell LJ set out in his judgement during If the 'practical benefit or detriment' rule had been applied in the It could have been held that the boat's Captain had, in fact, gained a practical benefit through his staff completing extra work, and consideration would have therefore been present. The decision, therefore, could have been in complete contradiction of the one that was actually reached. Conversely, if the It can consequently be said that the decision in The addition of a practical gain or loss to the doctrine of consideration could also create problems for the judiciary because it is their task to decipher what is or is not a practical benefit or detriment. If they apply the However, if the Patteson J, in This definition was used as binding precedent for many years, and it can be said that This is because now, the 'eye of the law' believes the practical, not the legal, benefit or detriment to be of value, and this is therefore an argument against Thomas v. Thomas [(1842) 2 QB 851, 859] The other point of law that The question posed by this arguably indefinite rule, is that if A already had a contractual duty to B to complete a duty, and if B subsequently promises to pay A more to complete his contractual duty, does A provide valid consideration by completing the task that he was already contracted to complete? It suggests that in the above scenario, valid consideration is not provided. 
Again though, This is another example of The It is yet unclear whether this fact indicates that the decision is of little practical significance to English contract law, or whether it has made little impact upon it as a whole. This could therefore be an argument against it having created further ambiguity in", "label": 1 }, { "main_document": "new program that was being written. Three new variables were introduced that related to the sin function of the velocity field v. The u component of the velocity field remains unchanged and so there is no need to make any modification to this part of the program, however the v component has changed and now contains a weak time-dependent perturbation. Modifications were made to the lines of code in the program that contained the v component. After modifications were made to the code, the program was saved as time.m (see appendix). The program was run with the following initial conditions, x0=1.0, y0=1.70, n=2000, alpha=0.01, dt=0.01, t=0, e=0.1, k=1 and c=3. Running from [1,1.7] from t=0 to t=20 the graph in Figure was output from MATLAB. The initial starting condition of [1,1.7] is clearly visible and it shows that initially the parcel almost completes an orbit before continuing off in some other direction. The outcome of this looks very similar to a mixture of the parcel being in the eddy region and the stream region. The initial conditions were varied slightly such that the starting point was now [1,1.71] and the graph shown in Figure 9 is what was outputted by MATLAB. The initial starting condition is shown again fairly clearly however the graph shows that the parcel beings to loop in what must be the eddy region. It is interesting to notice that a change of 0.01 in the y value has caused such a change to what the graph looks like. The program time.m was run with initial conditions [1,1] with e=0 and hence this should replicate the first experiment carried out in without time dependency. 
It was also run with condition [1,2] and e=0 to replicate the second experiment. The expected outcome would be for the values to be the same, as by setting e=0 the program is basically ignoring the time-dependency. The results are given below. The results from traj.m are the same as those from time.m when e=0 (i.e. no time-dependency). The results in the table are exactly the same as the predicted ones, and hence some confidence can be taken in the programming of the weak time-dependent perturbation that has been added to the program. Initially looking at the results, it would seem the modified system displayed chaotic behaviour, which means that the system is extremely sensitive to the initial conditions. In the experiments run, the conditions were changed slightly from [1,1.7] to [1,1.71] and some very different results were obtained. In order for a system to be chaotic, it is necessary for the system to be two dimensional with time forcing or three dimensional. In all the experiments carried out in part B the systems were two dimensional, so it is possible to discard the three dimensional argument already. In the first set of experiments, where Figure 6 and Figure 7 were the graphical outputs, the system was simply two dimensional and had no time dependence. Changing the initial conditions from [1,1] to [1,2] caused a change in the output but the difference was significant", "label": 1 }, { "main_document": "a sign of shortage in cash. The TOWS matrix generates some potential strategies for Kodak. They are tested against each of the scenarios for robustness, and detail can be found in the Appendices. It can be concluded that both exploring new markets and merging with another company would be worth considering. Merging with another company would bring temporary growth and thus help answer the constant pressure from stockholders. It can also introduce new markets, increase the customer base, acquire key management personnel, etc. 
Exploring new markets (such as emerging markets) can be an important move for Kodak. Emerging markets have unlimited potential and have yet to be explored. If Kodak could establish itself in new, unknown markets, it could boost its sales, performance, etc. This report has looked at scenario planning and strategy development for Kodak, using the WBS scenario approach. It has explored the 12 key factors that affect Kodak's business environment. Three scenarios, UK Economic Depression, Advances of Integrated Technology and Collapse of the Internet, have been developed using a deductive approach. Based on the three scenarios, a few strategies have been recognised using the TOWS matrix, and two of them, merging with another company and exploring emerging markets, have been identified as possible successful long-term strategies for Kodak. This report, in my opinion, covers all the essentials of scenario planning and has successfully developed long-term strategies for Kodak. However, a few issues could have led to developing less than optimal strategies. The main issue would be considering the imaging market in the UK, rather than Kodak as a whole, in the key factor development stages. It could give misleading strategies as, in the SWOT analysis, Kodak is viewed as a whole rather than just the market in the UK, which is not quite consistent. The other issue would be including some, in my opinion, not so important factors among the key 12. However, the result of the report is very satisfying and the resulting strategies could be used for serious consideration.", "label": 0 }, { "main_document": "The basic objective of the laboratory sessions was to use Solidworks and Cosmos Floworks to design a car in a wind tunnel and to analyse properties such as the drag coefficient and drag force. The idea behind these laboratory sessions was to gain knowledge about basic concepts of fluid dynamics and car aerodynamics. This laboratory assisted in gaining a basic idea about analysis and optimisation. 
A car was fitted into a wind tunnel and all the external specifications were provided. Then the car was analysed for different properties by varying the external entities. A car and a wind tunnel were designed using Solidworks. Then the original car was mated into a closed wind tunnel using Solidworks, and Cosmos Floworks was added in. Using a wizard available in Cosmos Floworks, the external entities such as temperature and pressure were defined. Then the surfaces of the car on which the air acts were specified. The Solver was then used to calculate the X-component of force by carrying out several iterations. The original car used for the initial calculations is shown below; The modified car used for the initial calculations is shown below; The following data was specified before carrying out the iteration calculations; The iteration calculations are carried out for six different velocities of air (10, 20, 30, 40, 50, 60 m/s). The general data used for derivation and calculation of drag coefficients; Area of the modified car (Where; The drag forces for the six values of velocity are found and, using the equation shown in \"Theory used\", the respective drag coefficient is found. Then one graph is plotted for each car, of drag coefficient against velocity. Calculating the drag coefficient; (Above is an example calculation of a drag coefficient.) The drag coefficient for each velocity is calculated and tabulated below; The graph shown below is of drag coefficient against velocity. Even though the points seem to be scattered around the plot area, when the y-axis is considered all the drag coefficients for the different velocities seem to be very close to each other. All the drag coefficients seem to lie between 0.36 and 0.41. A general trend line has been added to the graph and it is essentially horizontal and straight, showing that there is no change in drag coefficient even though the velocity of the air is changed. 
The drag coefficient for each velocity is calculated and tabulated below; The graph shown below is of drag coefficient against velocity. Even though the points seem to be scattered around the plot area, when the y-axis is considered all the drag coefficients for the different velocities seem to be very close to each other. All the drag coefficients seem to lie between 0.3 and 0.36. A general trend line has been added to the graph and it is close to horizontal, though slightly slanted, showing that there is only a very slight change in drag coefficient even though the velocity of the air is changed. This slight change in the coefficients could be due to", "label": 0 }, { "main_document": "rate, as a crucial indicator of economic performance, may account for a certain degree of the change in employment rate and hence should be included in the regression model. The net flows of the three factors, though excluded from our conceptualisation, may still play an important role in determining the condition of the domestic labour market. The inclusion of these variables can further help us differentiate the influences of net flow from gross flow, which are often confused in public discussions. By dividing the net flow by the gross flow, we can obtain an indicator (net flow relative to gross flow) mathematically independent from the indicator we adopt as the independent variable (gross flow relative to GDP or population). 
The null hypothesis to be tested is stated as follows: \"The annual changes of international gross trade relative to GDP, gross FDI flow relative to GDP, and gross migration flow relative to population do not provide any information to help us predict the change in the employment rate of the labour force or of any age group, while the possible effects of GDP growth rate, net trade relative to gross trade, net FDI relative to gross FDI and net migration relative to gross migration are controlled.\" This study is conducted by applying multiple regression to existing archival data, so the availability of the relevant data became the greatest concern. Ideally, we would like to build a comprehensive regression model to test the effects of all the independent variables and control variables at one time. However, the feasibility of this approach is ruled out by the more restricted time period of available population data (1964-) compared with data on trade and FDI (1946-) and employment (1959-). Therefore we plan to build two models: one using data since 1959 to examine the effect of trade and FDI (called the \"T model\"), the other using data since 1971 to test the contribution of migration flow (called the \"M model\"). The regression equations for the two models are listed as follows, where Once the data analysis is completed, we will test the F value for each regression equation to determine whether the null hypothesis is rejected. We will further perform a t-test on the regression coefficient of each variable to see whether its annual change is significantly associated with the change in employment rate. Attention must be drawn to the strategy that we use the By focusing on the association between the annual changes of variables, we intended to reduce the chance of having results contaminated by confounding long-term socio-economic trends. 
However, this strategy implies the risk of failing to capture influences of the independent variables that were not reflected in the employment statistics of the same year. In fact, the undetermined time-span for globalisation processes to create labour market impacts posed a major challenge to the study of this issue. In this study, we plan to address this risk by performing the regression procedure between the annual changes in the independent variables and the changes in the employment statistics during different time intervals. For instance, the change of the independent variables between year", "label": 0 }, { "main_document": "poetry, especially the Romantic poets that followed him, have been inspired by his way of looking at the world at a time of great social and political oppression by tyrannical governments and religions. One poet who greatly admired his work was William Wordsworth, whose time spent working on a collection of poetry with Samuel Taylor Coleridge in 1797-8, in a year of passionate creativity, produced one of the seminal works of the Romantic period: the Within Coleridge (1847, 297) stated that the imagination was considered to act as either 'primary or secondary'. Primary imagination refers to the perception of the world created by the senses. An example of Wordsworthian primary imagination is found in the opening stanza of Tintern Abbey, where the narrator, presumably Wordsworth himself, describes his perceived environment: Wordsworth, W, Lines Written a Few Miles above Tintern Abbey, from Lyrical Ballads (1978), cited in: p266, Wu, D, (ed The natural images in this passage are not a description of what Wordsworth can see; we know this because he wrote the poem upon leaving the area in which it was written. 
What is important to note when reading this passage is that the narrative is focused on the centrality of the subject, a viewpoint that most of the poems contained in The 'I' in the first line here refers to the poet's self, and in the self-acknowledgement 'I see' we are able to experience Wordsworth's descent from 'primary' into his 'secondary' imagination. The structure of the poem plays a key role in conveying Wordsworth's internal state. The iambic pentameter rhythm regulates the sporadic blank verse, a technique that mirrors Wordsworth's state of half-conscious control over the fluxing images he perceives. The blank verse also conveys a sense of reality and accessibility in terms of communicating thoughts and emotions in the most direct way possible, not restricted by a rhyme scheme in order for it to register at the most basic level of the reader's comprehension. Stabler (2002, 108) notes that in this way the reader is permitted 'a more direct access to what Wordsworth called our 'elemental feelings''. The descriptions of natural images, which are the symbolic language of Wordsworth's mind, are blended into the self-aware descriptions of Wordsworth's emotional state so that there is no distinction between what is perceived by the senses and what is created by the imagination. This would imply that in the state that Wordsworth is experiencing in this passage, the senses are only half-creating Wordsworth's perception of the environment, and that they are half created by the unifying ideals of what his unconscious mind wants them to appear as. The observation of this state of consciousness is referred to elsewhere in the poem. A grammatical technique used to demonstrate this is seen in repetition of the word 'h', in the line 'hedgerows - hardly hedgerows'. 
The linking effect produced by the repetition of 'h' allows the poem's descriptions to flow in the same way as his imagination so that there are no distinctions made between his observations, creating a sense of wholeness and forming", "label": 1 }, { "main_document": "This essay will analyse the extent to which the views of the Enlightenment affected colonial life in New South Wales from the date of the first settlement in the region until the middle of the nineteenth century. It will take into account the effect of the Enlightenment in terms of the reasoning behind transporting convicts to New South Wales in the first place, the methods of punishment and rehabilitation that the authorities imposed on the convicts once in Australia, the colonists' attitudes towards and treatment of the Aborigines, as well as the agricultural outlook and methodology of the colonists. It will argue that in the majority of these aspects the Enlightenment world-view was extremely influential, except in deciding to establish a penal colony which ran counter to Jeremy Bentham's ideal of the Panopticon. Although, by the end of the period in question, even the system of transportation had been abandoned in favour of ideas more influenced by the Enlightenment. 
The English authorities firmly believed that the relocation of their criminals to New South Wales would be able to reform them - 'Improvement of the land through convict labour seemed to bring with it moral improvement as crime was washed away by honest sweat.' This belief emanated from the Enlightenment concept that progress and improvement were possible in humans, and John Gascoigne argues that this was indeed proved - 'As the Australian colonies developed, the possibility of the improvement of human nature was demonstrated by the way in which many convicts were turned into good and useful citizens once transported into a new situation.' New South Wales became the land of the 'second chance', allowing emancipists back into society - a stance consistent with Enlightenment views about the ability to reform mankind. As this policy was so successful in reforming the convicts, it was believed that the principle could be applied to the Aborigines in order to 'civilize' them (another Enlightenment concept). Consequently, there was a belief that 'Aborigines could be refashioned by removing them from their families at birth and bringing them up in European society'. It also disturbed Aboriginal patterns of growth and migration, and led to a sense of disorientation among those Aborigines taken from their parents at a young age. Thus, the Enlightenment belief in the reform of the human character was adopted by the authorities in New South Wales in relation both to the convicts and to the Aborigines, for whom it had a far more negative outcome. John Gascoigne, Ibid, p. 11 Ibid, p. 12 When initially deciding how best to imprison and monitor the convicts when they had reached New South Wales, the House of Commons proposed the idea of establishing a prison with a type of viewing gallery so that all the prisoners could be permanently observed. It would have enabled 'the whole Establishment to be inspected at a View, from a commodious and insulated Room in the Center'. 
This was undoubtedly the most famous example of Enlightenment thought in relation to the imprisonment of convicts, as it emphasized viewing and inspecting the prisoners at all", "label": 1 }, { "main_document": "the local housing market may also have an impact, creating changes to the regular customer base. The increasing trend for healthy living is a factor The White Horse needs to be aware of in order that the menu on offer caters for various people's different dietary needs. People on diets spent over The Internet has 'provided the opportunity for suppliers to reach customers directly....putting a full colour brochure directly into the hands of potential customers at a relatively low cost'. Despite the White Horse already having information on the pub explorer website, they still need to remain vigilant about new technological advances and look at possible ways to improve their current website information. Another technological factor affecting The White Horse is the introduction of chip and pin. Currently this is not in use at The White Horse, which leaves them more vulnerable to card fraud. The increase in awareness of the positive factors of recycling is an issue The White Horse needs to be aware of. 'The primary goal of business should be to be better than the competition in critical product and service attributes' (Reich 1997, p. 161). Currently, The White Horse is not hugely different from their competition where this is concerned from a customer's perspective. The White Horse is a larger establishment than the Britannia and The Royal Standard, and despite the Britannia's menu not being as large as the White Horse's it appeared to be slightly more upmarket. Despite this, the service delivery there was much slower than at either of the other pubs, especially as it was not overly busy either. 
All establishments have an 'order at the bar' approach, and despite The Royal Standard also being part of the Greene King chain, the menu was slightly different and more expensive than The White Horse's. As Kotler et al (1999, p.681) have noted, 'true competitive advantages are factors that are recognised by guests and influence their purchase decisions'. Despite the White Horse being voted 'best pub in the Headington area' (Stennings 2005), there is no noticeable difference in food quality and service compared with the other two pubs. However, the atmosphere at the White Horse is good and the staff are friendly and helpful. Also, The White Horse's car parking facilities and the event evenings they hold are better than those of both other pubs. It is important for management of The White Horse to identify its key issues, known as Strengths: Has been voted the best pub in the Headington area (Stennings, 2005). The pub is a well-established business. Theme / event nights are held most nights of the week, such as Karaoke night, Quiz night and Disco night. The pub is situated in an excellent location, surrounded by student accommodation, with Oxford Brookes University an extremely short walk away. There are good parking facilities, which benefit not only the customers but also nurses working at the local hospital, who can park at The White Horse for just Entertainment is provided continually, for example there is a pool table, fruit machines (for gambling) and many television screens dotted around the pub. Customers", "label": 1 }, { "main_document": "angular deviation of the x-ray corresponds to a higher resolution of the structure. The data collected during this process then undergoes post-processing such that the maximal amount of information can be extracted. The nature of this processing can vary, and there are a number of mathematical tricks that can sharpen resolution, eliminate clutter and provide error correction. 
However, at its most basic, the processing involves performing a Fourier transform on the locations of the maximal intensities found - from this the structure of the crystal can be inferred, and thus the shape of the molecule. In their 1975 paper Poljak et al [3] describe the use of X-ray crystallographic techniques to discern the structure of the Fab' fragment of a human myeloma (IgG1) protein to a resolution of 0.2 nm. The paper describes the finding of two irregular beta sheets in both the light and heavy polypeptide chains, which are roughly parallel and surround a tightly packed interior of hydrophobic side chains. The research was also able to identify the spatial location of the hypervariable portions of both the light and heavy chains - and to speculate as to the role these hypervariable portions have in defining the function of the immunoglobulin. It is interesting to compare these two papers [3] and [4] and to regard how the understanding and technology have progressed in the intervening two decades. For example, in the Harris et al paper, which is concerned with the imaging of the monoclonal antibody Mab231, much is made of the computing techniques used to obtain the images, and of the resolution obtained, between 3 and 20 A, at least two orders of magnitude greater than that obtained by Poljak et al. The principle of NMR relies on the quantum mechanical effect of the intrinsic spin of nuclei. The intrinsic spin is perturbed using a magnetic field, and then a resonant field is applied, usually in the form of the magnetic component of radio frequency electromagnetic radiation. This resonance can be detected using the principle of electromagnetic induction. What characterizes the sample is its ability to recover from the resonance, and this gives information regarding the structure of the sample. 
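The Fourier-transform step described above can be illustrated with a minimal one-dimensional sketch. The motif, atom positions and weights below are invented for illustration, and the phase-recovery problem of real crystallography is ignored; the point is simply that the transform of a periodic density is non-zero only at reciprocal-lattice points, which is why diffraction from a crystal appears as discrete spots whose spacing encodes the lattice period.

```python
import numpy as np

# Toy 1-D "crystal": one motif of point "atoms" repeated with a fixed period.
# All positions and weights are invented; real work is 3-D and must also
# recover the phases that intensity-only measurements discard.
n_cells, period = 16, 32
motif = np.zeros(period)
motif[[3, 10, 20]] = [1.0, 0.6, 0.8]      # three "atoms" per unit cell
density = np.tile(motif, n_cells)         # periodic electron density

spectrum = np.abs(np.fft.fft(density))    # diffraction-like intensity pattern
peaks = np.flatnonzero(spectrum > 1e-6)

# The spectrum is non-zero only at multiples of n_cells: discrete "spots"
# at spatial frequencies commensurate with the lattice period.
assert all(int(p) % n_cells == 0 for p in peaks)
```

The relative heights of the surviving peaks are set by the motif, which is why, once the phases are recovered, inverting the pattern returns the contents of the unit cell.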
NMR can be used to obtain information about the different levels of protein structure (primary, secondary and tertiary); for example, techniques like NOESY can be used to discern the primary structure (amino acid sequence) of the proteins. Uses of NMR to discern aspects of antibody structure and function are found in references [5] and [6]. Anglister et al [5] used NMR to probe the nature of antibody-antigen combining sites and obtained spectra from the cleaved Fab fragment of a monoclonal anti-spin-label antibody, using deuterated amino acids to form the proteins of the antibody. Anglister et al go on to postulate that the combining-site structures must be almost identical to the native structures. A major difference between crystallographic techniques and NMR is that specimens can be viewed close to their natural state when NMR is the probing technique. Furthermore, dynamic structural information can", "label": 1 }, { "main_document": "aged under 35 go clubbing and then that was only on an occasional basis. Mintel (2004a) found that only 74% of those in the pre/no family life stage visit late night bars, with 52% of them visiting at least once a month. Mintel also found that late night bars are more popular than nightclubs amongst all life stages. Although the general theory of what Rapoport and Rapoport say is still valid today, there is room for improvement in applying it to modern-day society. Unfortunately, nowadays families do not stay together as they once did. Divorce is frequent (in 2000 over half of marriages ended in divorce (Benson, 2003)) and remarriage occurs often (41% of marriages in 2000 were re-marriages (Benson, 2003)). Single parent families are on the increase, whether the result of divorce or of death (the death rate for males aged between 40-59 is rapidly increasing (National Statistics, 2005)). Fifty years ago it was common for men and women to marry at an early age and have children whilst in their mid-20s. 
However, as time has progressed, society has changed, and those in the pre-family/no family life stage no longer feel the pressure to settle down until later on in their lives. It is now common for people to spend their 20s and possibly their early 30s travelling and living a single life within groups of friends. Those who are in older age groups no longer feel unaccepted in a younger environment, and mass media has allowed them to be accepted into the young market. Postmodernism suggests that drinking is increasing in social approval; it allows for the opportunity to network with others and for alcohol to act as a confidence booster. The wide range of drinks found in pubs nowadays allows older people to feel young. The 'alcopops' introduced into the drinks market are recognisably trendy young people's drinks. Therefore those who are not within the 18-24 year old age group and who drink 'alcopops' may feel that they are creating a bond with those who are younger through the drink that they have. The older person looks and feels young, and has therefore created an illusion that fails to meet the expectations of Rapoport and Rapoport's theory. The added opportunities that television programmes create also allow both young and older people to connect with each other. There are television programmes that all generations may watch, and it is highly likely that there may be an alcohol-related sponsorship (e.g. Sex and the City and Baileys, Friends and Jacobs Creek, Channel 4 films and Stella Artois). Through mass media this may therefore create an added bond, through the sponsored drink, between those in different life stages when in the pub. Postmodernism and mass media have led to a less distinctive gap between those in different life stages when drinking in a pub. 
There are few distinctions in the types of drinks that all life stages consume and", "label": 1 }, { "main_document": "approach, Large plc is able to understand, for instance, whether loyal or satisfied customers are profitable (5). It is suggested that the manager may use 'average duration of customer relationship' to recognise loyal customers and 'customer response cards' to understand customer satisfaction. This consequently ensures that excellent customer service (in terms of time and cost) is given adequately to both loyal and satisfied customers, and hence profitability is increased. Furthermore, it is observed that the measures suggested above are only the means of measuring customer profitability, not the outcome. Hence, it is suggested that Large plc should incorporate the With specific reference to the Custom Research Inc. example (5), CRI uses lifetime profitability analysis to manage and therefore recognise profitable customers, and is able to carry out comprehensive screening, mentoring and promotion of individual customers. Its success is proven, as CRI had 36 clients who provided 86% of sales and 96% of profits. If applied sufficiently within Large plc's system, higher profitability can be achieved by understanding customer profitability. However, it is argued that both the BSC and ABC must be used hand in hand. Referring to Appendix 2, managers are able to identify what actions to take in terms of managing customers. For instance, unprofitable, targeted customers should be transformed to make them profitable through pricing, process improvement and relationship management actions. This model gives insights about where opportunities exist to transform attractive but money-losing customers into long-term profitable relationships (5). Moreover, it must be noted that the recognition of profitable customers is subjective and depends upon managers' point of view. 
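The activity-based view of customer profitability described above can be sketched with a small worked example. Every customer name, activity rate and volume below is an invented illustration, not data from the Large plc or CRI cases:

```python
# Minimal sketch of tracing activity costs to individual customers
# (activity-based costing). All names and figures are invented.
activity_rates = {             # cost per unit of each cost driver
    "order_processing": 30.0,  # per order handled
    "delivery": 55.0,          # per delivery made
    "service_call": 40.0,      # per support call taken
}

customers = {
    "Alpha": {"revenue": 12_000, "order_processing": 40, "delivery": 20, "service_call": 5},
    "Beta":  {"revenue": 12_000, "order_processing": 90, "delivery": 60, "service_call": 30},
}

for name, data in customers.items():
    cost_to_serve = sum(rate * data[activity]
                        for activity, rate in activity_rates.items())
    profit = data["revenue"] - cost_to_serve
    print(f"{name}: cost to serve = {cost_to_serve:.0f}, profit = {profit:.0f}")
```

Two customers with identical revenue differ sharply in profit once the cost of the activities they consume is traced to them, which is precisely the insight the ABC extension towards customer cost is meant to provide.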
Mention the word \"performance\", and many managers immediately shift their thinking to measures related to some type of financial result (6). This is simply because many managers are formally trained to measure performance through financial accountability, which provides the easiest way to demonstrate progress and productivity. Hence, the success of adopting the BSC to ensure a customer-focused approach is highly dependent on managers' abilities to see the clear objectives of having this method imbued in Large plc's system. It is concluded that both methods are suitable for calculating customer profitability. However, it would be more useful if further information were given, to ensure the most appropriate method is chosen and good judgement exercised. For example, it would be useful to know the culture within the organisation, as this may affect the method taken. It is vital to know whether the organisation is receptive to change or not. Hence, in this instance, the extension of ABC towards customer cost may be more suitable, as it does not involve an entirely new system, like the balanced scorecard, for all levels of the organisation to adapt to. Moreover, it is imperative to weigh the cost against the benefit. It is argued that employing a new system may involve a hefty financial cost and resources in terms of staff training. It would not be viable if the cost exceeds the benefits. Subsequently, it is vital to know whether competitors are moving towards a customer-focused approach or not. This is because Large plc may enjoy a significant head start in", "label": 0 }, { "main_document": "whenever I was asked to speak in Persian, either in coursebook exercises or in interaction with my teacher and other students, I found out that I could hardly put into practice the grammar structures and lexical items that I had studied. Knowledge is not enough; repeated practice and production are also required, and this is what I lacked. 
In addition, it is worth referring to the experience of interacting with a native speaker of the Persian language. Although our teacher had equipped the class in advance with all the relevant knowledge to tackle this short interaction, I realised once again that it is more difficult to produce output and process input simultaneously, especially in natural conversation, where topic shifts and the use of vocabulary and linguistic structures are not limited to the learner's knowledge. However, the linguistic items that I interpreted and produced during the interaction are those parts of the language that I can effortlessly remember and use. In conclusion, learning Persian was a difficult but at the same time interesting experience, because it was the first time that I tried to learn a foreign language not by means of my mother tongue (Greek) but through a language (English) in which I have not yet achieved native-like mastery. Through this experience I realised that, as far as language learning is concerned, exposure to comprehensible input, either in the form of classroom instructions and teaching or in the form of modified teacher talk, along with target language production on the part of the learner and interaction with other speakers, are of great significance. However, the prerequisite, in order for these three factors to facilitate the learning process, is the learner's internal drive and enthusiasm for the language. Lastly, the amount of study that the learner devotes is quite crucial. Personally, I have to admit that, because I lacked the practice that input and output offer to a learner, I should have invested more time and effort in the learning process.", "label": 0 }, { "main_document": "rules as absolute and binding is flawed, and that extra-legal measures are frequently drawn upon. Dworkin, 'Taking Rights Seriously', (1977), p.43 J. W. Harris, 'Legal Philosophies', 1997, 2nd ed., p. 189 (1889) 115 NY 506, 22 NE 188 N.E. 
Simmonds, 'Central Issues in Jurisprudence, Justice, Law and Rights', 2002, 2nd ed., p.183 N.E. Simmonds, 'Central Issues in Jurisprudence, Justice, Law and Rights', 2002, 2nd ed., p.184 N.E. Simmonds, 'Central Issues in Jurisprudence, Justice, Law and Rights', 2002, 2nd ed., p.186 Dworkin's assimilation of morality into legal theory is, to some extent, similar to natural law theory. Aristotle The concept of St. Aquinas Natural law originates from God, and His morals govern the law. A teleological interpretation introduces justice to the legal system. Although this is the best approach, it can easily be manipulated, resulting in dictatorship. Dworkin's theory aims to compromise the natural law teleological interpretation with formalist qualities of certainty and predictability, which remove the risk of manipulation. 384-322 B.C. Tania Kyriakou, Lecture Handout, 'Natural Law Theory', p.2 Penner, Schiff and Noble's, 'Introduction to Jurisprudence and Legal Theory: Commentary and Materials', 2002, p. 40 1224-1274 A.D. Lord Woolf, 'The Rule of Law and a Change in the Constitution', March 03, 2004, Following the secularisation of Natural Law emerged the Social Contract Theory, presenting law as a consented contractual arrangement between citizens and the state. The concept of divinity was abandoned in favour of individual autonomy. The legitimacy of the state was limited by 'natural rights'. This theory of natural law is close to Dworkin's theory in embracing individual rights as supreme. However, Bentham criticised natural law as \"rhetorical nonsense\". He rejected the notion of rights, arguing that they could only derive from \"ungrounded metaphysical speculations about God or an intrinsic human nature\". These criticisms can be transferred to Dworkin's theory, although Dworkin purports to eliminate subjectivity by restricting the selection of principles. 
Penner, Schiff and Noble's, 'Introduction to Jurisprudence and Legal Theory: Commentary and Materials', 2002, p.721 Bentham, 'Anarchical Fallacies' in 'The Works of Jeremy Bentham', (1843), p. 501 Penner, Schiff and Noble's, 'Introduction to Jurisprudence and Legal Theory: Commentary and Materials', 2002, p.722 Dworkin asserts that the open texture of language which makes up law creates ambiguity. In hard cases, rules are uncertain, and principles must be adopted to reach a decision. Principles and policies confer discretion on judges which undermines the certainty, predictability and consistency that makes up the rule of law. Dworkin sustains the rule of law by limiting the discretion afforded to judges. Dworkin describes law as a chain novel, developing over time and modifying according to changing social practices. Each chapter must be consistent with previous decisions: \"The judge's decision must be drawn from an interpretation that both There is a Dworkin, 'Law's Empire', (1986) Dworkin, 'A Matter of Principle: Is There Really No Right Answer in Hard Cases?', 1985, p. 143 L.B. Curzon, 'Jurisprudence', 1995, 2nd ed., p.214 Dworkin created Hercules, an ideological character, \"a lawyer of superhuman skill, learning, patience and acumen\" \"Hercules must turn to the remaining constitutional rules and settled practices to", "label": 1 }, { "main_document": "For a duty of care to be owed a close relationship between the parties must exist (second stage approach). In the present case as the relationship was created ad hoc it is not clearly equivalent to that of a contract. Mere hearing or reading of statements is not sufficient to create a close proximity as this would put no limits to being liable for what one says or writes. 
Consequently, the last-stage approach (whether it is just, fair and reasonable to impose a duty of care) would also fail. As the courts held that the harm was not reasonably foreseeable, given that responsibility was not undertaken on the part of the defendant and that a close relationship was not observable, the claim must fail under this criterion. This is why Lord Goff states that cases similar to the present have no need to be assessed against this criterion. Given that no duty of care was owed, breach of duty and damages cannot be discussed. Liability for negligent statements or services which cause economic loss has for the most part been restricted, and economic loss is rarely protected. Following this case, the possibility of claiming for consequential (not pure) economic loss as a result of a negligent misstatement was first recognised. The impact of this case on the development of the law of negligence illustrates how common law is created through the use of judicial precedent. The conduct and decisions taken in previous cases have served as a guideline for future application since the Norman rule of England. The judges travelled around the country deciding major cases, some of which were based on common customs. This unified the law and made it common to all. Common law was hence developed from customs and judicial decisions. Other persuasive precedents include dissenting judgements, and decisions taken by lower courts, by the Privy Council and by Commonwealth countries. The decisions of superior courts are binding on their immediate lower courts. The decisions of the European Court of Justice (ECJ) are binding on all British courts; likewise the decisions of the House of Lords are binding on all the inferior courts. It was not until the 19th century, after the Council of Law Reporting and the hierarchy of courts were established, that judicial consistency in decision making developed into the more solid system found today. A previous decision of a court. 
Although parliament is becoming more involved in the creation of statutes, most law-making is still judge-based. When there is no precedent for a case, the decision will form a new precedent. For example, the law on remoteness of damages was derived from the Privy Council's decision on the Australian case. In 1966 the Lord Chancellor announced that the House of Lords would no longer be bound by its own precedents, in response to the failure of the law to amend unsatisfactory decisions (which could otherwise have been changed solely through new legislation) and to keep up with the rapidly changing social environment. The Court of Appeal may depart from binding precedent in specific situations; also, its", "label": 0 }, { "main_document": "assess. This can also lead to confusion when setting quotas. However, both Coltman et al (2003) and Whitman et al (2004), in their assessments of rams and lions, warn of the decline of populations under the selective effects of sport hunting on prime males. Jeffrey (1991, p 164) asserts that legitimate hunting in Zambia removes an \"insignificant proportion of the standing populations of wild animals (usually much less than 1%)\". However, for opponents, any proportion is a significant proportion. As well as moral opposition to legalised hunting, there are also practical objections. In an economic analysis, Swanson & Kontoleon (2005) warn that supporting such utilisation may not be appropriate for conservation organisations, as wildlife non-use values (like donations) may be in conflict with certain consumptive uses of the species (such as hunting). They found that wildlife supporters resisted consumptive utilisation and would be prepared to donate more to avoid it. It may damage conservation organisations if people who donate money for non-consumptive projects withdraw support when the organisation backs consumptive wildlife uses. 
Bodmer and Lozano (2001) describe economics and wildlife as a circular relationship, in that economics plays a large role in determining how rural people treat their wildlife resources, whilst at the same time \"the fate of wildlife populations will often depend on the mandate of rural-development projects\" (p 1169). It is worth debating whether the sheer number of hunted animals is of importance, or whether the focus should be on which particular animals are targeted. According to Humavindu & Barnes (2003), in 2000 Namibia hosted 15 540 hunter days, and 13 310 animals were killed, making $11.2 million. In the same year in Botswana, $12.6 million was made from 5570 hunter days killing 2500 animals. However, 21% of the animals in Botswana were high-value key species, as opposed to 3% of the take in Namibia. So, fewer total animals were taken but the animals were larger trophies. There is a question over whether it is appropriate to take a large number of high-value species in exchange for a lower total number, or whether a larger total loss is more damaging to wildlife populations, biodiversity and perhaps perception. Though trophy hunting is not a new concept, the idea that it can be appropriate for conservation via development is new to many people, even conservationists. Indeed, inexperience can mean \"ill-focused efforts to promote the sustainable use of natural resources as a means to development in inappropriate settings may condemn the poor to lives of poverty\" (Rao & McGowan, 2002, p 581). The link between conservation and development was reconfirmed when the IUCN World Congress defined conservation as both the protection and sustainable use of natural resources (IUCN 2000). There is also an IUCN Sustainable Use of Wild Species Specialist Group, suggesting it is viewed as an appropriate conservation strategy. Some see wildlife utilisation as a sacrifice to facilitate other forms of conservation. 
For example, in supporting the CITES 2004 decision to allow Namibia to offer hunting permits to kill black rhinos, WWF praised the country's rhino protection efforts over the last 20 years and", "label": 1 }, { "main_document": "opportunity for the Europeans to regain strength after their expulsion. It is perhaps true to say, then, that 'A few hundred Spaniards became an unbeatable force only when combined with thousands of indigenous pouring in behind them'. Elliott, 'The Spanish Conquest and the settlement of America', p.183 Prescott, Hassig, Ross, Pagden, Townsend, 'Burying the White Gods', p.669 That these allies were one of the most crucial elements in the Spanish success is virtually indisputable. However, their presence does not necessarily detract from the idea that Cortés It was his skills as a communicator that extracted the information necessary to learn of the internal divisions among the native population As Prescott eloquently argues, in gaining the native allies, Cortés Prescott, Prescott, Perhaps the most obvious advantage that lay before the invaders was their vast military superiority in terms of weaponry and formations. As Elliott indicates, the horse provided the Spanish with both a greater mobility and, at least initially, the opportunity to surprise and unsettle their enemies. The 'slings, bows and arrows' Elliott, 'The Spanish Conquest and the settlement of America', p.175 Ibid., p.175 Prescott, Townsend, 'Burying the White Gods', p.669 Warren, J. Benedict, The different battle techniques used and, more importantly, understood were perhaps even more critical to the Spanish success. 
'Just as there are two forms of communication, there are two forms of war' For example, the Europeans were more accustomed to producing the protective formations in battle that limit losses. The ritualistic style of native warfare and warrior dress, with soldiers 'gaudily painted' and chiefs glittering with helmets made of gold and jewels, meant that they could not be prepared to fight at any given moment, and also that they were more easily spotted on the battlefield. Todorov, Hassig, Prescott, Todorov, Never were these different practices of warfare so crucial as during the final siege, when Cortés The great anguish this caused is measurable from its volume of references in native sources: in illustration, \"Nothing can compare with the horrors of that siege and the agonies of the starving.\" To the Aztec peoples, to involve civilians in this way was the very antithesis of warfare, and had no part in their experience of it. D Leon-Portilla, Miguel (ed.), The Indians saw 'signs' on the battlefield that the Spanish did not, omens that destroyed their ability to fight. To give an example, Spanish sources cite the fall of the Indian commander at the battle of Otumba as the reason for their success, but it was more probably the taking of the banner that signified to the natives that the battle was lost. Although the Aztecs did gradually learn to adapt in the face of European weaponry, for example by beginning to kill on the battlefield, their belief system would only allow this up to a certain point and thus they remained at a disadvantage. Clendinnen, \"Fierce and Unnatural Cruelty\", p.32 As clearly, then, as we can see its effect on Moctezuma's behaviour, religion plays a central role in other areas of life. The use of a cyclical calendar meant that the", "label": 1 }, { "main_document": "is then incubated at 33 to 35. The gel is formed. The removal of whey, which results from syneresis of the gel or curd, is enhanced by cutting the coagulum. 
The resulting curd makes up about 10 to 30% of the original volume of milk (Walstra et al., 1999). All the steps promoting gel syneresis can be characterized as dehydration steps. Stirring and salting after formation of the curd improve whey loss from the curd by means of osmotic pressure differences. Finally, shaping and pressing of the curd further improve whey removal. In some traditional soft cheese production, large lumps are cut from the coagulum and put into molds, where syneresis occurs, resulting in a high-water-content cheese. During these stages, a variety of cheeses based on moisture content can be made as a result of the differing degrees of dehydration. Because salting is an essential step of cheese manufacturing, most cheese contains 1-4% added salt. Addition of salt into the cheese has various advantages, which are The syneresis of the curd is one critical phenomenon in determining the cheese's final result (Walstra et al., 1999). It is defined as the process of gel contraction or shrinkage after the gel is formed, leading to the flow of the whey through the gel networks. The process is not a persisting action of rennet, since no additional caseinomacropeptide is removed from the micelles shortly after the gel has formed. For high-moisture cheese, syneresis should be slowed down or stopped after a certain time, though this is far less important in low-moisture cheese. Walstra et al. (1999) summarized the factors affecting syneresis as follows: Refer to the methods described in the practical handout. Analysis of milk and whey composition revealed the differences between them. From table 1, the fat content in whey is reduced because the majority of fat molecules are entrapped in the casein gel during the milk-clotting step, which is attributed to the coagulation of casein micelles. A similar reason applies to the protein content, i.e. 
in whey the protein content is lower compared to that in milk, because most protein in milk (~80%) is casein, which is coagulated after the addition of the enzyme. The protein content in whey, 0.93%, is probably ~0.6% whey protein plus ~0.3% from casein and other small peptides. On the other hand, lactose is slightly higher in whey. This is because the other major components in milk, i.e. fat and protein, were removed from the whey serum, thus lactose is more concentrated (a higher proportion), though the amount should be similar in both milk and whey. TA is a measure of the buffering capacity of the solution. In whey, as >70% of the protein has been removed, the buffering capacity of the solution decreases (since charges on protein molecules contribute to the buffering capacity). Consequently, the TA is slightly lower in whey. In acid coagulation (Table 2), a higher acid addition led to a higher degree of aggregation of the micelles. The mechanism is that casein becomes insoluble at a pH around its isoelectric pH. This facilitates the aggregation of protein due", "label": 0 }, { "main_document": "leads me onto the female characters used in Japanese anime. I think 'Spirited Away' is a good example to use for analysing female roles, as it has a mix of characters and is by no means a feminist film. It was written by Hayao Miyazaki, who has become internationally famous for his manga/anime creations (Rodriguez, 2001). His film 'Spirited Away' won the Best Animated Feature in 2003, following the huge success of his previous film 'Princess Mononoke' (National Catholic Reporter, 2005). The general story of 'Spirited Away' is of a young girl and her parents who are moving to a new place. They get lost en route and end up at (what they think is) an abandoned theme park, where her parents eat greedily at a restaurant and turn into pigs. The young girl (Chihiro) must work and make herself useful to survive here with all the spirits, talking animals and hybrids. She learns to value things and have respect. 
The film has a happy ending, with her helping others and saving her parents; it is a very enchanting and gripping film. The main character in the film is Chihiro. She is only ten, and at the start of the film she seems vulnerable, sad and weak. She doesn't want to move and sulks in the back of the car; the flowers she is holding get damaged and she sobs some more. When they get lost at the entrance to the park she is scared and tries to stop her parents from going in, but she is helpless and ignored by her parents. Chihiro's character soon comes into a different light when her parents turn into pigs. She sees the spirits emerge and tries to escape until a young boy (Haku) approaches her and warns her she must get a job at the bathhouse and work hard to survive. After this she becomes strong and knows that to save herself and her parents she must be brave. She finds a job with Yubaba, a mean-spirited old woman who is in charge of everyone. After a few days Chihiro becomes the bravest and hardest-working member of the bathhouse. She treats all the spirits and other workers the same, and doesn't judge people on their appearance. She doesn't take gold from one of the guests, as all the other workers were doing, and shows there is more to life than tangible products. She doesn't doubt anyone and helps those that do bad things, and she goes to extremes (even putting her own life at risk) to save others she has grown fond of. All these characteristics create a strong, heroic and brave female character that has been formed from a scared, vulnerable young girl. It shows that females don't always have to play the roles of the helpless and powerless that need to be rescued by the 'heroic' male. Mes points out that \"Chihiro is the little heroine at the centre of 'Spirited Away'\" ( The pictures below show Chihiro's change from a sad and vulnerable character (the first two pictures) to a brave and", "label": 1 }, { "main_document": "so-called Third World country.
International investments and job developments will help the population to improve their status quo. Organisations, however, have to bear in mind the social and cultural environment they operate in. The design of human resource strategies has to integrate the local community and its well-being in order to receive not only governmental support but also the collaboration of groups and of each individual. Expatriates will only succeed when equipped with cultural sensitivity alongside technical skills and knowledge. It is the task of organisations to train, motivate and support their staff. Regarding the development of HCNs, the author suggests close cooperation with schools and universities, following best-practice examples such as the Marriott or Hilton hotels, in order to get the most out of their staff and to gain a high reputation within the country. The Lakeside Group could hence prove that it is dedicated to performing better than the industry in general, as identified by Price (1994). To sum up, there are", "label": 0 }, { "main_document": "This study sought to evaluate the freshness of two samples of fish using sensory and chemical assessment methods. In the sensory evaluation one fish sample was found to be fresh and fit for consumption, while a week-old fish sample was reported to be too old for consumption. The chemical assessment proved to be problematical, and it was concluded that a re-evaluation would be beneficial. The oldest and still the most commonly used and trusted means of evaluating fish quality are the senses: smell, sight, touch and taste. The positive aspects of the sensory evaluation are that these tests can be carried out anywhere the fish happens to be, and without any laboratory equipment. The evaluation is quick, and many samples can be evaluated in a short time span. However, the sensory evaluation is a subjective procedure, and cannot be replicated precisely.
The sensory evaluation is based on an interaction of psychological, physiological, environmental and economic factors, such as state of health, personal prejudices, preferences, sensory acuity, and motives of profit or loss (Farber, 1965). Using this method, assessing the borderline between freshness and spoilage can be challenging. The criteria associated with freshness (the condition of the eyes, the texture of the flesh, the odour, and the appearance of the abdominal walls) were identified by Anderson in 1908 (Farber, 1965). These sensory assessment criteria are still the most trusted today, and the Torry Research Station in Aberdeen, Scotland, has included them in the Torry Score, a graded quality assessment table for evaluating the quality of fresh fish (Sutherland, 1986). This system, with grades of E, A, B and a \"not graded\" alternative, might aid in minimizing the personal uncertainties of sensory assessment. Fish naturally contains trimethylamine oxide (TMAO) which, aided by enzymes and bacteria, breaks down to produce trimethylamine (TMA) as the fresh fish spoils. This causes an unpleasant smell, as do other volatile bases such as ammonia. Thus the TMA or total volatile nitrogen (TVN) content of a fish sample can be used to determine the freshness of the fish. Freshness assessment of meat and fish using volatile basic nitrogen compounds was first suggested by Eber (1891), who found that the fuming of a diluted HCl solution in ether-alcohol could be used to determine the freshness of meat. In 1926 Poller and Linneweh found that TMAO was reduced to TMA during spoilage, and this brought Boury (1932) to suggest that this might be used to determine the freshness of fish. The final method, however, as it is used commonly as well as in the present study, was established by Beatty and Gibbons in 1937 (Farber, 1965). This method of evaluation has since been further developed and used to assess the quality of a variety of fish.
However, inconsistent results have been reported by various authors, and the reliability of the method remains disputed. The study used raw, whole specimens of trout, which is a white freshwater fish. Two fish were sampled, one fresh and one kept for one week in a refrigerator. The procedures used to assess the quality of the fish were", "label": 0 }, { "main_document": "found it more profitable to buy slaves rather than employ indentured servants, who would have a limited term of service, at the end of which they would leave to take land valuable for the cultivation of sugar cane. It was also becoming increasingly expensive to supply indentured servants to the Caribbean. The latter half of the 17 V.T. Harlow, Founding of the Second British Empire (Longman, 1964) As the number of enslaved Africans soared in the Caribbean, the number of white settlers fell. \"The enterprising social class left voluntarily to escape hunger and the lower standard of living that was brought about by the lack of work.\" As small farmers and artisans left for other parts of the Caribbean and the American mainland, they were replaced by great numbers of imported slaves. This would have great effects on the social structure that was in place. Before, there was the elite class, who were defined economically by owning large farms and plantations, the majority of whom would own at least fifteen slaves or servants. Next in rank came the merchants, officials and professionals, who were just below the planters. Next came the \"poor whites\", who were described pejoratively in Barbados as \"red legs.\" The \"poor whites\" consisted of servants, servicemen such as policemen and militiamen, and small independent farmers. In 1645 there were 11,200 landed proprietors in Barbados; by 1667 there were only 745. The independent whites working the land for themselves were now being displaced by large farms with large numbers of enslaved Africans.
This would mean that the general trend of the population shifted towards a more concentrated, elite white population and a vastly increasing number of enslaved peoples. Instead of three basic social divisions existing in a largely white population, from the 1650s onwards three different basic social divisions could be made: \"free white persons, free non-white persons, and slaves.\" Ramiro Guerra y Sanchez, Sugar and Society in the Caribbean (Yale U.P. 1964) The basic three divisions within the white population still existed during the increasing demand for sugar in the mid-1600s, yet the elite class was becoming concentrated and the lowest white class was becoming smaller. As the number of whites decreased, they needed to impose laws to restrict the movement and activities of slaves in order to control them. One such law stipulated that any slave leaving their plantation for any period of time had to be issued with a slip of paper from their master allowing them to do so. Other laws concerning punishments for disobedient and rebellious slaves were issued throughout the 1600s. Due to English attitudes, the white planters did not allow slaves to practise religion in the Caribbean. The Christian faith was a great social rallying point for slaves in the southern states of North America, and black communities built tight societies around their churches, which were segregated from the white population. The slaves were controlled by militia who were made up of the poorer whites, who in turn were controlled by the plantation owners. One report of Cuban commissioners even claimed that: \"A country's", "label": 1 }, { "main_document": "The size of the world seems to shrink as global travelling becomes easier and people move around on a wider scale. With an increase in diversity, people increasingly have contact with people from different cultures (Brislin & Cushner, 1996).
People believe there is a right way of behaviour, which is based on a process called 'socialisation' (ibid: 5). Due to this ethnocentric belief, conflicts and misunderstandings arise. This learning journal will support my own development in cultural communications and increase my awareness of cultural diversity. The understanding of my own culture is imperative in order to understand and to be able to evaluate differences in other cultures. Until today I believed that I had a good understanding of my own culture; however, reflecting on it for the first time makes me realize that I have difficulties putting my beliefs and understanding into words. This is common, as aspects of cultures are often not spoken about, but rather kept as secrets (Brislin & Cushner, 1996). The development of cultural communication skills is based on the ability to observe my own behaviour from the outside and to accept not knowing everything, which has been identified as a very difficult task (Adler, 2002). I grew up in a boarding school which teaches German as a foreign language. Students of over 160 nationalities have been there, studying for periods from two weeks up to one year. Having contact with the students on a daily basis gave me some knowledge of other people's cultures. An incident during work (see Appendix E) made me think about different cultures and how to engage with them. There are so many little differences between cultures, which often are not as visible as the one described in Appendix E, but which may be significant for some people. Until that time I had probably ignored or overlooked many issues, which might have disappointed some of the students I lived with or made them uncomfortable.
Travelling more frequently into other cultures, I believe, made me more aware of differences, as I was now the person who found himself in an uncomfortable or strange position, experiencing a The knowledge I gained through travelling is probably more authentic than what I learned in my home environment, as people try to adapt to their host country. The students I was living with, I believe, did not behave exactly as they would at home. My understanding of other cultures during childhood was heavily based on stereotypes, as my friends and I made jokes about people from other cultures. This might have been due to so-called 'war stories' (Brislin, 1986), which we learnt from friends and adults. Maturing, I realised that stereotypes had 'some truth' but are only a primitive and limited form of understanding each other's cultures (Emig, 2000); they are, however, not dangerous as long as decisions are not based on them (Brislin, 2000). Stereotypes are not prejudices as such; they facilitate the storage of information (ibid) and are used to categorise other groups of people (Guirdham, 1999). People often use them, although they know that", "label": 0 }, { "main_document": "not cover the cost of any expenditure, the amount was simply added to the loan. This obviously had a snowballing effect, with increased debt leading to increased interest payments. Military expenditure was obligatory, and effectively, 'Britain was...able to maintain a considerable army overseas at someone else's expense.' The Afghan war added It is questionable how this could come about when some areas of the nation were in the poverty trap. Chamberlain, Ibid, pp. 128-30. In 1857, 'home charges...amounted...under the East India Company's rule to about However, 'In the 20 years from 1857... India paid home charges to the amount of Hyndman, Ibid. The Crown also tagged public works expenditure to the account.
This, as well as security, was effectively the only tangible product of the tribute and accumulating interest. Even in times of famine, the government showed no compassion. Hyndman describes how, for a typical Indian, 'the first thing to be met is the revenue and local charges, the next the soucar's usurious interest; the provision of food for men and animals comes after.' He also describes the death of three hundred thousand people, despite food being exported that year. Yet still work on railways and irrigation continued: 'public works...are the greatest official panacea for famine.' The problem is that railways did not produce wealth themselves. It is more likely that the building of railway routes was maintained because it acted as an employer for British workers. In fact, India had to pay rent of On top of this was the bill for construction: 'up to 1877...a grand total of over Irrigation too was criticised, as it would not make water run uphill or produce water in times of drought. Hyndman, Ibid, p. 64. Ibid, p. 65. Ibid, pp. 65-7, 193-4. Taxation perpetuated famine in India. The British insistence on building the railways and irrigation channels saw money being used in a way that did not help the starving Indians. The revenue used was not even disposable capital: the Government accumulated a deficit of This was despite stationary revenue from India. Furthermore, Sir John Strachey believed that the poor and only the poor should provide revenue for relief. The wealthier obviously did not require relief, and so Strachey placed the burden of revenue entirely on the poor, despite it being a time of famine and scarcity. His idea was that the money spent on India should come from the savings of light taxation, rather than heavy taxation and loans. This saving idea was extended to the people: the only 'true remedy of famine and scarcity is the frugality of the people.' Ibid, pp. 68-9. Ibid, pp. 69-70. Hyndman, British policy in India was not destructive.
It would be illogical for Britain to pump huge amounts of capital into India if this were the case. What it was possible to see was the Europeanization of India, with a European army of occupation, a land policy of absenteeism and the employment of Europeans to run the state infrastructure and industry. The Europeanization of agriculture too was prevalent. There was the change from subsistence farming,", "label": 1 }, { "main_document": "Establishment of asymmetry and specification of cell fate in the C. elegans embryo are processes occurring at a very early stage of embryogenesis. Maternal effect genes play a predominant role in initiating asymmetrical divisions and certain cell-cell interactions that influence cell fate, and mutations in these genes often have detrimental results concerning the embryonic phenotype. The functions and mechanisms of regulation of maternal effect genes have been extensively studied in the last two decades, mainly by investigating the effects of genetic modifications on young The invertebrate It has several properties that made it very popular among developmental biologists as a model organism after being discovered as such by Sydney Brenner in the 1960s. Embryogenesis takes only 16 hours and can be accomplished in vitro. Furthermore, the embryos are transparent, so development can be monitored easily. Adults can be stored as frozen stocks and recover upon thawing. And because the species consists of few males and mostly hermaphrodites, one hermaphrodite individual can seed a whole population by self-fertilisation, since it produces both sperm and eggs. Also, It can be easily genetically manipulated with techniques common in developmental biology to identify important genes. Because Thus the origin of the 959 somatic cells within each adult organism is exactly known. Therefore, Cell fate is specified by two processes very early on in embryogenesis.
Firstly, there is a number of invariant asymmetric cell divisions, in which important regulatory molecules are unequally distributed along the cleavage furrow, thus creating daughter blastomeres with different developmental fates. Secondly, there are different cell-cell interactions, which have reproducible, invariant patterns in every individual as well. In cell-cell interactions, a certain cell's signal will either alter the fate of an adjacent cell, or it will trigger polarisation of this cell, resulting in asymmetric division. The majority of genes involved in early cell fate specification are maternal effect genes; so called because these genes are transcribed from the maternal genome and their products are delivered to the early embryo by its mother instead of being transcribed from the zygotic genome. And while the effects of mutations in these genes are not visible in the mother, the offspring will often have an easily distinguishable phenotype. This essay focuses on a number of these genes and their protein products, as well as the regulation of their expression. The first cell division of the P0 zygote is already asymmetrical, resulting in a bigger anterior founder cell called AB and a smaller posterior stem cell called P1. Each of the early cell divisions produces one somatic founder cell, denoted AB, EMS etc., and one stem cell, denoted P1 to P4, which is a germline precursor. The stem cell lineage always divides so as to produce another stem cell and a new anterior founder cell, as can be seen in Fig. 1: There are two autonomous processes that take place at the one-cell stage which are important in asymmetrical division. One is determination of the posterior pole of the P0 blastomere and the second concerns asymmetrical localisation of the PAR proteins, which are vital for maintaining the initially established asymmetry and cell", "label": 0 }, { "main_document": "described by Michael Porter in 1985.
They described how Porter's three strategies of innovation, quality enhancement and cost reduction could be achieved by linking them with various human resource practices such as job definition and specialization, reward, appraisal and performance management. Thus, for example, innovation requires highly creative talent in the organisation, so jobs require close interaction and coordination among individuals, whereas cost reduction requires repetitive and predictable job behaviours. While deciding how human resource practices are to be linked with business strategies, organizations generally choose from the \"six human resource practice menu\" (Schuler and Jackson 1987) that concerns different aspects of human resource management. The policies, practices and strategic issues of one of these aspects, recruitment and selection, will be discussed in the next section. The issues to be taken into consideration as far as strategic recruitment and selection are concerned can be understood well when we consider a particular organisation. Recruitment and selection of human resources certainly have a very important impact on how well an organisation can successfully realize its business strategy. Also, at this stage organisations face an important question: whether they should develop existing employees so as to fulfil vacancy requirements, or try to hire human resources from outside the organisation. In order to build a sound staffing policy, organisations develop guidelines around these two major challenges (Kossek and Block, 2000). Organisations usually draw on three strategic approaches to recruitment and selection, namely the traditional approach, staffing as strategy implementation and staffing as strategy formation, to develop employees for the successful implementation of their business strategies.
(Snow and Snell quoted in Schmitt and Borman, 1993) The traditional approach to recruitment and selection gave little or no importance to business strategy. It downplayed the link between staffing decisions and an organisation's business strategy. The aim was to find a person who could fit the job perfectly and to recruit people who could perform best in the given job. This approach was also used extensively during the First World War, when cognitive ability tests were used by the armies of the US, UK and France; soldiers were simply placed in the jobs for which they were best suited, without taking into consideration the strategies formulated by the army. This was a very successful practice and became extremely popular among organisations other than the army in a short span of time. While this approach may have benefited organisations in several ways, it is considered non-strategic because it completely ignores business strategy. Business strategy has to be taken out of the peripheral position it occupied traditionally and put in a prominent place. The two approaches of staffing as strategy implementation and staffing as strategy formation were developed thereafter, and emphasize recruiting and selecting within a strategic framework. To be successful, an organisation is forced to take a strategic approach to recruitment and selection, especially in the long term, as following the traditional approach is not", "label": 1 }, { "main_document": "management accountancy purposes. The use of the more specialized software, such as the software available from companies such as Farm Plan, does not seem to be so widespread in the farm business industry. However, this is likely to be a changing scenario, and in the three years since the survey was conducted, many changes such as the Single Farm Payment have occurred.
These changes have meant that there are now much more stringent rules and regulations, and the new agri-environment schemes will require detailed analysis of the farm business. Therefore, companies such as Farm Plan are likely to see a rise in demand for specialist agricultural software. This trend is likely to be similar across all European countries due to the Single Farm Payment and related agri-environment schemes. IT usage in rural businesses in less developed countries is also likely to increase, as factors such as traceability and animal welfare are becoming issues of increasing importance worldwide. In many countries, important aids to the farm business such as telecommunications are relatively new, and therefore the development of IT usage to the stage that the UK is currently at is likely to take a while. However, as the use of IT develops worldwide in the agricultural industry, it is going to be increasingly difficult for these countries to cope and compete without similar resources. IT is becoming an increasingly important element for all businesses. 'It affects all aspects of organizational activity and opens up many new ideas for the management of organizations.' The integration of IT into rural businesses is an important aspect for many businesses in the UK, and the trend for increased usage is likely to continue. This will partly be due to the greater concern in today's society over food safety and traceability issues and also the increasing amount of legislation coming into force regarding agricultural practices. The use of IT will be a practical and constructive way of managing information relating to food safety and traceability and also a useful record-keeping tool for legislation requirements. IT will also become increasingly important as the use of GPS and precision farming increases. This is likely to occur particularly on larger-scale farms.
There are many issues relating to the use of IT in the rural sector which will need to be addressed before all businesses adapt to using IT. The adoption of IT in the rural sector will also be much more advanced in the UK in comparison to developing countries, and therefore this factor should be considered. This could mean that in the long term these developing countries are at a disadvantage because they do not have the technology to compete with the developed world. The use of IT in rural businesses today in the UK is likely to be very dependent on the scale of the business and the turnover from the farm. This is because it is fairly expensive to implement IT systems in the business and therefore it is likely that only the larger scale", "label": 1 }, { "main_document": "It was in 1903 that Emmeline Pankhurst and her daughter Christabel founded the Women's Social and Political Union (WSPU), following around forty years of organised campaigning by female suffrage organisations in Britain (Banks 1981). After another fifteen years of campaigning, interrupted by the First World War, women were finally granted the vote in 1918 through the Representation of the People Act. It remains, however, a controversial matter whether the militant campaigns of the WSPU actually helped to quicken the vote for women, or not. In order to get a clearer picture of the tactics of the suffragettes, it is worthwhile to take a closer look at their use of the female body in violent, unconventional and often illegal ways to draw attention to their cause. The connection of British femininity with a high morality, and the idea of gender equality through historical argumentation, were common arguments for the vote in the late nineteenth century. Although the arguments of the 'constitutionalists' always sought to stay within the boundaries of middle-class respectability, they certainly incorporated argumentation based upon the female body (Holton 1998).
The most evident examples of this can be found in racist theories. Female authors attempted to present an image of a superior British race, of which women had, by necessity, always been part. Charlotte Carmichael Stopes, in her book 'The moral codes, Constitutionalist feminists increasingly began to make use of racial reasoning to support their campaign for the vote (Holton 1998). This provided the movement with a legal means of enhancing female respectability and morality in a way that was compatible and in harmony with society. However, after forty years of such campaigns, the women's vote was still nearly as far away as it had been at the outset. This realisation caused the WSPU to seek to pressurise the government, for they were responsible for the problem (Pugh 2000). From a harmony model, the dominant suffrage campaigns thence shifted to a model of conflict, bringing the movement into a new phase (Banks 1981, Holton 1998). The suffragettes, as the WSPU activists came to be known, sought to cut right through to the core of the problem by addressing the government directly. They sought to point out the inherent contradictions of the political system as it was: the partial inclusion of women into an essentially male-dominated environment (Lawrence 2000). From insisting on politicians' support in public meetings, the suffragettes soon radicalised (Vicinus 1985). They felt that the suffrage question was not dealt with seriously, and from there the WSPU leader Christabel Pankhurst set out to phrase the problem more directly: '[i]f Mr Asquith [PM] would not let her vote, she was not prepared to let him talk' (Lawrence 2001). This meant a great leap away from the Victorian feminist movement; the suffragettes sought to replace the passive, homely housewife with a campaigning activist, a political being.
In the words of the prominent suffragette Emmeline Pethick-Lawrence: 'Not the Vote only, but what the Vote means - the moral, the mental, economic, the spiritual enfranchisement of Womanhood, the release of women...' was", "label": 0 }, { "main_document": "for organic food in the literature worldwide. Organically grown products are perceived as very healthy and of good quality by Croatian consumers (Radman, 2005). This is also the case in a study by von Alvensleben (1998) (Wier and Calverley, 2002). In another qualitative study examining what motivates consumers to buy organic food in the UK, it was found that consumers purchase organic food because of beliefs that it is healthier for them and their family and also because of a lack of trust in conventionally produced food (Makatouni, 2001). Lack of trust in conventionally produced food was also the most important factor in consumers' demand for organic food in a study by von Alvensleben and Altmann (1987). Besides health- and environment-related concerns, other factors such as taste also determine demand for organic food; however, there is little consumer evidence that organic products taste better than conventional ones (Tregear Even though in some studies consumers perceive organic food to have a better taste (Makatouni, 2001), the improved taste is more likely to be explained by varieties that produce smaller quantities and are characterized by enhanced flavour (Davies Nonetheless, Meier-Ploeger and Woodward (1999) found that taste is a quite strong motive for purchasing organic food in some countries such as Germany (13-24%) and the UK (40%) (Fotopoulos and Krystallis, 2002).
Other motivations behind consumers' demand for organic food, not as strong as health and environmental considerations, include social aspects such as support of local producers and the local economy in general (Makatouni, 2001, Padel and Foster, 2005), fair trade (Padel and Foster, 2005), ethical considerations such as animal welfare (Makatouni, 2001, Padel and Foster, 2005), curiosity and additive concerns (Tregear In another qualitative study, which involved word association, consumers expressed positive responses to organic food for various reasons. About two-thirds of the respondents were positive towards organic food. In particular, 40% of them associated organic food with 'chemical-free', which was the most frequently mentioned word association. Other responses mentioned include 'natural'/'homegrown', 'healthier'/'more nutritious', 'earth friendly', 'clean'/'pure' and 'fresh' (Raab and Grobe, 2005). In general, the motivations behind consumers' organic purchases are similar across countries, and organic consumers have a certain profile, which is mainly shaped by socio-economic and demographic characteristics such as age, gender, income and education level. According to Davies (1995), organic consumers are predominantly female with a high level of disposable income. Factors like age and the presence of children proved not to be significant when examined in isolation; however, both of them had an effect when household income was also considered. So, high-income households with children were among the main organic purchasers, as were consumers aged 30-49 with children whose income allows them to buy organic food.
Another important finding of this study is that the groups of consumers who express great interest in and willingness to purchase organic food, driven by environmental considerations, do not necessarily coincide with those who have the highest actual purchasing behaviour (Davies For example, young Swedish respondents (18-25 years) expressed positive attitudes towards organic food; however, their actual purchasing activity was", "label": 0 }, { "main_document": "market competitiveness poses survival challenges for some companies. Korea's recent government policy changes, shown in increased westernization and business transparency, suggest an inclination towards more individualist ideals. The argument that countries may have moved along a particular dimension since Hofstede's study is supported by Pizam (1997). In response to neighbouring booming and large-scale economies, which highlight high-cost production and constrain export potential (Tong-soo, 2006), Korea is tackling its internal problems, to attract FDI, by improving labour market policies (About Korea, 2006a), introducing flexible work (appendix 5), raising the retirement age, and legislating age/women discrimination acts (About Korea, 2006a). Further efforts include opening job centres, the introduction of an annual pay system (Rowley & Bae, 2004) and making work insurance available to everyone (only implemented in 2003) (U.S. Library of Congress, 2005). These new policies significantly affect hospitality (especially greater female participation) and mainly the bakery sector, which needs flexible shifts and working hours to maximize labour productivity. However, despite the improvements, there are still many stiff regulations regarding employability in Korea (appendix 4).
Nonetheless, apart from reflecting greater gender equality, flexible work (appendix 5) demonstrates Korea's femininity, in which a work/life balance is crucial, both genders have the same opportunities for the same vocations, and there is a prevalence of service industry jobs (appendix 2) - Hofstede, 2001. The UK, on the other hand, is a masculine country (appendix 3), where gender roles are still rather differentiated, competition dominates and returns are impartial. As female participation in the workforce is not in itself a reflection of femininity (Hofstede & Hofstede, 2005), the UK's narrowing gap between male and female work participation is attributed to improving attitudes towards working women, their increasing qualifications and their returning to work after having children (UK National Statistics, 2005). The OECD (2006) asserts that wage differences between male and female workers relate to the amount of time women spend in the labour force, the jobs they choose and the high incidence of part-time employment (appendix 2). Further evidence of masculinity and specific gender roles is the very low number of working mothers, especially those with children under five. To adapt to Korea's feminine culture, Saint Fusion should foster cooperation and pleasant relationships between superiors and subordinates, in contrast to its UK aim of inculcating a sense of accomplishment and job recognition, typical of masculine societies (Romani, 2004). Socio-cultural challenges for hospitality businesses in Korea include a homogeneous population, predominance of the national language, social class and Confucianism (appendix 2), which refer to different aspects of culture (Usinier & Lee, 2005) and indicate high levels of UA (appendix 3) - a geographical characteristic (Hofstede, 2001). 
Conversely, the UK's high immigration levels have brought diverse languages, ethnicities, nationalities, levels of education, types of profession and religions, and consequently innovation and diversity (UK National Statistics, 2005), demonstrating considerable tolerance of difference and uncertainty (Hofstede & Hofstede, 2005). Further implications lie in how people regard social disparity and their association with power. In the UK, status and power are relatively unimportant; in Korea, however, these are differentiators of social class, linked to greater respect and formality (Usinier & Lee, 2005) and believed to bring stability through
In India the foreign exchange reserves can be sub-divided into: The members avail themselves of the IMF's financial resources by purchasing (drawing) other members' currencies or SDRs with an equivalent amount of their own currency. The IMF levies charges on these drawings and requires that members repurchase (repay) their own currency from the IMF over a specified time. Such a drawing does not constitute a use of IMF credit, as the member's reserve position is considered part of its foreign reserves and is not subject to an obligation to repay. I will be using the following set of variables in my research to determine the adequate level of reserve holdings:

ABSTRACT

The research is based on the voluminous literature available on foreign exchange and the Indian economy. It endeavours to determine what the optimum reserve holdings should be. The duration of my study is from the 1991 crisis to July 2007. Most central banks look to build up their foreign exchange reserves for different reasons. The optimal size of reserves is a question which has been troubling policy makers at various levels. Adequacy of reserves has emerged as an important parameter in judging an economy's capability to absorb external shocks. However, given the rate at which capital has been flowing in recent times, the old school of measuring reserve adequacy in terms of import cover no longer seems appropriate. Reserve adequacy measures have been broadened to include a number of parameters. Most countries do not have set targets for foreign exchange reserves, and it is very difficult to ascertain what the optimum level of reserves for a particular country should be. Different researchers have arrived at different models to ascertain the adequate size of foreign exchange reserve holdings for an economy. 
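The broadened adequacy parameters mentioned above can be illustrated with two common rules of thumb. All figures below are invented for the example and are not actual Indian data; the rules themselves (import cover and the Greenspan-Guidotti ratio) are standard in the literature.

```python
# Two common reserve-adequacy rules of thumb. All figures are invented for
# illustration and are not actual Indian data.
reserves_usd_bn = 200.0        # hypothetical foreign exchange reserve stock
monthly_imports_usd_bn = 20.0  # hypothetical average monthly import bill
short_term_debt_usd_bn = 80.0  # hypothetical short-term external debt

# Old-school "import cover": months of imports the reserves could finance.
import_cover_months = reserves_usd_bn / monthly_imports_usd_bn

# Greenspan-Guidotti rule: reserves should at least cover short-term debt,
# i.e. a ratio of 1 or more.
guidotti_ratio = reserves_usd_bn / short_term_debt_usd_bn
```

With these invented figures the economy holds ten months of import cover and two and a half times its short-term debt; in practice an assessment would combine several such parameters rather than rely on any single one.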
In the early twentieth century, under the gold standard system, the prevalent view was that the size", "label": 0 }, { "main_document": "we decided to build up stock to meet customers demand in these peak periods. It can be seen from the figures 1.1 and 1.2 below we had inventory cost almost throughout the three levels and the total at the end of 24 Initially we were only focused on meeting customers demand and ordering a lot of raw materials so had inventory costs but after looking at bank conditions we started forecasting more accurately to avoid cost of holding inventory also. Than we tried our best to do accurate forecasting but forecasting is never accurate and we had penalties, especially at the end of game in 23 I think our group under forecasted a little as overall we satisfied 88% of our customers demand and had penalties for the 12% orders we missed to deliver. To get the right product to right place at right time and for right cost it is necessary to schedule activities in an effectual manner. Because no matter how good scheduling methods are it is unlikely that supply and demand will be in accord. Supply chain then require batching of materials for efficiency. To get the full advantage a balance must be achieved with quantities that promote efficiency and to avoid disadvantages of tied up inventory and money From customer's view point the order winner for our product is on time delivery since any shortages in supply are not carried forward. Since the demand is rising which is 15% more than last year we made every effort not to miss the peak in demand where greatest profit were available and not responding late when capacity exceeds demand, increasing inventory and further loss in profitability. In an attempt to satisfy most of our customer's demand we emphasised to keep more finished goods than raw materials as the cost of holding inventory is same 0.5% of the value for both raw materials and finished goods. Capacity is the ability to produce. 
The overall aim of capacity management is to match the level of production with the level of demand. Producing less than demand means missed opportunity in terms of sales, turnover and profit; it also results in dissatisfaction among customers, as a result of which an organisation loses market share. Overcapacity means under-utilisation of assets. Therefore, capacity management is another important area of operations management that needs to be carefully worked out. We operated our production system on a 'make to stock' basis. Our total production capacity was 10,350 units per week against an average weekly forecast of 6,250 units. Running a Saturday shift was not a feasible option, in which a unit cost was Our strategy was to keep our unit cost at its lowest so that we could earn a reasonable profit from our sales. Based on this strategy we decided to produce at least 5,000 units per week running a day shift, to take advantage of low labour cost, and to keep the unsold goods in stock for a peak demand period. It kept our unit cost at As a contingency measure we also decided to run
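The make-to-stock logic described above can be sketched as a simple weekly planning loop. The 10,350-unit capacity and the 5,000-unit base shift come from the report; the demand series and the planning rule itself are invented for illustration.

```python
# Make-to-stock weekly planning sketch. WEEKLY_CAPACITY and the base shift
# are the figures quoted in the report; the demand path and the planning
# rule are invented for illustration.
WEEKLY_CAPACITY = 10_350   # units per week, as quoted in the report
BASE_SHIFT = 5_000         # minimum weekly day-shift output chosen by the team

def plan_week(demand, stock):
    """Produce at least the base shift, topping up when stock won't cover demand.

    Returns (produced, closing_stock, lost_sales); shortages are not carried
    forward, matching the game's rule that missed demand is simply lost.
    """
    produced = min(max(BASE_SHIFT, demand - stock), WEEKLY_CAPACITY)
    available = stock + produced
    shipped = min(demand, available)
    return produced, available - shipped, demand - shipped

stock = 0
for demand in [4_000, 6_250, 6_250, 12_000]:  # invented path with a demand peak
    produced, stock, lost = plan_week(demand, stock)
# In the peak week the 10,350-unit ceiling binds and the excess demand is lost.
```

The sketch makes the trade-off visible: producing above demand in quiet weeks builds stock (at a holding cost) that cushions the peak, but once the capacity ceiling binds, any remaining shortfall becomes a lost sale and a penalty.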
(2006) "House Republicans Abandon Budget Effort" If personality and a reliance upon fortune are key to perceived economic and legislative success, then a president's relationship with the media is of vital importance; perhaps more important than any legislative action he could attempt. Since Kennedy used television to craft his image with the first televised presidential news conference in Washington's State Department auditorium, presidents' actions have been closely tracked by television, radio and newspapers (and now also the internet). As the media has become increasingly hostile and cynical over the years following events such as Watergate, Vietnam and the Clinton impeachment, it is important for the president to build a rapport with the media early on to encourage them to present unfortunate news with a positive spin. Though Bush is much derided by some members of the public for saying or doing things which make him appear unintelligent, he has a very good relationship with the media. Once again fortune played a part, as he had the good luck to enter office after a president whose administration had been berated by the media towards the end of its term, with scandals about pardons and the pricey New York office space. "By comparison...[Bush] had to look good, at least for a little while." Bush also has a certain amount of skill when interacting with the media, however. He uses techniques such as giving nicknames to reporters and asking them about their family life and children. A "personable person... he does try to relate to reporters one on one." Though this may not change the way the media views his policies, if he gives press conferences early and often, and comes across in speeches as informal and amiable, then much of the (uneducated) public will look favourably upon him. Presidents also tailor their speeches and schedules to suit a media that cares more about the sound bite and the quirky news story than in-depth coverage and analysis. 
At Thanksgiving, for example, the president is guaranteed news time by the gimmick of pardoning a turkey. It could even be claimed that a president could be seen as successful even if he never had a single piece of legislation passed and the economy was in tatters. As long as he was telegenic and created enough interest for the media to write unusual stories, then the majority of the", "label": 1 }, { "main_document": "that they are both significant as we are able to reject the null (t-statistic that is greater than 1.96 with a probability equal to zero). Since attc is higher than alevelsa, I have decided to include it in my model. I am now left now with alevelsa and hrsqt. Since it is not straightforward whether to include them in the model as well, my first tentative best model, let's call it model A, is one including, ability, attr, and attc. I then ran a regression on variables, ability, attr, attc, alevelsa and hrsqt. As table 6.1 (appendix p7) shows, the t-prob of variables are very close to zero (if not zero), except for hrsqt where it is 0.0325. We then reject the null hypothesis for each of the variables ability, attc, attr and alevelsa at 1% level of significance, as well as the null for hrsqt at 5% significance level. The t-prob of the f-test is also zero meaning that we also reject the null hypothesis of joint significance. All the independent variables within the model are statistically significant, and thus it could be a potential best model. My second best guess of the best model, model B is then: Having data for some relevant dummy variables which might influence performance, the next step would be to include a dummy variable in the model above and see whether it affects the model. The dummy I chose to add is uk. With the exchange rate of the pound high, not many international students are able to afford studying here in the UK and therefore, most of them are scholarship holders, sponsored by their home government and private companies. 
Consequently, we can expect international students to perform better than local students, as international students are probably among the top students in their countries, chosen to study abroad. My third best guess at the best model, model C, is then: From table 6.2 (appendix p7), we observe that in this new regression all the previous independent variables tested in model A remain significant (0<=t-prob<=5%), and the null of insignificance for the dummy variable is also rejected at the 5% significance level. Again, the t-prob of zero in the F-test suggests we reject the null of joint insignificance. By looking at R To further improve model C, we could think about other variables that may also affect the outcome of exam performance. As Romer's experiment suggested, a higher quality of instruction may encourage students to attend more classes and, by doing that, increase their chances of improving exam performance. Such a variable could be treated as a dummy: it would take the value 1 for good quality of instruction and zero otherwise, where good quality instruction is instruction that successfully gets students to attend classes.
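A model-selection step like the one above (adding a dummy variable and checking t-statistics against the 1.96 critical value) can be sketched with ordinary least squares on simulated data. The variable names echo the report, but the data, the coefficients and the helper function are all invented for illustration; they are not the study's actual regressions.

```python
import numpy as np

def ols_tstats(X, y):
    """OLS coefficients and their t-statistics (no intercept added here)."""
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    sigma2 = resid @ resid / (n - k)               # unbiased error variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return b, b / se

rng = np.random.default_rng(0)
n = 200
ability = rng.normal(size=n)
attendance = rng.normal(size=n)
uk = rng.integers(0, 2, size=n).astype(float)      # dummy: 1 = home (UK) student
# Simulated performance in which international students (uk == 0) do better.
y = 50 + 3 * ability + 2 * attendance - 4 * uk + rng.normal(scale=5, size=n)

X = np.column_stack([np.ones(n), ability, attendance, uk])
b, t = ols_tstats(X, y)
# A |t| above 1.96 rejects the null of insignificance at the 5% level; by
# construction the uk dummy comes out significant and negative here.
```

The decision rule in the text maps directly onto the last line: a coefficient is kept when its |t| exceeds 1.96 (equivalently, t-prob below 5%), and the dummy's sign tells us the direction of the international/home difference.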
It is firmly expressed in his words that "it is as though the post-war (WWII) generations were exposed to some mysterious X-ray that permanently and increasingly rendered them less likely to connect with the community." Hence it is not entirely true to say Americans do not vote: as Miller and Shanks observed, an older cohort of individuals - the New Deal cohort, defined as those first eligible to vote between 1932 and 1964 - never wavered from its high propensity to go to the polls; rather, it is the non-voting behaviour of the new generation of Americans that should be the main concern of political analysts. Overall, the non-voting behaviour of Americans, specifically of the younger generation, stems from a lack of the knowledge and skills that arise from experience, and from a lack of incentives and benefits to vote. Strikingly, the 2004 elections served as a reversal of non-voting trends. Curtis Gans of the Committee for the Study of the American Electorate reckons that 119.8 million people voted in 2004, 14.4 million more than in the 2000 elections. An article in the Economist also stated that "..(the turnout rate) at 59.5%, the estimated figure was the highest rate since 1968 and the rate of increase since 2000 was almost twice as great as it had been in 1996-2000." It is indeed an affirmation that if the election is a close one, if the outcome seems likely to determine the course of public policy (giving voters more incentive) and if there are large perceived differences between policy alternatives (giving voters more choice), then political involvement will increase. Therefore, the rate of increase of non-voting behaviour is dependent on the nature of the election itself. 
The Economist, online edition, titled "Back to Basics", dated 4th November 2004 On the flipside, for the Americans who do vote, the basic assumption we make here is that American voters have motives for reaching decisions, that some are able to distinguish among choices and among their purported consequences, and that they have preferences over those consequences. These preferences in turn lead to decisions. We shall examine whether voters vote according to political or social groupings, adhere to conscious and subconscious group influences, or make individual choices based on issues and candidates independent of the socioeconomic structure and its influences. For the former, this trend of "social" voting is linked to the idea of partisan politics, which was more prevalent in the 1950s than in today's context. American voting behaviour was first accepted as a habitual ritual in which voters have a political past of carefully cherished psychological commitments and
Jens-Christian Svenning is a biologist by training and wrote this paper while completing post-doctoral work at the Smithsonian Tropical Research Institute in the USA. The journal of publication, However, this ecological focus is not detrimental to the usefulness of this paper to environmental archaeologists. Svenning focuses his efforts on understanding the vegetation patterns in the temperate interglacials, particularly the last Eemian interglacial and the early Holocene, as analogues for the natural vegetation in the present if humans had not interfered (the present-natural vegetation). His conclusion is that the vegetation was predominantly closed forest, but that some areas such as floodplains and chalklands had more open vegetation (ibid: 140). Svenning suggests that light-dependent species such as oak and hazel would have been able to maintain themselves in closed forests through occurrences of fire, windthrows and certain edaphic-topographic conditions which favour these species (ibid: 140). The primary method Svenning uses is a literature review of palaeoecological studies of the temperate interglacials and early Holocene. It is not clear from the paper exactly how many sources were included in the review, but there are over 160 references, suggesting a fairly thorough coverage of approximately the last 40 years. Unfortunately there is no discussion of the methods used in the source material, and the reader cannot be certain that all the evidence can be directly compared. Although pollen methodologies have been established for some time, there is no clear indication from the paper as to the range of methods and analyses used in the source materials, or whether any criteria were used in selecting the papers to be included in the review. Svenning uses the non-arboreal pollen (NAP) percentage to estimate the amount of tree cover in an area. 
There has been some debate about the use of this measure for estimating tree cover (Sugita 1999), but Svenning makes a comparison of NAP percentages with estimates of tree cover from beetles, molluscs and plant macrofossils, and shows that pollen can present a reasonable measure of forest openness (Svenning 2002: Fig.1). He admits that, wherever possible, supplementary evidence should be used to confirm pollen results, and throughout the paper he draws together evidence from beetles, molluscan remains, plant macrofossils and vertebrate fauna. This multi-disciplinary approach ensures that the conclusions Svenning draws do not stand or", "label": 1 }, { "main_document": "The active and transitive verb 'holler,' implies an almost angry desire and desperation to go home due to the onomatopoeic nature of the word. An image of salivation reinforces her sense of depravation and suffering, and is created through the parallelism of similar word and sound structures in the line; 'An me mout-top start fi water/ Me mout-corner start fi foam.' (l.29-30) The binary themes of displacement and belonging dominate the poem. The unrelenting and fierce heat of the 'broiling sun'(l.3) reflects the strength and intensity of the narrator's love for Jamaica. This is then juxtaposed with the paradoxical cooling and soothing properties of the sea. A similar technique is employed in the third stanza in comparing the disparity between the hot spice and ice. Juxtaposition and antithesis fortify the theme of division in the poem as the narrator appears divided between her homeland and England. The proliferation of exclamatory sentences and humour jars somewhat with the poem's underlying seriousness. For example, the incongruity and improbability of seeing a; 'hairy mango pon de road!' (l.28) in England, both amuses and saddens the narrator by reminding her of the divergence and distance between both places. Bennett uses mainly nouns and verbs that relate to place and movement. 
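The NAP measure Svenning relies on is, at its core, a simple proportion. The sketch below shows the calculation; the taxon list and counts are invented for illustration, and in practice both the arboreal/non-arboreal split and the composition of the pollen sum are themselves methodological choices that vary between the source studies.

```python
# Hypothetical pollen counts for a single sample. The split into arboreal
# and non-arboreal taxa, and the counts themselves, are invented here.
ARBOREAL = {"Quercus", "Corylus", "Tilia", "Ulmus"}

def nap_percentage(counts):
    """Non-arboreal pollen as a percentage of the total terrestrial pollen sum."""
    total = sum(counts.values())
    nap = sum(n for taxon, n in counts.items() if taxon not in ARBOREAL)
    return 100.0 * nap / total

sample = {"Quercus": 120, "Corylus": 60, "Poaceae": 15, "Artemisia": 5}
openness = nap_percentage(sample)  # higher NAP% suggests more open vegetation
```

The simplicity of the measure is precisely why the debate Sugita (1999) raises matters: differential pollen production and dispersal mean the raw percentage is only a proxy for tree cover, which is why Svenning's cross-check against beetles, molluscs and macrofossils is important.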
A recurrent travelling motif arises through references to roads and roaming. The repetition of 'galang' (l.24) creates a kinaesthetic image that adds movement to both her literal and metaphorical journey to discover where her home lies. The metaphor 'A dose a hungry buckle-hole me' (l.31) conveys a sense of being incomplete, as 'hole' and 'hunger' imply that she is missing something. Being away from Jamaica deprives her of a part of her identity. The repetition of the vowel sound in the word 'waan' (l.32) creates a noise similar to a wail that sounds almost like a child's cry. Like a child, the narrator looks to her family for comfort and security. Both poems contain ambiguous elements that create exploitable connotations within the language. Bolam's ambivalent use of the pronoun 'he' ensures that we are never sure who she is discussing. We can only assume it is Gillecomgain in the first stanza and then Macbeth. In Exclamatory sentences ensure that the poem ends on a sanguine note, but a sense of loss and deprivation continues to pervade the poem. Whilst the narrator feels relieved to find a reason to belong in England, the interrogative structure of the sentence 'to me wha?' (l.34) implies that she remains unsure about what exactly binds her to Jamaica. Bennett's use of the possessive pronoun 'me' in 'me Jamaica' (l.33) is somewhat ironic. The narrator implies that she possesses Jamaica, when in reality it is the country that owns her: her history and culture are embedded in Jamaica and will therefore always bind her. The importance of family, roots and a sense of belonging is also explored in Gruoch is a figure associated with female strength and influence, largely due to Shakespeare's portrayal of the ruthless and manipulative Lady Macbeth. 
Unlike Gruoch, Lady Macbeth sees her feminine tenderness as a sign of weakness, demanding: 'Unsex me here/ and
The organisation of chapters, for example "Starting off", "Scoring and Grafting" and "Social networks", reflects this processing of the data. Other ethnographies, such as Patrick's "A Glasgow Gang Observed", have adopted a more chronological structure, which can help in allowing the subjects and incidents to speak for themselves. However, no matter how data is presented it will always have been interpreted and processed by the researcher, highlighting the constant need for a researcher, particularly a participant observer, to attempt to view his/her results objectively. Taylor, A (1993), Above I have mentioned that Taylor entered the study aiming to disprove some of the conclusions and stereotypes which have developed through other academics' research and through the social stigma attached to drug users: "Against the stereotypical view of pathetic, inadequate individuals, women drug users in this study are shown to be rational, active people..." As already argued, the interpretative stage of the research may have been affected by this factor, but it could also be argued that this deductive approach may have led to the possibility of influence on the subjects of the study. This quotation from Whyte's study illustrates the point most clearly: "Now when I do something I have to think what Bill Whyte would want to know about it and how I can explain it. Before I used to do things by instinct." This type of influence was evident in a study where it took "...eighteen months in the field before I knew where my research was going." By entering the study with preconceived ideas of its direction, while often helpful in social
The most important differences between the two cases involve considerations about authority relations and issues pertaining to legitimacy. While the individual stands under the authority of the state, it does not seem that this sort of authority relation holds amongst members of a family. It could be objected that the family, when young children are involved, does possess an analogous authority relation. However, I follow Horton in that I do not think that this family exhausts the range of families that can have the obligations I am concerned with. In fact, there seems to be a difference in the obligation that young children (may) have to obey their parents, and the obligation adult sons and daughters have to their parents. In the former case, obligation seems to stem from the authority of the parents, whereas in the latter case there is an absence of any authority relation, thus the obligation cannot stem from any such relation. As I said above, I am not concerned with the case of the family with young children, as this seems unnecessarily dissimilar to the case of the political community. The difference that I am concerned with is the lack of an authority relation in the case of the family, and the presence of an authority relation in the case of the state. It is important, though, to remember that this essay is not an attempt to explain all political obligations. The whole purpose of using the analogy of the family is to attempt to discover if there are any similar obligations in the realm of the political community. So, while it is true that there is a difference in authority relations in the two cases, it is false to claim that this causes the analogy to fail. 
This claim is false because the analogy (or at least my argument from it) is not concerned with providing justification for obligations that may arise from the fact that the state stands in authority over the individuals that comprise it; rather I am concerned with providing an explanation of the obligations (if any) that we possess because we are born in a certain political community, as we are born into a certain family. Horton (2002), pages 148-149. It is also the case that, because authority and accountability can be seen to be linked (see Cunliffe and Reeve (1999)) the two cases diverge when considerations of accountability are made; so a father cannot be said to be accountable in the same way a government can, for example. A similar argument can be made for considerations of legitimacy. It is clearly the case that governments and rulers are subject (or at least could be subject or should be subject) to judgments of legitimacy. The same cannot be said, for example, of a father. Of course, a father may not", "label": 0 }, { "main_document": "we have unrealistic expectations of where a student should be at the beginning of a Masters course. Indeed, many native speakers are also incredibly unprepared for post-graduate study, and do not have the same levels of preparation as their L2 peers in terms of familiarity with the library, researching skills, or with academic journals. It is true they do not have the same linguistic difficulties, but may lack some of the academic skills already practised by and familiar to many EAP students. In Sunny's case it seems that many of her difficulties stem from problems with reading, especially her ability to identify viewpoint: a fundamental skill if she is going to choose and incorporate sources wisely. Perhaps she would benefit from such study as Badger (1999) outlined, making her a linguist in her own genre. 
This study, although extremely limited in scope, has been a useful insight into the life of an EAP student who wishes to pursue her graduate career in Britain. Sunny has successfully passed all assessment criteria and is likely to be accepted on the course she wishes to study, in International Management. Her experiences demonstrate very clearly that, even while developing very successful study skills, the time taken for a non-native student to reach Master's level can be significant. The sheer volume of new skills to be learnt, the assimilation into the life of the university, as well as the challenge of meeting deadlines, can mean that progress in linguistic terms may be slow. It is difficult to determine the implications of this finding. Does it mean that a much greater level of attention should be paid to linguistic development in EAP courses, obviously at the expense of the development of other skills? What can certainly be seen is that the individual needs of one student may differ greatly from another's. I feel, therefore, that much more flexibility should be built into EAP courses, where students can be guided into the areas they really need to study. This might have implications in terms of assessment, with fewer long assignments at the beginning of the course, but it may allow students to develop with less pressure the meta-cognitive and linguistic competence they will need. The advent of more academic e-learning materials (e.g. Cauldwell, This of course means that much will have to be done in terms of producing quality academic software, in particular for the reading and writing skills needed at this level. Returning to our individual and the main focus of our study, Sunny could now spend her time immersing herself in the reading lists for her course, as well as analysing the mode of discourse familiar to this academic community. 
She could also work through such academic course books as Bailey's Academic Writing (2003), which may help her to enrich the number of linguistic choices available to her in her future assignments and papers. Although interesting, this study cannot be said to have major implications for course design. It was extremely limited in terms of time, access to the student and tutors, and number
The restaurant must also be competent in providing the service set out by the 'service concept'. This matrix is useful as it clearly shows the factors of service quality that warrant consideration for improvement: responsiveness and competence. This also supports the customer survey that indicated the importance of queuing time to the service quality delivered. To fully understand the service offered by the Oriental Star restaurant the service process must be fully broken down; here we consider 7 transactions. This is shown by the Service Transaction Analysis (STA), which is appendix 4. Breaking down the service process helps ascertain which aspects of the process are satisfactory or unsatisfactory and can then be used to consider the "zone of tolerance". The analysis showed that customers felt that queuing for tables/food was the main negative point of the process. Messages from STA forms included The importance of dissatisfaction with one element of the service shouldn't be underestimated. According to Sasser (1978), some customers adopt what has become known as the 'incident based approach', whereby a single dissatisfying incident such as the above could, despite all other activities remaining acceptable, lead to a feeling of overall dissatisfaction with the operation. Johnston (1995a) also shows that dissatisfaction with one transaction may shift the zone of tolerance up for the remaining transactions. "This shifting of the zone increases the likelihood of the outcome being a feeling of dissatisfaction" For example a customer experiencing an unsatisfactory wait for a table may need to be delighted by later transactions (e.g. 
the quality of the food) for them to be satisfied with the outcome of the", "label": 1 }, { "main_document": "also included in the simulations using a cosine rule with the equilibrium being at 180 During the simulations the atoms move and vibrate due the interactions and thermal energy in the defined force field and within the given constraints, throughout all the simulations the temperature was assumed to be 323K, which is roughly human body temperature. This model of the phospholipids agrees well with experiment - where the area per head group is used as a metric. During this study a total of ten simulations were run. These were as follows: The first four of these had a random initial configuration of phospholipids. The runs RANDOM I, RANDOM II and RANDOM III had 1500 lipid molecules solvated in 400,000 WAT molecules (corresponding to 1.6 million water molecules.) Run RANDOM IV had 2500 lipid molecules. The remaining six simulations took an intermediate stage (called the Bicelle) of one of the random runs as a starting point. Runs BICEL I, BICEL II and BICEL III, used pure DPPC and could therefore be seen as continuation of the random simulations. The remaining runs other phospholipids were added to the DPPC such that the effect of these other molecules could be assessed. In each of the random simulations the formation of a closed vesicle was observed. It appears as though the dynamics of the formation is not sensitive to the starting structure - and in each of the simulations the vesicles formed within 200 - 300 ns. The longest simulations showed that once formed the vesicles are stable structures and remain relatively unchanged for the duration of the simulation. The simulation progresses through a number of structural stages to arrive at the formation of the closed vesicles, these stages appear to be universal in that they are observed in each of the simulations. 
Starting in a random configuration, the phospholipids rapidly form micellar structures which coalesce into threadlike structures - referred to as interconnected worms. The next phase is the formation of a single structure called a bicelle - an open structure consisting of a curved bilayer of phospholipids, with the head groups exposed and the hydrophobic tails concealed from the water. The next phase is a recognisable vesicle with a pore allowing the passage of water - finally the pore closes, leaving the closed vesicle structure. Figure 2 shows a cross-section through the vesicle showing the pore closing. The pore is hourglass-shaped. In these simulations the vesicle with a pore was not a stable state, and the pore quickly closed. Not all the lipids in the simulation formed vesicles - some remain in micelles or are found in smaller 'dry' vesicles. Once the main vesicle had formed, those lipids remaining outside the main structure stayed outside and were not observed, on these timescales, to fuse with the main structure. In all of the simulations which contained just DPPC the formation of a vesicle was observed; the vesicles in all simulations were very similar to one another, being roughly spherical and containing similar numbers of lipids both on the outer and
IT is shifting from a potential source of competitive capabilities to just a cost of doing business. It is an essential, but not a strategic, resource in the organization. Therefore, IT and business alignment no longer matters, as IT can no longer provide organizations with competitive advantages. In his opinion, the key to managing IT and business units is not to seek the advantages of alignment, but to manage defensively the costs and risks of IT investment. Several authors treat business-IT alignment as illusory, even inexpedient (Ciborra 1998, Maes 1999). Business developments are not solely dependent on IT development. Even the rigid installation of IT infrastructure might be constrained by industry standards or political requirements. This is what Arthur (1988, 1994), in economics, called self-reinforcing mechanisms within the organization. Chan (2002) argues that total business and IT alignment is complex and difficult to achieve. Earl (1996) further states that alignment is hard to achieve unless there is an understanding and shared vision within the organization, from top managers to front-line staff. However, objectives are not always fully appreciated down the line in an organization, where series of decisions are taken at various levels of management. For example, decisions about hardware, software and operating platforms might reflect an emphasis by line management on cutting costs rather than adding value. Besides, many authors (Coakley et al 1996, Ciborra 1998) question the measurability of the degree of business-IT alignment. As alignment is a continuous process that requires monitoring over a long period and handling contingencies where necessary, the difficulty of evaluating and measuring its effectiveness remains a major obstacle. Nicholas Carr's (2003) discussion of the role of IT has been widely examined. 
His idea rests on the argument that IT, which carries digital information just as railroads carry goods and power grids carry electricity, has become a merely commoditized product that no longer confers competitive advantage, and thus renders any alignment with business unnecessary. His theory of the commoditization of IT, with its electricity and railroad analogies, does however have its limitations and constraints (Brown et al 2003). IT systems are not analogous to standardized electricity or railway gauges; rather than being confined or standardized, the continuous improvements in processing power and performance have had a multiplicative effect, extending IT's reach into other areas such as biological organisms and RFID. Furthermore, IT brings about new practices and possibilities
It is also concerned with relationships between buyers and sellers, which is the key area where the selling organisation can have influence. By engaging in relationship marketing and relationship management in a similar way to Fujitsu, companies gain an insight into the internal workings of the buying company, build up relationships and thus potentially increase sales. As well as understanding the type of market, buying process and influences upon business buyers, sellers need to understand that there are a few characteristics of business demand that are not present in consumer markets, and tailor their selling strategies accordingly. For instance, business demand is often joint, in that demand for one product is linked to that of another - in Fujitsu's case perhaps selling some system hardware, and then linked with that is selling the consultancy and implementation services for that system. Also, business buying is high risk as volumes are often higher and contracts are for longer. This, of course, also makes it a high-opportunity market. Selling B2B is also resource intensive and a distinctive feature is that businesses often partner with competitors to deliver an end-to-end service to the customer and meet specifications - a feature not often found in consumer markets. Finally, it is important for B2B companies to note that whilst they are selling into businesses, demand is eventually derived from the end consumer so it is always worthwhile keeping an eye on the end consumer markets of the businesses they sell into and shifts in these markets. Fujitsu's updated appreciation of buying behaviour has fed through into its strategising and client engagement - leading to an increased focus on the customer and on meeting customer specifications. This has manifested itself in a number of interlinked ways, discussed below. Firstly, Fujitsu has moved from a product-led development programme to a client-led one. 
This has meant that the company, rather than developing a product and brochure and giving these to the salesforce, is spending considerable time and resource with customers, understanding what they want and developing bespoke solutions in the eyes", "label": 1 }, { "main_document": "octonions O is non-associative so its Dickson double algebra is not a normed division algebra. It is important to remember that a normed division algebra is not necessarily a division ring, or any kind of ring. The fundamental properties of rings include associativity under multiplication. The octonions O is a normed division algebra which is non-associate. Therefore, it is not a ring. The next two theorems, due to Fronebius, are of great interest to us. It enhances the link between the normed division algebras and division algebras in general. Theorem 8. Proof: Let Let Multiplying on the left by b, we have b(ea) = ba = (be)a so be = b and A has an identity. Elements ce form a subalgebra R which is isomorphic to R. If If we say dim The general mth-degree polynomial can be written as a product of linear and irreducible quadratic polynomials. We may replace We see that a cannot satisfy a linear equation since e and Therefore, If so Let Then, unless We assume that We choose So Let We also have Then So We have Thus, any product is a linear combination of the elements so It is isomorphic to H by the following. We must show that Since c and c belong to distinct complex subalgebras they are linearly independent, so we need only show the same for To prove, we suppose Multiplying on the left by c gives Rearranging, and then taking the difference of the two expressions gives But it is impossible to find a real So the elements form a basis. We have Q\" = H since we can see the multiplication rules we examined above give the multiplication table of H. If Take c not in so Then we see that But ix = c is not solvable for So we have a contradiction [KS89]. 
More generally we have a proof that A is a normed algebra, which implies the result we are looking for by Hurwitz's theorem. Theorem 9. Proof: The ideas upon which the previous proof was based also hold for an alternative division algebra. The first thing we realise is that this theorem can be stated as follows: Every alternative division algebra is isomorphic to a normed division algebra. Then the rest follows by Hurwitz. So we need to define an inner product such that We define a conjugation operation on If If a and e are linearly independent, then Let also We see Then Then This lets us deduce The conjugate of We define an inner product We see Then The result follows by Hurwitz. We now explore another way to classify associative division algebras that is, perhaps, not so direct. The Brauer group arose out of an attempt to classify division rings. To get started we need a few definitions and redefinitions. A ring We now redefine the associative division algebra. An F-algebra A is known as a division algebra if the units of A are precisely the non-zero elements A\{0} [S99]. As we know, the If A is an algebra over F, then so
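For orientation, the classification theorems invoked in this passage can be stated compactly; this is a standard summary added for readability, not part of the essay's own proofs:

```latex
\begin{itemize}
  \item \textbf{Frobenius.} Every finite-dimensional associative division
        algebra over $\mathbb{R}$ is isomorphic to $\mathbb{R}$, $\mathbb{C}$,
        or $\mathbb{H}$.
  \item \textbf{Zorn.} Every finite-dimensional alternative division algebra
        over $\mathbb{R}$ is isomorphic to $\mathbb{R}$, $\mathbb{C}$,
        $\mathbb{H}$, or $\mathbb{O}$.
  \item \textbf{Hurwitz.} Every normed division algebra over $\mathbb{R}$ is
        isomorphic to $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$, or
        $\mathbb{O}$, and so has dimension $1$, $2$, $4$, or $8$.
\end{itemize}
```

The strategy of Theorem 9 is visible in this summary: once an alternative division algebra is shown to carry a compatible norm, Hurwitz's classification finishes the argument.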
The everlasting Clara, who, though unable to save her granddaughter from rape, is able to convey to her the strength to continue and to 'write a testimony that might one day call attention to the terrible secret she was living through, so that the world would know about this horror that was taking place parallel to the peaceful existence of those who did not want to know'. Once again the reader can see the importance of the written word of the female, as without Clara's mystical encouragement and her physical notebooks Alba may never have had a chance to write her story. For Alba, the planning of the story gave her a release from the terrible ordeal she was living through, and when she was released her writing managed to change her life (Isabel Allende). By looking back at the past Alba finds out about the rape of Pancha Garcia and is able to see how things link together through time: 'I am beginning to suspect that nothing that happens is fortuitous, that it all corresponds to a fate laid down before my birth'. Eventually through her writing Alba is released from her hate and need for revenge: 'I seek my hatred and cannot seem to find it. I feel its flame going out as I come to understand'. Alba's realization that revenge is not the answer is something Allende has always espoused: 'we can't torture the person who has tortured us because then it's a never-ending chain of fear and hate and anger and violence'. Along with Alba's change of heart at the end of the novel comes her final release from the bonds of pain and loathing: 'I have to break that terrible chain. 
I want to think that my task is life and that my mission is not to prolong hatred but simply to fill these pages' Again this idea of breaking a cycle of hate, anger and violence is another ideal that Allende holds close, and an ideal she expresses through her writing, 'I write for those who want to share the obligation of building a world in which love for our fellow men...will prevail' Isabel Allende writes with love and for an understanding, her ideas are mirrored through her", "label": 1 }, { "main_document": "the poor integrated ones. In particular, if the developing countries which integrated go down the vicious circle of underdevelopment, then surely developing countries which do not integrate experience greater level of growth as their economic surplus is not transferred to the developed countries and they don't suffer from unequal trade relations. However, that would be an odd conclusion, given that sub-Saharan Africa's economies are so comparatively isolated from the rest of the world economy and yet they experienced the lowest levels of GDP growth with some of them being negative. (Broad 1996: 8) If an explanation of underdevelopment relies purely on the global economy to provide causes for it and thus limits the explanation to make an account in terms of factors on the international, it fails to account for lack of development in countries which are not integrated or are have tried to avoid the dependency relations risks. Given the examples of success of countries which have integrated, e.g. China and India, the point could be taken even further and it could be argued that Sub-Saharan Africa suffers not from globalisation, but from lack of it. 
Special report: Global Economic Inequality. Dependency theory introduces an interpretation of relations in the international economy characterised by dependency, which conditions the development of one country on another and thus explains inequality through the concept of underdevelopment. The global regimes on trade, finance and technology transfers have a bias which favours developed countries over developing ones and leads to underdevelopment in the latter. This explanation, however, leads to a number of implications which are not confirmed by developments in the world economy. The failures of the import-substituting industrialization strategy, and the immense success of export-oriented growth, which relies on foreign trade and capital for development, run contrary to the dependency argument. Moreover, as the prices of manufactured goods as well as the prices of commodities have been decreasing, the unequal-exchange argument is less convincing, as it fails to predict this. Also, the theory fails to account for the least developed countries in the Sub-Saharan region, as these are not integrated into the international economy. The argument is further undermined by the decreasing gap between North and South. Still, dependency theory does account for the negative effects of unstable global finance and investment on the developing countries. However, this does not lead to development in the developed countries, as the investors also lose in a financial crisis. It seems that the theory can only provide a partial explanation of inequalities in the global economy at best. By concentrating on the global level as a source of explanation it ignores the effect of the domestic level on prospects of development. The global economy offers opportunities and risks, where dependency theory only tries to account for the risks and even then overestimates their potential and effect.
It is clear that whichever scenario is true, the Theban Magical Library contains a much higher proportion of Greek elements of the Osiris myth. This corresponds to Dieleman's theory that the demotic spells were copies from Greek spells with traditional Egyptian elements incorporated within the text, dating from the early Roman period Dieleman 2005: 294 The Osiris myth in the Theban Magical Library is not employed very differently from older Egyptian texts. All Egyptian texts utilize the myth as magical The small difference is that one spell has the magician assisting Set to defeat Osiris Frankfurter in Meyer & Mirecki 2001: 458 and 461 PGM IV: 154-285 Let us now turn to Plutarch. Richter claims that Plutarch uses the Osiris myth to create a Greek cultural superiority to Egyptian lore I am indeed happy to accept Richter's thesis generally, however, one particular point warrants further examination. He states: Richter 2001: Richter 2001: 207 I would argue that it is not all things Egyptian that Plutarch consistently rejects but instead the accounts given by his Let us examine some of the same extracts used by Richter to seemingly prove the opposite. Plutarch dismisses accounts which equate a god to one particular body or natural phenomenon This is precisely what Diodorus and Manetho postulate Plutarch see above note 32 The other major section that Richter uses is about sacred animals. Two points Plutarch rejects are: Both of these accounts can be found in Diodorus (I: 85-6). Most of the extracts where Plutarch rejects an account have either a parallel quoted in Diodorus (and sometimes in Manetho) and/or contain a citation to Manetho Given that Plutarch already wrote a reaction to the work of Herodotus prior to this work; I feel it is not unjustified to suggest that the See Plutarch I would like to conclude the essay by demonstrating the difference between Plutarch and the Theban Magicians concerning the use of the Osiris myth. 
The key difference between them is a conceptual one. While the Egyptians emulate the behaviour of the gods in myth because myths Nor do I believe that he claims that Greek philosophers are the only ones capable of realizing Greek philosophical ideas as Richter suggests, since Plutarch on one occasion even praises Egyptian lore for containing ideas similar to Plato Plutarch Plutarch's treatise contains a version of the Osiris myth that at best matches native Egyptian tradition perfectly and at worst uses episodes present in other Egyptian stories. I would argue that his ordered account of the myth is not highly damaging to the study of Egyptian mythology, when placed among other native legends. It also appears to be a reaction against earlier Greek accounts concerning Egyptian religious philosophy, while at the same time utilizing Egyptian mythology in only a marginally different way from the Theban Magicians and their Egyptian predecessors, advocating his Neo-Platonist theories concerning the divine and the nature of the cosmos.", "label": 1 }, { "main_document": "Hadjiloucas et al. A pump-probe spectroscopic device was used to map the potential energy(during non excitation state) surface of reactions and an evolutionary meta-algorithm in MATLAB (under LABVIEW environment) was applied . The evolutionary landscape generated (keeping in mind different trajectories followed during evolution of reaction) was searched with the EMA for all possible pulse shape leading to optimal product formation . When a global minima was reached a feedback loop in place was able to provide input to EMA for successful product formation . S Hadjiloucas A Shaver G Walker J Bowen S O'Leary ; Feedback control of femtosecond laser pulse shapes using LABVIEW . SSE . Uni. Reading < The software module had two portions with the first application used the GA to match target waveform while the second solved molecular control problem for a four level system . 
The experiment concluded that it was possible and feasible to observe at least one order of magnitude improvement in the convergence rate using the designed EMA in both the waveform-matching and molecular-control problems. The National Physical Laboratory's recent detailed report is the culmination of work by several academics and scientists across the industry. The report provided several recommendations covering areas such as high-speed electric pulse generation, generating tailored terahertz sources, molecular reaction dynamics and the development of femtosecond enhancement cavity technology for applications, among others. The report also compared and contrasted pulse-shaping control strategies and explained their suitability for different kinds of experiments. H. Margolis, M. Harper, S. Lea, NPL Report DEM-EM-010 (May 2006). The relevant recommendations of the report are summarized below with the corresponding programmes they should become part of: NMS Electrical Programme: Electrical Pulse Generation and Measurement - To calibrate high-sensitivity devices, NPL will be applying optical pulse shaping to its existing sub-picosecond electric pulse facility to investigate and develop electrical pulse shaping on a very small scale in order to meet such challenges. THz Project: Terahertz Radiation - Pulse shapers will potentially be used for tailoring terahertz sources; this could be achieved by broadening the accessible spectral bandwidth or by enhancing the output at selected terahertz frequency bands of specific interest for end-user applications. NMS Photonics Programme: Optical Communications - The existing ultra-short optical pulse measurement infrastructure could be extended to provide a robust optical pulse measurement system and pulse-shaping capability in the 1550 nm band. This would enable NPL to characterize the soliton transmission properties of conventional and novel communications fibres for long-haul data transfer. 
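The waveform-matching problem mentioned above can be illustrated with a minimal genetic-algorithm sketch. This is an illustrative toy only, not the MATLAB/LABVIEW evolutionary meta-algorithm described in the cited work; the target waveform, population size, mutation rate and all other parameters here are assumptions chosen for demonstration:

```python
import math
import random

# Assumed toy target: one period of a sine wave sampled at 32 points.
TARGET = [math.sin(2 * math.pi * t / 32) for t in range(32)]
POP, GENES, GENS = 40, len(TARGET), 200

def fitness(ind):
    # Negative mean-squared error against the target waveform (higher is better).
    return -sum((a - b) ** 2 for a, b in zip(ind, TARGET)) / GENES

def mutate(ind, rate=0.1, step=0.2):
    # Perturb each gene with small probability.
    return [g + random.uniform(-step, step) if random.random() < rate else g
            for g in ind]

def crossover(a, b):
    # Single-point crossover of two candidate waveforms.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 4]  # keep the best quarter unchanged (elitism)
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(round(-fitness(best), 4))  # residual error; decreases over generations
```

The real systems discussed in the report close the loop experimentally, using a measured signal rather than an analytic error as the fitness function.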
NMS Programmes including Time and Frequency, Photonics and Measurement for Biotechnology: Femtosecond Optical Pulse Delivery - Further experimentation and study are needed to optimize the propagation of femtosecond pulse trains in hollow-core fibre. Quantum Metrology Programme: Chemical Reaction Dynamics - UK university groups should collaborate and share their expertise in developing techniques for the measurement of femtosecond time-resolved molecular dynamics, using pulse shaping and manipulation to achieve quantum control. Multiphoton microscopy: pulse
Use the extracted PCR products as templates to rerun the PCR and check for the desired restriction site. If a locus contains the restriction site, have that particular PCR product sequenced and use the resulting sequence to design primers. Also, it might be worth trying out the CRED-RA method (Cai 1996), as this method is very similar to ISSRs but uses a very different type of primer. As the results of the real-time PCR were less than satisfactory, it would be useful if further experiments were carried out to establish the causes of the problems. The real-time PCR can be rerun with the product from a normal PCR as template instead of genomic DNA; as product from the normal PCR would match the primers perfectly, this would show whether the problem was caused by a mismatch between the primers and the genomic template. It would also be useful to evaluate how effective this method is at detecting methylation polymorphism. Products from any PCR are free of methylated bases, and methylation can be carried out. By mixing methylated and non-methylated PCR products, we can obtain different concentration ratios of methylated and non-methylated templates, and these mixtures can be used as normal samples for restriction enzyme digestion followed by real-time PCR. As we know the starting concentration of the methylated template, the effectiveness of this method can be evaluated. Methylation-sensitive amplification polymorphisms (MSAP) (Chakrabarty 2003 and Ruiz-Garcia 2005) are commonly used genetic markers to analyse DNA methylation. In comparison with our novel method, MSAP markers cover the entire genome, no specific primers are required, and the technique is very often used for non-modelled plants or plants with very few sequenced genes. However, MSAP also has some drawbacks. It is
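The proposed mixing experiment can be sanity-checked numerically. Under the idealizing assumptions that only methylated templates survive a methylation-sensitive digestion and that amplification proceeds at a constant per-cycle efficiency (both assumptions for illustration, not measured values), the expected Ct shift relative to an undigested control follows directly from the surviving template fraction:

```python
import math

def expected_ct_shift(methylated_fraction, efficiency=1.0):
    """Expected Ct shift after methylation-sensitive digestion, relative to
    an undigested control. Assumes only methylated templates survive and
    that each cycle multiplies template by (1 + efficiency)."""
    if not 0 < methylated_fraction <= 1:
        raise ValueError("fraction must be in (0, 1]")
    # Fewer surviving templates need proportionally more cycles to reach
    # the detection threshold: shift = -log_base(fraction), base = 1 + eff.
    return -math.log(methylated_fraction, 1 + efficiency)

# At 100% efficiency, each 2-fold drop in methylated fraction costs ~1 cycle.
for f in (1.0, 0.5, 0.25, 0.1):
    print(f, round(expected_ct_shift(f), 2))
```

Comparing the observed Ct shifts of the known mixtures against this idealized curve would indicate how far the real digestion and amplification depart from the assumptions.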
However, by opening up it economy the low-cost producers can expand their level of production through exporting and they achieve economies of scale and economies of specialisation. Resources from the relatively inefficient non-trade sector may reallocate to the higher productive export sector (Giles and Williams, 2000). Further, free trade may generate new activities in line with the business opportunities existing in the world market (Cuadros et al., 2004). Dynamic gains from trade are to arise from domestic industries producing more efficiently as they face foreign import competition (Bigsten et al., 2004). The firms have to improve their production processes as well as the quality of their products in order to remain competitive (Cuadros et al., 2004). The beneficial effects of opening a small developing country to free trade are contentious as also the inconclusive evidence on the positive impact of openness on growth shows (Cuadros et al., 2004 and Srinivasan and Chenery,1989 and Rutherford and Tarr, 1998). There are several concerns regarding the opening up of a small developing country. Firstly, the opening of the economy to world trade would make a small developing country more vulnerable because one of the major mechanisms underlying business cycle transmissions is trade. Output and price shocks are transmitted, generally from an importing country to an exporting country (Mansor, 2003). Secondly, the opening up of the economy would inevitably have to take place at a relatively early stage in the economic development of a small developing country. It may well be that after opening the economy a trade deficit arises. In order to avoid inflation the consequent net flow of money into the economy would have to be absorbed and invested by economic structures, which at that stage might not have been developed yet (Krugman, 1999). 
Thirdly, domestic industries may actually suffer from opening the economy to foreign competition, because of the specific higher costs they face at home. These put them at a dynamic disadvantage when competing against larger countries' firms. The costs are partly directly related to higher production costs by unit of output because of the small scale of the firms, although this disadvantage can be partially corrected when accessing export markets. But there are other costs that are related to the inadequacy of supply and the higher cost of non-tradable domestic inputs including finance services and skilled labour. Finally, there are higher transaction costs, especially in small islands or land- locked countries, as well as higher \"unit\" cost of public goods because of the non-divisibility of most public infrastructure (Cuadros et al., 2004). As a result, domestic producers and infant industries may be destroyed by import competition, and production may remain limited to its traditional sector of production which is the primary goods sector. The dependence on exports of primary products and the resultant instability in export earnings leads to instability in GNP (Srinivasan and Chenery,1989). The latter increases risk and instability of investments in production of the local economy , and it is investments that are seen to be a major source of growth. Ayhan Kose", "label": 0 }, { "main_document": "Both the US and Canada enjoy high government stability (Appendix 1) (World Bank, 2007). Regarding to the promotion of competition and economic efficiency, US antitrust laws apply equally to both the foreign and domestic corporate sectors, however in Canada these are less specific and involve less industries (AWWA, 2007). US corporate tax rates are lower compared to those in other industrialised countries (Appendix 1), however, different states and municipalities often assess special taxes on hotel and motel lodging (Country Commerce, 2006). 
The same rate is currently higher in Canada (Appendix 1), however government is planning to lower it by 2010 (Country Commerce, 2006), therefore creating favourable conditions for the hotel sector's development (IMF, 2006). Both governments offer a range of incentives, such as tax exemption or deduction, loan guarantee and employee grants (Appendix 1), though these are mainly industry and regional specific, and exclude the hotel sector (Country Commerce, 2006), giving less support and posing barriers to its development (Appendix 2). With a liberal trade regime, most industries in the US are open to foreign investors and there are comparatively less barriers for these to enter the market (Country Review, 2007). However, in comparison, Canada has relatively stricter policies regarding foreign investment with much less foreign-owned companies (Country Review, 2007) (Appendix 1). Employment and labour legislation, but especially equal opportunity laws, predominate heavily in both the US and Canada, however, in the former there are generally no contracts between the employer and the employee (Country Commerce, 2006), justifying high labour turnover. This has always posed an issue and a source of conflict in the hotel sector. Further unrest comes from the threat of terrorism. Hotels are known targets and with its danger growing every year, this region, and especially the US, remains a prime target of terrorists (Cetron, 2006), which may possibly affect the sector and popularity of the region. The North American economy has maintained stable rates of growth over the past two years (IMF, 2006; Appendix 1), mainly due to business non-residential investments, and personal expenditure which, nonetheless, have been counterbalanced with decreased investment in residential construction (BEA, 2007; BBC, 2007). Personal expenditure was particularly significant in the US with rising GNI levels and per capita also enhancing the strength and power of both economies (Appendix 1). 
Rises in services' gross domestic product and personal disposable income prove the demand for accommodation and food services (Appendix 1). Added proof of the industry's high and economic contribution is the increase in US international tourism arrivals and receipts (UNWTO, 2006) (Appendix 1). Though inflation has increased (Appendix 1), it is being controlled by both countries and maintained within targets so to ensure price stability, provide opportunities for and reflect economic growth (Bank of Canada, 2007). This control reassures the hotel sector that prices and wages will not be increased in face of domestic financial pressures. This confidence reinforces lower inflation, allowing for long-term planning and ensuring that investments will not lose significant value over the years (Bank of Canada, 2007). The Canadian dollar's revaluation may constrain the country's export potential (rising prices", "label": 0 }, { "main_document": "any troops for fighting the \"war on terror\" in the Middle East. According to Abuza, while Megawati was being pushed alike by the parliamentary leaders and terrorist groups like Laskar Jihad, to take a stringer stance against the US, other moderate Islamic organizations like the Nahdlatul Ulama (NU) and Muhammadiyah were rather supportive of the war. However, NU's reason for supporting the US further reinforces the argument I have been making here. This organization advocated supported the US -led \"war on terror\" lest \"[s]olidarity with Muslims in Afghanistan\" damage their national interest (Abuza 2003; p.192). Thus it was not \" 'spontaneous' consent\" but fear of losing bilateral benefits that instigated the moderates to support the US-led \"war on terror\". 
However, not through influencing government decision-making alone, the peoples of these countries also expressed their resistance to the US-led \"war on terror\" through direct protests, in the form of the rising appeal of Jihad against the \"war on terror\", for instance. There was major unrest in Indonesia and Malaysia, when the US declared war on Iraq. The growing intervention of the US in the Philippines was also opposed to by the people, for fear of it further aggravating the problem of maintaining peace in the country by instigating the Muslim population in southern Philippines (Capie 2004; p.236).While the \"war on terror\" was anyways being perceived as \"anti-Muslim\" by many in the region, the war on Iraq confirmed it (Abuza 2003; p.231). It was this increasing anger of the people that enabled (and is still enabling) terrorist organisations to recruit more easily and expand their diffused networks, to continue functioning 'smoothly', even as the counter-terrorism machinery in these countries led to the hampering of their operations. Thus, the missing \" 'spontaneous' consent\" of the people was manifested in their \" 'spontaneous' consent\" to terrorism, perceiving it as the only way of making the US \" \"taste\" the humiliation and injustice Muslims feel the world over\" (Abuza 2003; p. 231). It is therefore clear from the discussion above that the governments and people of Indonesia, Malaysia and the Philippines did not \" 'spontaneous[ly]' consent\" to the US' growing power and control over the world, which the \"war on terror\" exemplifies. While this sub-section dealt with the manifestations of this resistance, the next shall analyse the reasons behind it. Osama bin Laden's \"fundamental denunciation concerns US hypocrisy in the world arena\" (Wallerstein 2003; p.199). 
Harvey identifies the unilateralist policies adopted by the US, especially in the \"Bush II era\", as the basic reason for the decline in US hegemony The two key terms here, 'hypocrisy' and 'unilateralist policies' sum up the argument that follows. The \"war on terror\" exemplifies these features of US foreign policy. It has thus rendered US power illegitimate in the eyes of the people and governments of the region I am concerned with here. I am using the term hegemony here because Harvey too differentiates hegemony from domination. These countries were especially incensed at the change in the US' stance vis- Disclosures of the abuse of prisoners at the \"Abu Ghraib prison", "label": 0 }, { "main_document": "The receipt of solar radiation can be broken down into two layers: a more general application of radiation received from space and the appreciation of the effect of the earth's atmosphere, among other factors, on the surface heat budget. Solar radiation is our main source of energy here on earth. Many factors can affect how much of this radiation we receive but Figure 1 represents a good approximation of the effect within the earth's atmosphere (see appendix). Firstly however we will ignore the effect of the atmosphere and look at how we primarily receive radiation from the sun. To fully understand solar radiation we first need a brief description. The sun is continually shedding part of its mass by radiating waves of electromagnetic energy and high-energy particles into space. This constant emission represents all the energy available to the earth (with the exception of radioactive material decay here on earth itself). The sun behaves virtually as a black body i.e. it absorbs all energy received and in turn radiates energy at the maximum rate possible for a given temperature. Solar radiation is commonly divided into various regions or bands on the basis of wavelength. Ultraviolet radiation is the part of the electromagnetic spectrum between 100 and 400 nm. 
It is, in turn, divided into three major components (Fig. 7). For solar radiation about 7 per cent is ultraviolet radiation, 41 per cent is visible light and 52 per cent near infra-red. Solar radiation is very intense and is mainly short wave radiation. We will first assume that all the solar radiation is available to earth and analyse the receipt of solar radiation ignoring the effect of the atmosphere. The amount of energy received at the top of the atmosphere is affected primarily by four factors: solar output, the distance from the sun to the earth, the altitude of the sun and day length. Here is a brief outline of each: The total solar output to space is 3.84 x 10 This is what defines our seasons. So far, we have described the distribution of solar radiation as if it were all available at the earth's surface. This is, of course, unrealistic because of the effect of the atmosphere on energy transfer. It is important to understand some basic transfers of heat energy to fully appreciate the receipt of solar radiation and its effects at the earth surface. Heat energy can be transferred by three mechanisms: Radiation: Electromagnetic waves transfer energy between two bodies, without the necessary aid of an intervening medium, at the speed of light. This applies to solar radiation as it travels through space. Radiation entering the atmosphere may be absorbed in certain wavelengths by atmospheric gases, but most short wave radiation is transmitted without absorption. Conduction: By this mechanism, heat passes through a substance from a warmer to a colder part through the transfer of adjacent molecular vibrations. Air is a poor conductor so this type of heat transfer is negligible in the atmosphere but important on the ground. Convection: This occurs in fluids that are able to circulate", "label": 1 }, { "main_document": "applied to natural entities. 
The damages can be measured by the cost implied in restoring the environment's wholeness as it was before humans injured it Stone (2005: 21-23) Stone does not advocate for an extreme and unrealistic situation where no action could be undertaken in relation to any natural entity. The 'Co-existence of man and his environment means that For instance, pollution can occur only temporarily, when a social need is really high Stone (2005: 26) Stone finally gives the fundamental reason why it is not inappropriate to apply the language of rights to the environment. The language of rights does not limit itself to the expression of rules. It is endowed with a meaningful force that pervades our thoughts and social and legal rules It is important because the new ethics we need in order to give an intrinsic value to the environment will emerge only from a set of new attitudes, which arise from a different way of thinking and seeing the world. This is a long process which should not be started from the outset by giving an intrinsic value to nature in a way that would not be justifiable, but by giving manageable rights to natural entities. Stone (2005 : 31-32) The aim of this essay was to show how possible it is to gradually change the way we traditionally think the world. I considered the possibility of giving rights to nature and analysed on what grounds. I tried to justify it on a moral then on a legal basis. I finally opted for a 'weak anthropocentric' approach which does not require the abandonment of the rational framework, essential for human thought, but its adjustment from its pedestal into a rationality that considers life in harmony with nature as a fundamental value.", "label": 0 }, { "main_document": "the observation of single cases to the formations of generalisations and that observations should be objective and verifiable. 
If it is to be believed that logical positivism did influence behaviourism then there is even more evidence that the theories of logical positivism are still used as a foundation of psychological research today. Not only as a foundation, but work carried out by behaviourists such as Skinner (1938) provide modern psychologists with principals such as reinforcement schedules that appear to be indisputable. However, the most fundamental difference between the research carried out in the present day when compared to research carried out during the time of the logical positivists and behaviourists is in the aim of research. Nowadays, thanks to the work of Carl Popper (1959), a scientist's aim is to falsify a statement in order to verify it, the complete opposite of the aims of the logical positivists. It is due to this fundamental difference in attitude towards research that forces me to conclude that modern psychologists are not logical positivists. Despite logical positivisms' obvious influences and impact on scientific research both in and out of the traditional natural sciences, modern psychologists do not use the same method of verification of statements and theories. Modern psychologist also cannot subscribe to the linguistic constraints that logical positivists placed on themselves and it is now acknowledged that constructs that are not necessarily verifiable through experience (as defined by the logical positivists) do heavily influence the behaviour of individuals and therefore must be recognised in research.", "label": 1 }, { "main_document": "As Michael Doyle points out: \"There is no canonical description of Liberalism\" The lack of a single voice is a trait common to all theories of International Relations. Despite this, there are 'major conceptual ideas' which allow Liberalism to distinguish itself as a separate theory of International Relations. 
The two major conceptual ideas which form the normative basis and ultimate goal of all strands of Liberalism, (the identification of which can be supported by John Gray The melioric principle that human beings and the life of humans can be subject to improvability. 2. The emphasis on individual rights and freedoms as a means by which improvability can progress. However, as with all theories of International Relations, normative goals require a method of implementation. For Liberals, this implementation is based on the major conceptual idea of working within the present contextual and structural framework of International Relations in order to promote reform. Doyle, M. W. (1986) 'Liberalism in World Politics' 4 p.1152 \"There are two constitutive elements of Liberalism: 1. The commitment to the unique rational authority of liberal principles. 2. The Meliorist philosophy of history where liberal institutions are claimed to be the only ones within which human improvement can be assured.\" Gray, J (1995) \"...since the human race is constantly progressing in cultural matters (in keeping with its natural purpose), it is also engaged in progressive improvement in relation to the moral end of its existence\". Kant, I. as quoted in (ed.) Reiss, H. S. (1991) \"... I return to the emphasis of liberalism on human action and choice. Liberalism incorporates a belief in the possibility of ameliorative change facilitated by multilateral arrangements.\" Keohane, R. O. (2002) This distinction within the major conceptual ideas of Liberals, between the normative basis and its implementation, is key to the argument of this essay. Through an analysis of the issues listed below, it will be argued that the goals of Liberalism are incompatible with the methods by which it aims to attain them. The institutionalisation of individual rights and freedoms as a 'culturally specific' western, Liberal doctrine as evident in institutions such as the UN. 
The limits of Liberalism as a result of its inherent contradictions and the constraints imposed on it by its relationship with capitalism. A common criticism of Liberalism hinges on the view that Liberalism's emphasis on individual rights and freedoms is a conceptual idea which cannot be universally applied to all cultures and corners of the world. For example, the incompatibility of Confucianism and Liberalism has been highlighted by political leaders in East Asia who believe that \"confrontation, individualism and moral decay...characterises Western Liberalism\" Tim Dunne describes the main challenge of Liberalism: to \"preserve the traditional liberal value of human solidarity without undermining cultural diversity.\" However, there is a vast quantity of evidence to suggest that respect for the individual and a will to improve the lives of human beings is not an alien goal to non-western cultures. Here, a defence of the applicability of these principles proves useful in order to identify the true cause of controversy surrounding this", "label": 1 }, { "main_document": "the many ways that can be used to tackle unemployment. As it has been noted previously, both market monopoly or union monopoly is not very good for an economy as both firms and unions will demand higher wages which will cause higher unemployment. However if the monopoly is reduced or if firms and workers realize that exploiting their monopoly power will only produce unemployment and so persuade them to wages and prices in a way that will cause an equilibrium in the economy, in effect help the economy by decreasing the unemployment rate. Another way in which unemployment problem can be tackled is by active labor spending. These refers to a range of policies that the governments uses to boost employment and reduce unemployment. 
There are a number of way in which the governments tries, such as helping unemployed's job search, improving information flows about job availabilities, helping individuals with application forms and interview techniques, offering retraining, offer loans to individuals who want to start their own businesses, subsidize firms that hire those who have been unemployed for a long time and so on. Baylis and Smith (2002) In conclusion, reasons for unemployment include, greater monopoly power among firms, high labor taxes, union structure (labor unions, union membership, labor union coverage), high levels of unemployment benefits and its duration (how many months can an unemployed claim benefits), a large proportion of long-term unemployed, substantial regional variations in unemployment, labor mobility, demand shocks (i.e. real interest rate), employment protection laws, absence of active labor market policies and coordinated wage bargaining. European Unemployment since 1980 has been relatively high compared to other countries. Mainly, the 'Big Four' countries still face high unemployment is because they have failed to tackle unemployment, and they still exhibit the factors which cause unemployment, like high unemployment benefits, barriers to labor mobility and etc. Whereas most other economies in Europe has recovered from the 1980's unemployment crises. In order for the 'Big Four' to decrease it's unemployment level, it needs to re-establish those factors which effect unemployment.", "label": 0 }, { "main_document": "orientation of the elliptical principal axes within a single laser pulse. Brixner and his coworkers also demonstrated the generation and complete characterization of polarization-shaped femtosecond laser pulses (Brixner et al., 2002). In this experiment, the light polarization, the temporal intensity and the momentary frequency were arbitrarily changed within a single pulse. 
The generation and analysis of polarization-shaped femtosecond laser pulses was then combined with the optimization algorithm within a learning loop and the experimental feedback (Brixner et al., 2003). This experiment uses second-harmonic generation (SHG) to serves as a feedback signal in the learning algorithm. An evolutionary algorithm is used to iteratively improve the phase-modulated and polarization-modulated laser pulses. Two optimization strategies were compared in this experiment: phase-only shaping and polarization shaping. The optimization results of polarization shaping are better than that of phase-only shaping. The experimental implementation of shaping of femtosecond polarization profiles within an adaptive learning loop raises the possibility of generation of femtosecond light field in which the scalar and the vectorial properties are optimised automatically. This methodology can be also applied for adaptive polarization quantum control experiment with the feedback signal taken from quantum system itself. There is a large amount evidence of the progress in using evolutionary algorithms for optimal quantum control in order to control the molecule in chemical reaction. Feedback quantum control of the electronic population transfer in a dye molecule is one of such evidence. Traditionally, time domain quantum control needs the calculations of quantum dynamic to find the optimal laser field. This method requires knowledge of the molecular potential energy surfaces, whereas feedback quantum control in a molecular system uses a computer program based on genetic algorithms to analyses the experimental output from detector in order to optimise population transfer from ground to first excited state and then control an acousto-optic pulse shaping device in order to excite fluorescence from the laser dye molecule IR125 in methanol solution (Bardeen et al., 1997). 
The advantage of feedback quantum control is no requirement of a priori knowledge of the molecule's potential energy surfaces. Another advantage of such an approach is that the optimal solution given constraint can be automatically found by a control computer with optimization algorithms. This feedback control approach might be extended to several physical and chemical processes. Some research, however, shows that the achievement of the optimization without dealing with the electronic population transfer within the parent molecule is possible. Assion and his coworkers, for example, introduced the automated optimization of coherent control of independent chemical reaction channel using a computer controlled femtosecond laser pulse shaper including an evolutionary algorithm as well as closed loop feedback system from the femtosecond laser-driven photodissociation reaction output (Assion et al., 1998). Femtosecond laser pulses in their research are modified in a computer-controlled pulse shaper based on the design of Weineral. (Weiner et al., 1992). An evolutionary algorithm was also implemented to optimise the spectral phase of the femtosecond laser pulses. In addition, a reflectron time-of-flight (TOF) mass spectrometer was used to record ionic fragments from molecular photodissociation and then feed feedback signal directly into a", "label": 0 }, { "main_document": "largely preoccupied with the conservation and presentation of the finds to the local people, as that was their initial priority. There have been, indeed, many technological developments since the ships have been raised. New techniques of archaeological investigation were invented, such as geophysics or chemical analysis. Most of the data from the Skuldelev ships, however, come from the time of excavations in 1960s. Therefore, these new techniques have not been applied to them to the full extent yet. 
Although new dating methods have been already used and the species of wood analyzed, not much else has been done in terms of other potential source of information, such as the more detailed analysis of organic remains or the chemical analysis of traces of food or plants. Although there have not been many other stray finds discovered with the ships, the ones that have been found have not received enough attention. They have been studied in their own context, but have neither been put into a wider context of the region nor of the relations with other regions. Many inferences could have been made about the social aspect of the finds, had they been analyzed from a slightly more anthropological point of view. Roskilde project is a unique example of marine archaeology research, both in terms of its significance for Danish people and of the understanding of the development of Viking-age shipping. The wrecks themselves and the barrier as a whole can tell us a lot about the technology and society of contemporary people. The scale of the project, its importance and the amount of funding it received, all contributed to the establishment of such a successful centre for marine archaeology, which is a leading institution in Europe. Also because of the innovative techniques of excavation, the project became known to the wider public already at an early stage, which is evident in the number of people that visited the site when the works were still in progress. 28,000 people came to Roskilde to marvel the cofferdam that has been erected around the site and the emerging Viking ships, such an important part of their cultural heritage. The project has numerous strong points, bearing in mind its extent, duration and scale it has almost reached perfection. Since the very beginning the methodology applied to the analysis of the site has been innovative and effective. 
During the first phase of excavations in the 1950s the techniques of scuba diving excavations have been put into practice and during the second phase in 1960s archaeologists showed even more technological creativeness and advancement by constructing a cofferdam to drain the site. In the course of research in the post-excavation phase, the interpretation of finds has been strongly influenced by the development of processual archaeology, which resulted in adoption of a more contextual approach. This allowed putting the site into the historical and social context of the Viking-age Europe. Ship wrecks can be a useful source of information about the different types of contacts that occurred between European societies. This, however, has not been much explored yet, but Roskilde", "label": 0 }, { "main_document": "Infant milk formula is a common and useful product, which is widely used in the world as a substitute for breast milk if a mother chooses not to or cannot breast-feed her infant. The first infant formula was developed by Henri Nestl With regard to marketing of infant formula in Third World countries, Nestle and other IFM members ignore the actual conditions of the countries. They followed some of the same marketing techniques that they had followed with success elsewhere. One way of marketing was the distribution of free samples in hospitals to new mothers. In itself, the practice was neither illegal nor immoral. However, many of the new mothers who received samples were unable to correctly use the product when they returned home. One reason was the fact that they were poor, so that they were unable to buy sufficient quantities of the formula. Another reason was that they often used local, unsterilized water to mix the formula. But infant milk formula comes as a powder that is to be mixed in a specific proportion with sterilized water. 
So there must be pure water with which to prepare it, refrigeration to safely store unused prepared formula and customers must be able to read instructions and have the income to purchase adequate quantities of the product. Thus Nestle did not analyze either the market condition of the Third World or consumers there carefully. They marketed their products without considering the actual need of the consumers there. They should have applied the suitable marketing strategies to the local market. Referring to the what criteria companies should be evaluated, they might be judged by whether they abide by the rules set up by certain organisations------The International Code of WHO on Marketing of Breast-Milk Substitutes, regulations of Advertising Standards Authority, and rules of UNICEF and IFM. For example, The WHO has an International Code on Marketing of Breast-Milk Substitutes to protect mothers and babies from aggressive marketing and help them get accurate information. It stops manufacturers giving free supplies of baby milk to hospitals, promoting their products to the public or health workers, using baby pictures on their baby milk labels, giving gifts to mothers or health workers, giving free samples to mothers as well as promoting baby foods or drinks for babies under 6 months. Additionally, labels must be in a language understood by the mother, and must include a prominent health warning. (Christ's College Green Society, Oct 2002). 'If there were no standards, we would soon notice' (International Organization for Standardization, 26 January 2005). In addition to business principles, ethical standards definitely make a great deal contribution to most aspects of our lives. There have therefore been controversial issues regarding what parties should set the guidelines for ethical standards - whether the non-governmental organizations, the national governments or the markets/consumers. 
There has always been a general notion of the 'invisible hand', whereby the market will sort unethical behaviour out, thus promoting consumer protection. Any company will be shunned whenever unethical behaviour is unveiled. Building on the ground that consumers are at
Many historians are in agreement that there was much less racism in Latin America, and one of the main reasons for this is that the Spanish thought of their servants and slaves as people. In colonial America servants, and especially slaves, were treated as objects. They were the property of their master, like a house or a tool, and they could never do anything about it. A plantation owner in colonial America could get away with the murder of a slave because it was judged that 'a man would never deliberately damage his own estate' (Elizabeth Cobbs Hoffman and Jon Gjerde, 'Major Problems in American History volume 1', 2002). So was 'slavery basically an economic phenomenon'? I consider that the ideology and racist beliefs of whites towards those of black skin are evident in the treatment and the cause of the enslavement of Africans. Even before the arrival of Africans in colonial America, the whites had strong views of the black race. They saw that the 'color black stood in contrast to a range of cultural values associated with whiteness; with purity, goodness, virtue and beauty.' However I don't believe that the economic reasons for slavery can be ignored in favor of the idea that slavery stemmed from racism. It is clear to see how money and business were important elements in the expansion of slavery in the colonies, and it is also obvious how slavery could be justified by its economic importance. So instead of trying to make racism and slavery mutually exclusive, should we be thinking of these terms as being equally cause and effect? (Kenneth Morgan, 'Slavery, Atlantic Trade and the British Economy, 1660-1800'; Carl N. Degler, 'Slavery and the Genesis of American Race Prejudice', 1959; James Walvin, 'Questioning Slavery', 1996.) This brings us to the work completed by the historian Winthrop D. Jordan.
Jordan came to the conclusion that racism and slavery probably evolved at the same time in a circular process, with most Negroes gradually assigned to perpetual servitude and simultaneously debased and identified with a lowly status. Jordan stresses a significant point, as many of the first incidents of debasement in the colonies occurred around the same time as enslavement began taking place. Jordan states that slavery and prejudice were 'constantly reacting upon each other, dynamically joining hands to hustle the Negro
The salty taste overpowered any desirable fruity or apple taste in the sample. For this reason, NaCl may be used in conjunction with other ingredients such as ascorbic acid or citric acid.
In western societies the dying are often marginalized, and medical students need to be reminded that they must care for people throughout the life course. Platt R (1965). Thoughts on teaching medicine. General Medical Council (1993). General Medical Council, 44 Hallam Street, London W1N 6AE. The Oxford International Centre for Palliative Care runs a number of informative and exploratory courses aimed at health care professionals. There are many aspects of the conclusion of a person's life which take on a spiritual dimension. A visit to this centre would be beneficial for many medical students by allowing them to see the process of dying in a holistic way and not just as a physiological phenomenon with associated psychological problems. Annual report (1999) OICPC. Oxford International Centre for Palliative Care. Oxford: Oxford University Press. Picardie, R. (1998). London: Penguin Books. A ground-breaking course in thanatology is being taught at Harvard Medical School. The end-of-life course for first-year medical students pairs them with terminally ill patients for a year. The course, developed by palliative care experts Dr. Susan D. Block and Dr. J. Andrew Billings, represents an opportunity for students to integrate their experiences of terminal care into their medical training. This longitudinal approach to the study of end-of-life experiences is what many doctors suggest should be taking place in British universities. This would help many medical students dispel the dread they feel about death, and help them to eliminate the mystery and taboo that surround the conclusion to life. A palliative care
Lord Buckmaster denied that the citation was appropriate, since it applied only to 'dangerous articles'. Items like beer fell well outside that category. [1916] 217 N.Y. 382, 111 N.E. 1050. Following this case's success, the court gradually extended the contractual guarantee to protect not only the eventual buyer, but also persons (e.g. family, friends and others) who held a close relationship with them (the neighbouring effect). The USA had allowed the claimant to bring a direct claim against the manufacturer, provided that the skipping of procedures could allocate resources like time and money properly (Gerven et al 1998). It was not until the establishment of the UK Consumer Protection Act 1987 that a similar outcome to the USA's was obtained. Referring to this victorious case, the general 'proximity' involved was granted entirely on relationship and should not be confused with 'proximate cause', which is associated with the determination of the remoteness of the consequences of the defendant's actions in the context of causation. To begin with, the consequential chain joined up all those cases progressively. For instance, Ltd v. Heller & Partners Ltd., Smith v. Surrey County Council. Implied in the later case, Stennett v. Hancock. Yet, by taking the neighbour test into account, the garage was found to owe a duty of care. The same applies in Grant v. Australian Knitting Mills Ltd. Tort seemed to have acted as an alternative way-out or route to seek indemnification. Last but not least, cases such as Malfroot v. Noxal Ltd. and Cotterill Stevenson respectively revealed the existence of a duty of care (Percy, 1977). [1963] A.C. 465, HL; [1989] 2 WLR 790, HL; [1994] 4 All ER; [1939] 2 All ER 578; [1936] A.C. 85; [1935] 51 T.L.R. 551: a sidecar parted from the motor-cycle while climbing a gradient, injuring the passenger. [1934] 51 T.L.R. 21: a tombstone erected in a churchyard caused the monument to fall upon the plaintiff.
With respect to the principles of UK law, the court usually exerted creative power in establishing case law (Adams, 2003). Even though the common law had not been decreed by statute, it is influential on cases which appear afterwards. To a certain extent, the lengthy opinions delivered by judges served as guidelines. Conclusively, before the hearing of How was the existence of the duty of care to be proved? What were the requirements and limits? The prominent decision definitely answered all these questions.
The value of biodiversity in calcareous grassland is recognized not only in the U.K. context, as listed in one of the U.K.'s Biodiversity Action Plans, but internationally, as included in the European Community Habitats Directive (EN, 2005, the Chilterns AONB, 2006). Practical strategies for the restoration and recreation of the remaining calcareous grassland are now under investigation. This report will discuss the following topics with reference to colonisation, plant communities and succession. The Chilterns lie on a ridge of Cretaceous chalk overlain by Tertiary clay, sand, or sand and gravel at the foot of the dip slope. Primarily due to the difficulty of access, the north-facing steep escarpment has been utilised for sheep grazing, while other more reachable areas were converted to beech woodland for industry and to arable crop production (EN, 2005). Calcareous grassland occurs on those steep slopes; the climax vegetation is woodland, but the grassland community has established itself through woodland clearance or burning and centuries of agricultural practice (Poschlod). Many plant species, such as gentians, orchids, scrub, mosses and liverworts (table 1), have evolved with this environmental condition, supporting many invertebrate and bird species (EN, 2005). There are over 40 vascular plant species estimated to be found in 1 km. Butterflies and other insects associated with the habitat are also of value. There are several important scrub species in the Chilterns which provide breeding sites for birds and shelter for invertebrates. Most of the bryophytes are known to be nationally important, and their survival is heavily dependent on grazing activities (EN, 2005, the Chilterns AONB, 2006).
Abandonment of traditional agricultural management, successive conversion to arable land, and the fragmentation and isolation of small patches are the major threats to the calcareous grassland community in the Chilterns; therefore, restoration efforts need to place an emphasis on the re-establishment of appropriate management and the creation of linkages between small habitats (Kahmen, Some suggested conservation methods are (Adriaens, 2005, handout): Major problems in the existing degraded calcareous grassland are increased nitrogen content and
hESCs are pluripotent cells, meaning they have the potential to develop into any cell type through differentiation, a process in which less specialised cells develop into more specialised cells by expressing or repressing different subsets of genes within the individual cells. Embryo-derived stem cells are not alone in this potential; adult somatic stem cells, such as those found in bone marrow, may also be used, but the properties of this type of cell become increasingly restricted with tissue development, hence hESCs provide the best prospects for curative therapies. De Sousa, P.A., G. Galea and M. Turner, The Road to Providing Human Embryonic Stem Cells for Therapeutic Use: The UK Experience, (2006) Reproduction 132: 681-189, pp.189. However, the use of hESCs has reopened the debate on embryonic technologies, since it necessarily involves the use and destruction of embryos. This is further complicated by the successful application of cell nuclear replacement (CNR). The cytoplasm within the egg reprogrammes the adult nucleus, allowing it to behave as if it were in a one-cell embryo. Therapeutic cloning uses CNR to procure embryos that at the blastocyst stage can yield stem cells from the inner cell mass. This is at about 4-5 days into development, where a cluster of cells called the inner cell mass forms the embryo, which is suspended in fluid and surrounded by an outer cell layer which will form the placenta. These stem cells derived from the inner cell mass can then be induced to produce any cells from within the body, for example the dopamine-producing neurons that are lacking in the brains of Parkinson's disease sufferers. As the process does not require fertilisation, CNR has provided an effective source of embryos that does not rely on the limited surplus quantities donated following successful IVF treatment.
Dolly", "label": 1 }, { "main_document": "of it like this before but by empathising with Mary's situation I better appreciate the importance of the patient's perspective. Fulford There are many uncertainties in relation to cancer, despite improvement in treatments and cure rates there is always the fear that it will reoccur, there are no guaranties (Alexander This experience has also changed the way I see people's reasons for seeking health care and I have benefited from seeing the health care system from the point of view of what people expect rather than just what it can provide. A survey of patient expectations of A&E by Walsh (2000) found that reasons for attending included wanting attention at the time of illness, the constrictions of GP appointments and feeling that hospitals were more likely to make an accurate diagnosis because of technical backup. Walsh found patients expected A&E attendance to provide a diagnosis, timely treatment and the provision of information. Research by St Martin and Cole (2002) stated that the numbers of people accessing A&E for non-emergency or primary care matters are steadily increasing, factors that influence this include the 24 hour availability of A&E and the belief that access is everyone's right. The study found that the main reasons for non-emergency A&E attendance were lack of access to primary care, dissatisfaction with primary care, convenient out-of-hours care and difficulty obtaining appointments elsewhere. Although this research was carried out in America I feel it is relevant because the results are in line with my own observations and also because Mary's reasons for attending A&E were her dissatisfaction with her GP's answers and the wait for diagnostic or follow up appointments. Striving to deliver patient-centred care in practice represents the development of health care to focus on the needs of the individual receiving care over the needs of the professionals or the organisation (Ford & McCormack, 2000). 
The concept of health care related expectancy-value assumes that a particular behaviour or action will lead to an anticipated outcome, chosen on the basis of what is most likely to produce what they consider a positive outcome (Mason & Whitehead, 2003). Mary equated attending her GP with long waits and few answers, while she saw attending A&E as providing quick answers and treatment. In 1992 the Department of Health produced It didn't however make clear what constituted an emergency or when it was appropriate to make use of these services (Walsh, 2000). In 2001 This sets out what people can expect from the NHS and what people can do to assist the service; however, it doesn't actually state what constitutes an emergency situation, nor does it explain when it is appropriate to see a GP or go to A&E. It places the perception of urgency with the patient (DoH, 2001a). Patient empowerment and patient-centred care encourage patients to take more responsibility for their care and to learn from their own experiences. This has produced higher patient expectations and has changed patient attitudes; many are no longer prepared to be passive receivers of care (Fulford Ford and McCormack (2000) highlight that the rights, responsibilities, choice
For the next 60 years the Maillard reaction was studied extensively, and shown to be responsible for the generation of aroma and flavour compounds, as well as the decreased nutritional availability of foods, which occurs after prolonged storage. The initial step of the Maillard reaction is a reaction between amino compounds such as amino acids and carbonyl compounds such as reducing sugars, for example glucose. Subsequently there are many complex steps, including the 'Amadori rearrangement' and 'Strecker degradation', before ultimately melanoidins (brown nitrogenous polymers and copolymers) are formed. More recently, the importance of Maillard reactions The term Advanced Glycation End-products (AGEs) was introduced to describe Maillard products that form Studies of the non-enzymatic glycation of proteins Haemoglobin A Structural studies of haemoglobin A As in the case of haemoglobin, the amount of Amadori product on these proteins was increased in diabetics. This discovery led to the search for further Maillard rearrangement products on long-lived proteins. The crystallin proteins of the ocular lens were the first tissue in which AGEs were described. Yellow-brown fluorescent pigments were noted to accumulate in the lens with age, and at an accelerated rate in diabetics. These pigments were also found to be capable of cross-linking proteins. The progressive accumulation of AGEs on lens crystallins Connective tissue collagen, another long-lived protein, has also been found to accumulate AGEs The AGE content of human collagen increases over the lifespan, and is higher in diabetics. From model studies it has been observed that the AGEs that are found on collagen are associated with increased cross-linking of collagen molecules, and can also cross-link various serum proteins, trapping plasma proteins, which could result in tissue damage.
Requena (1998) extends the study of Maillard reactions N-(carboxymethyl)lysine (CML) is an AGE product formed; CML can be produced by the oxidative cleavage of the Amadori compound, fructose-lysine. These studies demonstrate a complex interaction between the oxidation of sugars and lipids during the Maillard reaction. Elevated levels of circulating AGE-lipids were also reported in diabetic patients versus healthy individuals. Therefore, the browning of proteins via the Maillard reaction Carbohydrates also contribute to the non-oxidative browning of protein by rearrangement and elimination pathways. A common feature of both the oxidative and non-oxidative reactions is the formation of reactive carbonyl compounds, suggesting that carbonyl stress, as well as oxidative stress, is involved in the chemical modification of proteins during the Maillard reaction. As a result, the AGE hypothesis of protein glycation has evolved to accommodate a role for carbonyl and oxidative stress. In light
States are too small for the new problems facing them, so they try to collaborate to promote their joint interests. The Kyoto Protocol of 1997 on environmental protection is a multilateral solution to the international 'tragedy of the commons', conducted within the UN framework. Here we can also observe the appearance of new topics on the agenda of international diplomacy. On the regional level, NAFTA, AFTA, the African Union and the EU are examples of increased activity in regional diplomacy. (Bayne 2003:97) The developments in the EU are also affecting the multilateral level. The Maastricht Treaty introduced the Common Foreign and Security Policy to the community, which is now represented by the Commission on a number of international negotiation platforms (e.g. the WTO), and this in turn affects the dynamics of international conferences. (Barston 2006:87) Greater reliance on regional institutions which are able to provide credible commitments in the negotiation process, for instance in the form of the ECJ, affects the manner of diplomatic discourse. While Berridge (1995) argues that the number of international institutions in international diplomacy has declined, several examples show that they play a greater role in the new diplomacy. A peculiar development is the emerging EU diplomacy. Here we can contrast the diplomacies of individual states with the diplomacy of the EU bloc as a whole. This development is particularly marked by the Maastricht Treaty, which established the CFSP in 1992. Since then the Commission has increased its importance as an international actor, representing the bloc as a whole in important organisations like the WTO. The Banana Wars in the WTO are a good example of a dispute in which the EU was represented by the Commission. Whereas international organisations played a limited role in Cold War diplomacy, the EU now fulfils the representational and negotiation functions of several states.
One of the shifts in post-Cold War diplomacy is the increased use of Thanks to the advances of globalization, heads of state are playing an increasingly important role in the making and implementation of foreign policy. Prime ministers and presidents are able to, and increasingly do, get directly or indirectly involved in the conduct of foreign policy. This could also be a reaction to the internationalization of domestic policies, where heads of state try to take charge of domestically increasingly salient foreign issues. Moreover, higher
"My retinas won't be square, since retinas are always curved" (Mackie 1976). The causal intermediaries will not do as SD because they just don't have the right sort of properties. (Mackie 1976:47) Thus it is incoherent to assert that SD are physical intermediaries. The second account of SD, which takes them to be "mind dependent in the sense that they exist in and by one's awareness of them" (Mackie 1976:42), also runs into problems. In this version we don't see objects, but only our ideas of them. R requires the SD to have the same kind of qualities as the objects they are supposed to resemble/represent. But it is hard to conceive that mind-dependent entities can have qualities of a physical kind. For instance, if I see a tomato, R would require that I have a round and red idea. This account of SD is thus incoherent, and as R fails to produce a plausible account of SD which would also suit the R theory itself, the whole project seems to be problematic. An additional objection to the theory: as Hanson argued, there are examples where two persons can be said to have the same/different sense data and different/same visual experiences. For instance, two persons can perceive a rabbit-duck figure and one judge it to be a rabbit while the other a duck. Thus "it is meaningful to speak of situations in which two observers having indistinguishable visual sense data nonetheless have disparate visual experience." (Hanson 1972:188) It seems that there is more to our perceptual experience than SD alone.
But this refutes the picture R tries to present, as we can show that "perceptual experiences can't be captured in terms of sense data." (Hanson 1972) The objection proves that there is a degree of relativity or subjectivity
Meaning was given to death through various commemorations and rituals that glorified the sacrifice and provided dignity to the dead, thus allowing society to come to terms with the tragedy and to understand the contemporary situation. Not all deaths were accounted for, as some did not coincide with the communist agenda, resulting in great disillusionment in society and leaving various communities still traumatised. However, due to the competing memories of the war, a \"double memory\" developed. The Soviet perspective removed Soviet crimes and Polish heroic achievements by emphasising the communists' role as the great saviour of Poland, whilst the nationalist account which was taught by The private and public memories conflicted with each other; however, the private sphere's sense of collective understanding developed episodically and lacked an immediate political impact. The nationalist memory formed part of the Polish identity of opposition, as \"people without memory were easier to make slaves, thus by preserving memory they remained human\" Taken from Piotr Wrobel's title: \"Double memory: Poles and Jews after the Holocaust\", Adrian Gregory, p. 123", "label": 1 }, { "main_document": "This quote, written in 1915, highlights the way in which societal attitudes regarding food and diet were undergoing a period of transition. Food was increasingly being viewed as a nutritional source, with associated health consequences. It was during this period that Seebohm Rowntree produced a systematic study detailing a minimum requirement for subsistence, or what is known as the 'poverty line'. Using the emergent ideas about nutritional requirement, his aim was to distinguish between those who were unable to purchase basic necessities for economic reasons and those whose income was 'sufficient to buy basic necessities, but who were unable to do so for other reasons.' 
(Veit-Wilson, His work highlighted that malnutrition and poor dietary habits were not necessarily an exclusive product of insufficient income, but that nutritional choices were also shaped by sociological and/or political factors. By analysing the dietary behaviours and choices made throughout history and the contextual variation which dictated the shaping of these behaviours, it is possible to identify such influencing factors and indicate the way in which they have contributed to the shaping of dietary patterns in poor households. Technological innovation and scientific advance, together with incremental political change and the significant effects of the Second World War, have provided the contexts of social structure in which dietary behaviours are dictated, nutritional attitudes shaped and food choices informed. Research into the dietary patterns of the 19 The dramatic difference between the dietary composition of the most affluent classes and the poorest classes was particularly astonishing, in that the poor existed barely at subsistence level. (Nelson, 1993: 102) The diet of the poorest class typically lacked adequate calorific, fat and protein intake, and showed severe deficiencies in essential vitamins and minerals, in particular Vitamin A, Vitamin C and calcium. Consumption of fruit was particularly uncommon, especially in urban areas, where the fruit eaten consisted of a 'few currants'. Meat was particularly scarce due to its expense, and what there was of it was predominantly eaten by the male of the household. (Nelson, 1993: 102-103) \"The women and children suffer from underfeeding to a much greater extent than the men. It is tacitly agreed that the man must have a certain minimum of food in order that he may be able to perform the muscular work demanded of him\"; therefore, this usually involved \"underfeeding of the women and children greater than is shown by the average figures.\" (Rowntree 1902, If he lost his job, none of the family would be able to eat. 
Bread became the main source of mealtime provision for most poor families, a rudimentary but nonetheless popular and widespread staple. It could be eaten cold, and therefore not only saved money on the expensive fuel costs of cooking but was quick and easy to prepare, an important quality for busy mothers. Furthermore, when bread went stale it could be toasted; it was readily available and relatively cheap, it could be served at any meal and, most importantly, the family liked it. (Reeves, M. P., 1979, p.103) This last point was, and still is, a particular concern", "label": 1 }, { "main_document": "'a firm that is permeated with transformational leadership from top to bottom conveys to its own personnel as well as to customers, suppliers, financial backers, and the community at large that it has its eyes on the future; is confident; has personnel who are pulling together for the common good; and places a premium on its intellectual resources and flexibility and on the development of its people'. Holding such a healthy public image, an organization will have a bright prospect and attract more high-quality human resources, which contribute to the business's prosperity in the long run. In respect of culture, consistent with social learning theory, leaders 'tend to model their own leadership style after that of their immediate supervisors' (Bass 1990: 26), while their followers are inclined to emulate their behaviors. Thus a comparatively stable style of leadership will be fostered and cascaded within an organization. If more higher-ups are transformational, more lower-level employees will be likely to act as transformational leaders as they rise in the organization, which will also be good for the organization's long-term development. 
To sum up, transformational leaders can 'identify the core values and unifying purposes of the organization and its members' (Bass and Steidlmeier, 1999: 271), inspire employees to enthusiastically achieve organizational goals, liberate human potential, and foster effective, satisfied followers. Charisma, individualized consideration, and intellectual stimulation are critical to organizational effectiveness, especially when the organization is faced with rapid changes, problems, and uncertainties (Bass, 1990). Nevertheless, some people argue that transformational leadership is unnecessary, as transactional leadership can be effective enough to deal with problems. This may be true for some firms functioning only in a market with stable technology, workforce, and environment. Under such circumstances, the need for leadership may be cut down or eliminated, as a firm may 'move along quite well with managers who simply promise and deliver rewards to employees for carrying out assignments' (Bass, 1990: 30). More seriously, transformational leadership is even deemed 'a dangerous curse'. In the following part, the potential disadvantages of transformational leadership will be presented with regard to charisma, individualized consideration, and intellectual stimulation respectively. In the late 1990s, the public's and the media's fascination with charismatic leadership reached its peak. Almost every company wanted a charismatic CEO like GE's Jack Welch or Disney's Michael Eisner. However, on a negative note, there is a growing body of evidence indicating that the effectiveness of charisma may be situational. Moreover, recent frustrations in some companies which applied charismatic leadership suggest that 'there is a dark side to charisma that can potentially undermine organizations' (Robbins, 2005:167). In the first place, some charismatic leaders are 'pseudo-transformational leaders' who may have destructive effects on the organization. 
Pseudo-transformational leaders are leaders who 'create the impression that they are doing the right things' as transformational leaders do, but 'secretly fail to do so when doing the right things conflict with their own narcissistic interests' (Bass and Steidlmeier, 1999: 189). Pseudo-transformational leaders don't necessarily act in the best interests of their organizations, but 'indulge in fantasies of power and success' (Bass and", "label": 0 }, { "main_document": "Modernist literature had been developing for some time when the Great War began in 1914. It took a fresh outlook on the world and embraced change; its characteristics include a rejection of religion, an appreciation of science and psychoanalysis, a recognition of the inconsistencies and incoherencies in life, and new writing styles and structures. It has been debated whether the war actually initiated the beginning of true modernism, but the war certainly contributed to its development. It offered a new angle for modernism, providing an insight into the depths of depravity to which humanity could sink - killing on a massive and brutal scale. It seems important that the Great War is presented in an unforgettable way, and yet such depravity is seemingly unrepresentable. Language is formed through relating something to experiences we already know, but the Great War seemed to be incomparable. For some, it took a long time to be able to write directly about it, but writers felt a duty to document the event, to eliminate the euphemisms and clichés. Poets and novelists attempted to achieve this seemingly impossible goal in various ways, but this essay attempts to show the basic underlying similarities used in both. Siegfried Sassoon famously denounced the continuation of the war. His poetry attacked the people who were responsible for its prolongation and those who didn't want to hear or accept its true ghastliness. 
He used satire to present an experience he may have found too painful to write about otherwise. The final line A similar up-beat rhyming scheme (a,b,a,b) is used in both The latter poem uses shock tactics rather than satire. ' Once again Sassoon uses the final lines to attack his readers, ' The unrepresentable is made more comprehensible by Sassoon and others in different forms of literature, such as R. C. Sherriff, the playwright in Sassoon slips these into his poems to highlight how out of touch those in power are with the reality of the common soldier: ' Clichés The first line ' The flippant remark ' Sassoon ridicules them, using the words ' He ends the poem on a similar note to Bernard Bergonzi in But Sassoon's strategy for representing the unrepresentable is a style particular to him, and he is capable of painting a vivid picture of war and the feelings of soldiers in passages such as this from The diverse recording of the Great War means that it is important to look at different writers in order to get an overall picture. Sassoon befriended the poet Wilfred Owen, who chose to present the war from another angle. Claiming that ' Owen represented the unrepresentable by absorbing himself in the emotions of war, and from this his poetic style developed. In such poems as One of his most effective poems Like Sassoon, he uses the poem to show the difficulty that people at home had in accepting what happened in the trenches, the cheerful rhyme scheme perhaps implying the falsity of their welcome. Neither the rhyme scheme nor the people accept the reality that the poem really portrays, and it is those
People tend to perceive the overlapping object as closer in distance. Relative height refers to the perception that an object closer to the horizon than others in the picture is further away. Linear perspective occurs when two parallel lines appear to converge; people tend to perceive the lines as vanishing in the distance. Shadows and shading provide another important cue about object shapes and about distances between objects and the light source. These five cues come together to provide information about images. Artists have used these cues for many years in their paintings to convey perceptions of distance and size. Depth cues are also often used in the trick of illusion. (Atkinson, Atkinson, Smith, Bem and Nolen-Hoeksema, 2000) Another theory about perception is Gibson's theory of direct perception. (Sternberg, 1999) The theory states that all one needs for perception to occur is a visual system and an environment in which to operate. All the cues necessary for perception to take place are already present in the environment, and it is not necessary to invoke any higher cognitive processes. For instance, depth and distance information can be determined from the regularity of surfaces. In this case, the person's perception is immediate and spontaneous. For example, people can instantly recognize unfamiliar but biologically relevant stimuli. Also, information for perception can only be obtained through movement, since the perceptual information derived from motion leads to more accurate action in the future. This viewpoint excludes intelligence from the process of perception. It is stated that any information about the object being viewed can be inferred from the stimulus information. Intelligence comes in during cognitive processing, but only after perceptual processing has been completed. Hence, this model views perception and intelligence as two separate entities and processes that may possibly be sequential but not simultaneous. 
An additional theory about perception is known as the template theory, which states that humans have pre-stored sets of templates, highly detailed models for patterns that allow for easy recognition. When observing something, we mentally compare it to these templates and try to match our observations with them. We then select the template that most completely corresponds to our observation. It is in this way that we perceive our surroundings. (Sternberg, 1999) In contrast to the template theory, there are feature theories. These state that we do not match entire patterns to a template but instead match features of a pattern to features stored in memory. An example of such a theory is \"pandemonium\", proposed by Oliver Selfridge in 1959. The theory states that neural clusters \"shriek\" to indicate the presence of particular features of the perceived stimulus. Each of these neurons or neural clusters was referred to as a \"demon\". \"Image demons\" would pass on an image to \"feature demons\". \"Feature", "label": 1 }, { "main_document": "This reconstruction produced a vocal tract similar to that of modern Recent work has examined the correlation between the size of the hypoglossal canal and the capability of speech. This canal contains the nerve supplying the tongue; it was proposed that the varying canal size indicated the size of the nerve and reflected the speech capability of modern humans, in contrast to the linguistically impaired non-human primates (Kay et al, 1998). The study concluded that the canals of Neanderthal fossils were comparable to those of modern humans and that Neanderthals were therefore physically capable of speech, as were other members of the genus A re-evaluation of the data and a later publication by others was prompted by the discovery of numerous non-human primates with a hypoglossal canal comparable to that of modern humans. 
The conclusion was that there was no apparent correlation between the size of the hypoglossal canal and the nerve it contained, and that canal size therefore does not reflect linguistic capability (Degusta et al, 1999). The presence of a vocal tract capable of speech is not evidence of language in itself; the brain is responsible for the creation and comprehension of language (Stringer & Gamble, 1993). Some palaeoanthropologists and archaeologists, including Mellars (1996), suggest a neurological evolution may have resulted in the introduction of language, probably prior to the beginning of genus Neurological studies have been attempted over the years to explain the introduction of language; here, some early work and more recent data will be summarised. Of the fossils currently known, preservation has not allowed the examination of a Neanderthal brain, although the study of endocasts of the inside of the cranium, revealing the peaks and troughs on the brain surface, has allowed the study of various early hominids. Experts look for asymmetry in the brain hemispheres and for structures linked to language competence, including Broca's area and Wernicke's area (Johanson & Edgar, 1996), searching for reasons behind linguistic communication problems. Working during the nineteenth century, the French anatomist Paul Broca conducted an autopsy on a mute, revealing a lesion or defect on a particular area of the brain. Broca deduced that this area was responsible for the muscle control required during speech (Trinkaus & Shipman, 1994); however, recent data, including brain scans of humans, suggest several areas contribute to the control of speech and that Broca's area may be responsible for motor function including limb control (Johanson & Edgar, 1996). 
Examination of non-human primate brains has found the presence of Broca's area, although under-developed compared to humans, resulting in the interpretation of the pre-evolutionary Neanderthal brain as capable of only limited speech, or possibly none at all (Trinkaus & Shipman, 1994); however, Tattersall (1999) believes the examination of the contours of the brain will not disclose linguistic ability. More recent studies of the effects of Alzheimer's disease have reaffirmed the possibility of speech impairment if there is an under-development of the brain or if damage is present. An under-development of the left hemisphere of the brain may restrict the capacity for language, as evidenced in the study of aphasia conducted by Kempler in 1993. Lesions on this area were found to", "label": 1 }, { "main_document": "open-source can be proposed as a bridge for the technological, educational and cultural gaps between developing and developed countries. It allows collaboration and cooperation with a wide spectrum of experts in high-tech fields all over the world. Open source might help focus the spotlight on high-tech talents and qualifications in developing countries that are hidden due to market constraints. If this can be fully realized, knowledge of software and hardware design would become more diffused through society and the whole world, destroying absolute barriers between 'creator' and 'consumer'. Designs that people wanted could be produced, rather than designs planned for people to buy. In technical use, a standard is a concrete example of an item or a specification against which all others may be measured. Standards may be produced by organizations, some for internal usage only, others for use by groups of people, groups of companies, or a subsection of an industry. Standards can be followed for convenience, to eliminate mutual incompatibility, or because of (more or less) legally binding contracts and documents. 
Government agencies often have to follow standards issued by official standardization organizations. Following such standards can also be a prerequisite for doing business in certain markets, with certain companies, or within certain consortia. Standardization, in the context of technologies and industries, is the process of establishing a technical standard among competing entities in a market, where this will bring benefits without hurting competition. Nowadays, effectively handling IPR (Intellectual Property Rights) in the technical standardization process has become a major challenge and difficulty which has caused heated debate around the world, especially in the information technology field, for example digital audio-video standards, wireless LAN standards and DVDs. As an overseas student from China, I know that there have recently been many cases related to standards and IPR, such as the famous DVD, Cisco and Huawei affairs. Because of their huge influence on the economy, we have had to pay more attention to the issues of IPR and standardization, as has the Chinese government. The key issue for IPR in standardization is patents. Larger companies hold 'patent pools', series of patents combined together, which they use as bargaining counters with one another, and which block new entrants from the market. The contradiction between the private nature of patents and the public nature of standardization is difficult to reconcile, and their combination tends to result in monopoly. Monopoly is not what most people expect. The basic motivation for technical standard setting is to boost optimal public interest, and a reasonable patent system should balance the rights and interests of owners and users. Open source may play a wise role in this situation. For instance, market competition is mainly based on patents and intellectual property that maintain all rights for the originator firm. 
So it goes without saying that companies may oppose aspects of open source that generate alternatives to commercially protected products. However, there is always an optimal position between contradictions. The suggested solution is that companies might take advantage of open source as a way of bridging the gap for", "label": 0 }, { "main_document": "pose a very serious threat to the ergonomic keyboard market. PCD Maltron has a range of products, but because the owner has never really marketed the products there is little IPR. However, this is largely due to the fact that the main Maltron keyboard is not a patented design. Without a patent there is no possibility of licensing to firms with existing complementary assets. It is especially important to get a patent for an ergonomic keyboard as it is fairly easy to reverse engineer due to the relatively low amount of technology involved. There is some IPR available to the company which was effectively free: the awards the company's products have won, including being ranked a \"millennium product\" by the Design Council in this country. PCD Maltron needs considerable investment capital to reach that global market and supply it. This could be done by using a lower-cost assembler company and by enhancing distribution, both of which will cost more money. The options available to PCD Maltron for sources of finance are Bank Loans, Venture Capital and Business Angels. Each of these sources of finance has advantages and disadvantages, and I shall now try to explain them in relation to PCD Maltron's position. Borrowing from banks has an advantage which would seem important to the owner in that there is no effect on control/ownership of the company. 
However, since I estimate the amount that would have to be borrowed to be greater than In PCD Maltron's case this is bad news; as the company is not very well established, the security it can offer is quite low, and so the interest charged on the loan would probably be too high to justify taking the loan out. Also, the loan can be called in at any time, which would be bad if something went wrong and the company was going through a bad patch at that time. From PCD Maltron's point of view, getting finance from Venture Capitalists is probably the most likely source of finance, but it also raises moderate concern over the issue of control/ownership of the company, since dilution is more or less inevitable. There is an attractive advantage to using Venture Capital for finance: although the problem of control/ownership cannot be helped, the extra management team brought in will mean the company will be more professionally managed. This professionalism would help the company to become more focused and improve its business strategy, as well as gaining access to business experience and networks. Other advantages include the fact that the funds last for ten years and the credibility value of working with a Venture Capitalist company. The problem of dilution arises because the Venture Capitalist company will try to get the majority of the stock/equity and may also try to get the majority of the company ownership if things go well. Business Angels tend to invest a smaller amount; typically only 24% of investments are greater than The best thing about going down the Business Angel route to finance is the \"injection\" of capital the company would receive. There are,", "label": 1 }, { "main_document": "and epilepsy. Incidences of human molecular diseases are rapidly increasing, especially in Western civilisations. 
Medical, hygiene and nutritional improvements have generally lowered the frequency of deaths from infectious disease, yet mortality rates from 21 WHO estimated that molecular diseases cause 40% of all deaths in LEDCs and 75% of all deaths in MEDCs (cited from With 12% of adult hospital admissions (cited from Mueller Mutations involving single genes can lead to autosomal conditions, chromosomal abnormalities can prevent the expression of certain genes, somatic cell mutations may lead to cancer, and mitochondrial disorders can cause serious problems with metabolism. Developments such as the Human Genome Project will enable scientists to increase our understanding of the mechanisms involved in molecular disease. This will allow gene therapy to become a viable treatment for these conditions. Gene therapy involves the introduction of a gene, through a vector, into the recipient to replace the defective gene. Germ-line therapy aims to introduce transgenic cells into the somatic and germ cell lines, curing both the individual and any offspring of the trait. Somatic gene therapy focuses only on the somatic cells. Both techniques use viruses to deliver the therapy. Unfortunately this technique is far from viable, as several recipients have developed cancer or died from such treatment. Until more research is conducted, incidences of molecular disease are likely to continue to increase.", "label": 1 }, { "main_document": "often find it hard to refuse to help when asked, even if it 'puts me out', as I feel I lack the confidence to say no. There are many recommendations regarding how to say no. In future practice I will try to consider why I need to say no, say it clearly, giving reasons for my decision, and explain that I would help if I could, as this shows that I have understood the request, considered my own needs and shown empathy (Hodston & Simpson, 2002; Thompson, 2002). 
I will also remember that it is not always in the best interests of the person asking to agree to a request (Glen & Parker, 2003); I had always assumed it was. If I find it hard to 'say no' to a request that would 'put me out', I will try the advice of Hodston and Simpson (2002), who recommend that to avoid feeling 'railroaded' you should request time to think by asking 'can I get back to you?' Assertive communication is evaluated in competency 5 (critical to placements 7 and 8). I will reflect on and evaluate my progress following every situation where I should have or did 'say no'. I will also request feedback from my mentor regarding how I handled the situation, and where possible I will ask for feedback from the person making the request about how they felt I handled it, to aid my self-awareness. I feel that by undertaking both stages I have improved my self-awareness and been able to focus on personal characteristics requiring development in a positive way. Reflection plays a central role in my development plan, as reflection increases consciousness of actions, which in turn offers the opportunity to develop (Palmer In addition to the evaluations listed above, I will review and evaluate my reflective journal at the beginning and end of placement 8 to assess my development in all areas. By including others in the reflective process I hope to incorporate perspectives other than my own which Palmer By reflecting on and critically analysing key personal characteristics and proposing an action plan for their development, I have taken a problem-based approach to my development to enable me to build self-confidence and improve my professional practice (Biley & Smith, 1998). I feel the issues identified contribute to each other, as reducing self-consciousness will help me assert myself and build self-esteem, in turn improving confidence. 
I have valued the opportunity to identify areas for improvement and hope that my action plan will provide a framework to both develop and assess my future development.", "label": 1 }, { "main_document": "My initial understanding of Occupational Therapy (OT), which I described at the beginning of term one, has not changed, although through completing the first term I feel I am now more secure in my understanding of what Occupational Therapy involves. Each module I have done so far gives a greater insight into what OT is about. The knowledge needed is extensive, as the Anatomy and Physiology module demonstrates, and each module covers interesting and relevant topics needed for the practical and background information to become a professional. I experienced a period of illness the week before Christmas which really made me think about the work of an OT. I was ill for three days with severe diarrhoea and vomiting and passing out. I didn't eat for two days, and was very weak and bedridden. During this period, I longed to be able to do things again. I really wanted to be able to eat, and thoughts of lovely foods dominated my mind. I knew I couldn't eat yet as my stomach was fragile. This longing I felt, just to be able to eat, was all I could think about at the time. I wanted to get well as soon as possible so I could eat and have the strength to go out, do things, see people and not feel like I was wasting time. It was all that was important to me. The experience of being in that situation really made me appreciate the strong desire to carry out simple, essential daily activities when ill health is preventing you from doing so. It made me think about the work of an OT, and their role in helping and enabling people to do the things they really want to do, no matter how small or unimportant the same activities may seem to others. My illness was over in a few days, and I could do all the things I'd been longing to do whilst I'd been ill. 
However, for people with a long-term illness or disability, having someone to work with them and help them to do things that are important to them may literally be life-changing. The work of an OT seems so worthwhile and important to me. One thing that puzzles me about the course is that often I feel the focus is generally on Occupational Therapy in hospital settings, and OT in learning disability settings is rarely mentioned. It is this kind of work that I see myself doing in the future, as I currently have experience in it and learning disability is an area which interests me and that I would like to develop. I would also like to gain further experience in other areas so I can clarify which areas I enjoy working in the most, but I wonder why the focus often seems to be only on OT in hospital settings. I will resolve my query by talking to my personal tutor and other tutors, and seeing whether learning disability is covered further in term 2 of the course. I see from my core skills portfolio that we", "label": 1 }, { "main_document": "both principals and travel intermediaries (Cooper, 2005; Buhalis, 2001; O'Conner, 1999; Inkpen, 1998). Tourism suppliers (particularly airlines, car rental firms and hotel chains) took advantage of the new opportunities and developed their own websites, which allow users to access their reservation systems directly (Buhalis, 2003; Buhalis, 2001; Sheldon, 1997). This included single-supplier provisions, such as United Airlines (e.g. (Sheldon, 1997; Buhalis, 2001; Buhalis, 2003; Cooper, 2005). With the development of the internet, a number of web-based travel agencies also emerged (e.g. From the perspective of intermediaries, the rapid development of the Internet offers both opportunities and threats to their existence (Sheldon, 1997; Buhalis, 2003; Cooper, 2005). On the one hand, most intermediary actors realized the importance of using the internet as a new medium and quickly exploited the opportunities it offered, developing their online presence. 
Therefore they could quickly build complicated itineraries online, in accordance with the specific needs of customers, by accessing tourism suppliers' databases and finding up-to-date schedules, prices and availability data (Inkpen, 1998). In addition, Sheldon (1997) proposed that the Internet may provide travel agents with a business opportunity by enabling them to promote their products to a broader geographic market at low cost. On the other hand, consumer access to travel databases creates an inevitable threat to the existence of travel intermediaries, as consumers can plan their trips and purchase online (O'Conner, 1999). Moreover, intermediaries represent a considerable part of distribution costs, and it is understandable that the more players get involved in a distribution channel, the more commissions and fees need to be paid, consequently increasing the price of the final product (Buhalis, 2003). So direct and immediate contact between suppliers and customers can reduce the importance of travel agencies as middlemen in the customer-agent-supplier network and consequently result in disintermediation (Buhalis, 1998). However, Buhalis (2003) argued that travel intermediaries can retain their competitiveness by enhancing their ability to \"re-engineer\" their core product and develop value-added services or final products. If intermediaries fail to do so, they will gradually lose their market share as more consumers purchase tourism products directly from suppliers. So they need to identify their specific market segments and adopt different marketing strategies. From the perspective of customers, the Internet provides access to transparent and comparable information on destinations, holiday packages, travel and lodging services, as well as prices and availability (Buhalis, 2003). 
Apart from that, the development of the Internet created experienced and knowledgeable \"new\" tourists seeking exceptional value for time and money (Cooper, 2005). As the number of Internet consumers continues to grow, it is important to learn more about them and the type of services they value the most (Buhalis, 2002). The development of the Internet and electronic markets also raised many new issues for branding. Horner and Swarbrooke (1996) defined branding as \"a dynamic process of developing a name, term, sign, symbol, design or combination of these elements that is intended to identify the goods and services of a seller and differentiate them from those of competitors\" (Buhalis, 2003:167). On the one hand, the Internet provides a", "label": 0 }, { "main_document": "theory. A.M. Weiner, Femtosecond pulse shaping using spatial light modulators, Rev. Sci. Instrum., Vol. 71, No. 5, May 2000. A. Kisel, An Extension of Pulse Shaping Filter Theory, IEEE Transactions on Communications, Vol. 47, No. 5, May 1999. Evolutionary Algorithms are popular tools for search, optimization, machine learning and for solving design problems. EAs use simulated evolution to search for solutions to complex problems; the main families are evolution strategies (ES) and genetic algorithms (GAs). Genetic Algorithms were developed in the US under the leadership of John Holland and his students. The main way the two differ is that GAs put a great deal of emphasis on selection, recombination and mutation acting on a genotype that is decoded and evaluated for fitness, with recombination emphasized over mutation; ES, on the other hand, tend to use more direct representations, with mutation emphasized over recombination, although nowadays both freely borrow ideas from each other. Evolutionary algorithms are easily parallelized and are what are known as weak methods in the AI community, as they do not exploit domain-specific knowledge and are usually a sort of blind search method. T. Back, Evolutionary Algorithms in Theory and Practice, Oxford University Press, NY, 1996. T. Back, F. Hoffmeister, H.-P. Schwefel, A Survey of Evolution Strategies, in: L. Booker, R. Belew (Eds.), Proceedings of the Fourth International Conference on GAs, Morgan Kaufmann, Los Altos, CA, 1991, pp. 2-9. J.H. Holland, Adaptation in Natural and Artificial Systems, 2nd ed., MIT Press, Cambridge, 1992. H.-P. Schwefel, Evolution and Optimum Seeking, Wiley, NY, 1995. D. Whitley, An Overview of Evolutionary Algorithms: Practical Issues and Common Pitfalls, Information and Software Technology 43 (2001), pp. 817-831. The idea of Coherent Control was born out of mode-selective laser photochemistry. Despite the initial buzz in the scientific world, the dream of exciting specific chemical bonds dimmed for various reasons, including the fact that control was hampered by fast relaxation, complicated mode structure, imperfect knowledge of inter-nuclear potentials along arbitrary coordinates and the distorting influence of strong external light fields. Recent successful experiments have led to control of molecular ionization. Large condensed molecules are so complicated that it is nearly impossible to calculate optimal pulse shapes in advance, but recent experiments 15 have used experimental feedback to determine the optimal optical pulse shape needed to achieve a particular goal. In these cases, the absence of detailed knowledge of the system led to the selection of new pulse shapes inside a feedback loop using a Genetic Algorithm, in order to discover control pathways in complicated physical systems. Judson and Rabitz 15 were also clearly able to demonstrate that an adaptive learning procedure could teach a laser to selectively excite chosen states of a molecule. Constraints on the form or amplitude of the driving field could readily be included in the Learning Algorithm in accordance with laboratory capabilities (laying foundations for experiments such as that by Hadjiloucas et al. 
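The selection-recombination-mutation loop that the text ascribes to GAs, where a genotype is decoded and evaluated for fitness each generation, can be sketched in a few lines. This is a minimal illustrative sketch, not the pulse-shaping setup of the cited experiments: the bit-string genome, the toy fitness function and all parameter values are assumptions for demonstration only.

```python
import random

# Minimal genetic algorithm sketch: evolve a bit-string "genotype" toward
# an all-ones target. Selection, recombination (crossover) and mutation act
# on the genotype, which is scored by a fitness function each generation.
# All names and parameter values are illustrative assumptions.

GENOME_LENGTH = 20
POP_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 60

def fitness(genome):
    # Toy objective: count of ones (a stand-in for a measured signal yield).
    return sum(genome)

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point recombination of two parent genotypes.
    point = random.randrange(1, GENOME_LENGTH)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]
    return max(fitness(g) for g in population)
```

In a laboratory feedback loop of the kind Judson and Rabitz describe, the genome would instead encode pulse-shaper settings and the fitness function would be replaced by the measured experimental signal.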
Conducted some", "label": 0 }, { "main_document": "could well be related to these events. The careful burial of this treasure suggests that it was temporary in nature, meaning that the owner intended to return and recover them One of the only ways to provide evidence to support this theory is to take a comparative approach. To do this it is possible to take into account hoards from later better documented (not always!) periods such as the early medieval and middle to late medieval periods. The hoards would have to be as accurately dated as possible and then compared to surviving records (both documentary and archaeological) of political and social unrest from that particular period. It can then be argued (if a correlation is found between periods of warfare and unrest and an increase in hoarding/deposition in that region) that hoarding does indeed intensify during times of warfare and political and social upheaval Richard Bradley argues against this approach and says that it has Some hoards are representative of the systems concerning Bronze and Iron Age metal production, manufacture and manipulation. This type of deposition is commonly known as a Founder hoard (Minster in Kent and Ashbury in Oxfordshire are two such examples) and is generally associated with the production of metallurgy. For the most part they consist of broken, used or unfit metal objects, bronze/iron/copper/tin ingots, casting waste and in many circumstances complete or newly finished objects Hoards of this variety are another example of depositions that are temporary in nature as the person or persons responsible probably did intend to return and retrieve the objects. 
Hoards that include metal ingots are generally indicative of raw material usage, distribution/exchange networks, supply and material composition, which provide archaeologists with vital data; they also provide key insights into standardization. By contrast, hoards composed of valuable objects of a similar design and style, deposited near to or soon after the final stages of production and manufacture, are argued to be the stock of a metalworker, held securely in reserve whilst awaiting distribution and trade. This argument (if true) has important ramifications and provides vital data concerning the demand for certain objects during the Bronze and Iron Ages. It can be asserted that metalworkers were producing and manufacturing items in anticipation of demand, as opposed to waiting until the items were needed; it shows that in some cases and regions a surplus was required. The two previously discussed types of founder hoard are mainly dated by archaeologists to approximately the 13 This new type of deposit is composed of objects that are, to all intents and purposes, scrap metal. The deposits comprise objects that are old and worn, some showing heavy long-term usage, and many have been rendered useless. Some of these particular deposits are made up of hundreds, even thousands of metallic items, many of which have been broken up into smaller pieces. The largest deposit ever found in England is a founder's hoard known as the Isleham hoard; numbering some 6,500 pieces, it is also the most famous example. Hoards such as these, some archaeologists have argued;", "label": 1 }, { "main_document": "may be for the individual to ignore, subvert or 'bracket' them off; as Bury (1982) notes, some of his respondents were simply 'taken over' by the disease. Indeed, there may be a contradiction between separating the self from the disease, and yet feeling its effects pervading all aspects of life (Bury, 1982). 
Murphy (1987) has referred to this state of limbo as 'social suspension', whereby people with impairments are neither wholly in nor out of society, where old statuses have been lost, with nothing to replace them. This concern with 'coping' in many ways overlaps with Bury's notion of 'strategies'. Here, Bury examines what individuals actually do. This includes the 'mobilisation of resources', i.e. the material and social resources available to overcome the effects of chronic illness. Studies have revealed the great importance of social networks as a source of social and moral support through the experience of chronic illness. Indeed, the 'tolerance' of surrounding people may be tested during periods of worsening symptoms or increased dependency; respondents to Williams' (1993) study described it as finding out who your friends 'really are'. Maintaining jobs, together with 'keeping up appearances' socially, were mentioned by Bury's (1982) respondents as areas which proved more difficult with the onset of their symptoms. Some respondents actively withdrew from situations and environments in which their illness would prove obtrusive or awkward, whilst others experienced frustration and embarrassment at their inability to 'keep up'. Having resources to call upon, therefore, whether social or otherwise, upon the onset of chronic illness is a crucial aspect of an individual's adjustment to their biographical disruption. Whilst problems with social interaction, for example, may not result directly from the inability to perform certain tasks or activities, the attitudes and actions of others may have an important influence not only on the capacity of the individual to maintain a good standard of living, but also on the way in which the individual comes to perceive their illness and ultimately their sense of self. The final aspect Bury (1997) sets out is that of style. 
By 'style' Bury refers to the adoption of a form of self-presentation in illness or disability, and the 'performance' that is required to maintain active participation in mainstream life. In other words, in order to resolve what Radley (1994: 152, cited in Bury, 1997) has referred to as the 'competing demands of bodily symptoms, and those of society', a form of 're-fashioning' of the self has to take place so that social relationships, and also the relationship to self, are kept intact. Bury argues that the degree to which this re-fashioning of self, and the adoption of a lifestyle which allows the individual to regard themselves as 'capable', can occur is closely tied up with levels of individual confidence, the anticipated responses of others, and the availability of resources to tackle them. As examined, Bury's notion of 'biographical disruption' provides us with a framework with which to analyse the meaning and experience of chronic illness; from the way in which people attribute meaning to their altered physical and social
In Pre-Colonial Latin America the main alcoholic beverage consumed was pulque. Drunkenness as a whole was frowned upon, but inebriation during times of celebration such as harvests, births, marriages and other religious ceremonies was considered a display of devotion; as Taylor notes, 'alcohol, especially pulque, was associated with periodic, peaceful rituals that expressed village solidarity'. However, colonisation introduced grape vines in greater abundance, which increased the volume of wine available. Also, the Spanish introduced the technique of distillation, and thus Indians had access to much stronger alcoholic beverages than they had experienced through the fermentation process. This wider range of alcoholic beverages not only increased access to alcohol, but also contributed to the secularisation of drunkenness, as the Spanish drank in moderation on a daily basis and the new drinks were not as closely linked to ritual as pulque was. The result of this was the prevalence of drunkenness on a more regular basis among Indians, and an increased involvement of alcohol in the regular indigenous diet as a whole. Colonialism led to the development of alcohol from a product consumed infrequently as part of ritual to a drink featured in the everyday lives of Latin Americans. William Taylor, On the whole it is easier to note the effect of New World products on the rest of the world, but the influence of European foodstuffs on Latin America cannot be forgotten. Whilst the Columbian Exchange did not alter the staple diet of natives, auxiliary items were readily adopted and greatly reformed the meals that Indians ate; the tortilla endured, but was consumed with meat, fruits and vegetables introduced by the Old World. New items not only meant changes in the variety of consumption; the increased range of products reduced the chances of starvation from crop failure, and also led to better sources of nutrients - most notably protein. 
As Super concludes 'nutritional regimes were an expression of the changing historical reality of Latin America' As the New World was embracing the influx of foreign foodstuffs, they were also enduring the commercialisation and slavery implemented by colonisers in their attempt to exploit these newfound lands for European benefit. The Columbian Exchange not only reformed the diet of Latin America, it altered the manner in which foodstuffs were produced and in some ways led to the subordination of natives under the colonial government. Super and Wright,", "label": 1 }, { "main_document": "England. One effect of the reformation was to abolish the ecclesiastical courts, removing the opportunity of appeal to the Pope. Many common lawyers advocated for the abolition of the church courts at the same time, however this would have required a fusion of canon law and Common law, which was never a feasible proposition. The Judaeo-Christian influence upon the common law can clearly be seen through the practices at the Inns of Court. The Courts symbolic eating rituals and oratory nature were likened to those of monastic origins. The order of dining was described to be \"the order of a lawful world, a symbolic order in which Justice, Rule and Law are to be understood to be expressed together through culinary measures, victuals and wine\". These rituals became fundamental to the teaching of law and legal practice. Fulbecke, William: Direction or Preparative to the study of Lawe, London, 1599, epistle, p3 taken from Goodrich, Peter: Commons, Common Land, Common Law, The Journal of Legal History, Vol 12, No.3, Dec 1991 p246-67 In essence the relationship between the Inns of Court and the Judeao-Christian theology was the defining influence upon the common law during the sixteenth and seventeenth century. The Act of Supremacy in 1534 resulting in the emancipation of England from the jurisdiction of Rome fundamentally altered the legal and political culture of England. 
The newly established imperial power of the King created a division between the common law and the monarchy, both vying for legitimacy and supremacy. Henry represented himself as the image of God. Henry de Bracton, The Laws and Customs of England (1200-68), trans. S.E. Thorne (Cambridge, Mass.: Harvard University Press, 1968-77). This became an issue of great conflict for the Judaeo-Christian theorists: two competing bodies fighting for divine legitimacy. The failure of natural law to deal with the resultant conflict with the imperium of the King was a significant theme within Titus Andronicus. However, it would be incorrect to argue that the common law abandoned all Judaeo-Christian influences, as this was not the case. Raffield, Paul: Titus, Troynovant and the Loss of Justice. During the rule of Charles I, common lawyers continued to oppose the concept of the \"intrinsical\" prerogative of the King, Coke stating that \"it is an act of right, not of grace, that we stand upon\". Charles I continued to be at odds with the common law and during his Personal Rule intervened numerous times in the regulation of the Inns of Court. The outbreak of civil war Paragraph taken from Lecture Notes 1642-1653 In the political context of the Act of Supremacy, it became necessary for the common law to adopt an alternative source of legitimacy in opposition to the monarch. The common law adopted two major theologies in its fight for supremacy: that of the Graeco-Roman theology, based upon community and citizenship, and that of the historical supremacy of the common law. I shall examine both of these in relation to their impact upon the development of the common law and the secular legal profession. The Graeco-Roman tradition is one emanating from a positive
During this the aim is to learn how to collect and interpret Obtaining the Identifying unknown Calibration relates the channel numbers of the multichannel analyser to actual energies. Without calibration the real energies of the However ratios such as detector efficiency would still be the same. The equipment used to detect The microprocessor is in a PC and the scintillator is a sodium-iodide crystal doped with thallium (NaI(Tl)). Sources of These sources were The software on the PC was used to reset any spectrum currently displayed and then start recording data until the spectral peaks were clear, that is, until the signal-to-noise ratio was high enough to accurately place the energy photopeaks of the sources in a definite channel. The channel numbers in which the respective peaks fell were defined as having exactly the energies stated above. The MCA then uses a linear equation to define the energies of all the other channels. Because of the way the calibration is carried out, systematic errors are likely to occur. Firstly, if the temperature of the apparatus changes after calibration, the detected energies will appear different. Secondly, the MCA might not be linear, in which case a calibration based on only two exact measurements is merely a linear approximation. Calibrating the device allows unknown photopeaks to be measured and can thus aid in the identification of unknown emission sources, which is part of the objectives of the experiment. Keeping the apparatus at a fixed temperature, or applying a temperature correction function, would prevent the calibration from drifting away from the true values as the local temperature varies. Having more sources of known photopeak energy would enable a non-linear calibration to be made, if the MCA is not linear. This part of the experiment will determine whether the MCA is linear or not. After calibration a source of A live time of 180 seconds was sufficient to achieve a good signal-to-noise ratio for the photopeaks to be discerned. 
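The two-point linear calibration and the resolution figure described above can be sketched as follows. This is a minimal sketch: the channel numbers are hypothetical stand-ins, not values recorded in this experiment, though 662 keV and 1332 keV are the familiar Cs-137 and Co-60 photopeak energies.

```python
# Sketch of the two-point linear energy calibration and the resolution
# figure described in the text. Channel numbers below are hypothetical
# stand-ins for illustration only.

def linear_calibration(ch1, e1, ch2, e2):
    """Return a function mapping MCA channel number to energy (keV),
    assuming the analyser is linear between two known photopeaks."""
    gradient = (e2 - e1) / (ch2 - ch1)
    offset = e1 - gradient * ch1
    return lambda channel: gradient * channel + offset

def resolution(fwhm, peak_energy):
    """Fractional resolution: FWHM divided by the energy at the centre
    of the photopeak, as defined in the text."""
    return fwhm / peak_energy

# Example: suppose the known photopeaks at 662 keV (Cs-137) and
# 1332 keV (Co-60) land in channels 331 and 666 (illustrative numbers).
energy_of = linear_calibration(331, 662.0, 666, 1332.0)
```

With more than two known photopeaks, the same idea extends to a least-squares or non-linear fit, which would address the non-linearity concern raised above.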
The decay energies, daughter nuclei and decay types were looked up in a data book and matched to the observed photopeaks recorded. The measured energy was then plotted against the expected energy to see if the MCA was linear. If it were, a straight line would be the best fit. If not, the MCA would be non-linear and the linear calibration would not hold true. The recorded data were saved in a file containing the counts per channel and the equation for the linear calibration. From this the graph on the output monitor can be reproduced. Refer to Figure 5 for a graph of the data in Table 1. From Figure 5 you can see the clear linear correlation between the measured energy and the expected energy for the same decay events. The errors are related to the resolution of the detector. This was defined as the FWHM divided by the energy of the centre of the peak. Thus the errors are", "label": 1 }, { "main_document": "confined to the small-scale activity of the secular clergy. The Andean regions housed a variety of native peoples, many of whom were dispersed in the remote and formidable physical terrains outside the empire's main cities. These assorted groups spoke numerous, often markedly different languages, and many had traditionally maintained little contact with their Inca lords in the years preceding the Spanish invasion, making them more resistant to intervention in their communities than had previously been encountered. These initial years then witnessed a relative apathy towards the extensive spread of the Catholic faith, the alteration of indigenous spiritual tenets having slipped markedly down the agenda. Henry F. Dobyns and Paul L. Doughty, (New York, 1986) P.84 Edwin Williamson, (London, 1992) P.98 Henry F. Dobyns and Paul L. Doughty, (New York, 1986) P.83 Mark A. Burkholder and Lyman L. 
Johnson, (Oxford, 1994) P.85 The Spanish were nevertheless from the outset ardent and successful in speedily extirpating the Inca nobility's official religion, destroying religious ties and beliefs that conflicted with the new state paradigm upon which their authority rested. Simultaneously removing non-compliant nobles and suppressing the Inca's elite religion became paramount to subverting the sovereignty, both political and spiritual, of the former order. Aside from the early revolts, the most significant of which was Manco Inca's rebellion, the Spaniards achieved marked success in pacifying regions and consolidating their position. The colonists were fully aware that the replacement of the existing aristocracy with members of the conquering contingent would provide the principal avenue towards bolstering a tenuous hold on the state. Evangelization enabled then the creation of formal religious and cultural connections with distinguished figures, facilitating the creation of a superstructure. Such an approach patently fulfilled a political need, appeasing latent Indian resistance and enabling the conquerors to insert themselves at the apex of a pre-established, stratified imperial structure. Accordingly, the colonists favoured the continuation of existing provincial rule, maintaining through local caciques a system of indirect governance based on hereditary figures in a similar way that the Incas had done. Henry F. Dobyns and Paul L. Doughty, (New York, 1986) P.64 Henry F. Dobyns and Paul L. Doughty, (New York, 1986) P.78 Whilst the Spanish colonists achieved success in stabilizing their position, the initial post-conquest years were not to witness any sustained extension of such 'spiritual conquest'. Whilst they could hardly be deemed to be self-governing, the immediate post-conquest period afforded the colonists a degree of autonomy, the Crown it seems being more focused on activities elsewhere in its empire. 
As such, despite the appointment of a Viceroy in 1542, the conquerors were largely unrestrained by royal controls. Initially dividing into factions loyal to the leading conquistadores of Francisco Pizarro and Diego de Almagro, the fierce competition for the untapped resources of the land and its dormant labour force incited a period of civil and political turbulence. Even by the middle of the sixteenth-century when Spain had established a more substantial presence in Peru, native uprisings would continue to occur, their impetus principally deriving from a spiritual perspective. A salient example of such response was", "label": 1 }, { "main_document": "in New York, and the lower the status of the store, the less it was used by shop assistants. This experiment highlighted the link between social status and pronunciation of language. Another phonological experiment was conducted to analyse the percentage of [h]-dropping between social classes in West Yorkshire and Norwich by Peter Trudgill in 1974 (Holmes, 2001:138). The results proposed the idea that the higher the social class, the less the percentage of [h]-dropping. Speakers from the Upper Middle class in Norwich dropped only six percent of [h]s) compared to speakers from lower social classes who dropped ninety-six percent of [h]s. The way we use language everyday is influenced by our gender. To what degree men and women speak differently is the cause of many studies. Firstly there is the 'sex-exclusive' theory - where men and women seem to speak different languages. Secondly there is the 'sex-preferential' theory - where men and women speak the same language, but Or one might question the whole concept of gender as a social variable and conclude that the debate has been totally exaggerated (Lecture handout notes ' In considering the 'sex-exclusive' idea, it is clear by observing language use in British society today that men and women are in fact using the same language. 
However in some parts of the world, female and male members of the same community literally speak different languages. A village near the Amazon Basin consists of men who use the language 'Tuyuka' and women who speak 'Desano' (Holmes, 2001:150). This is an extreme example of speech affected by gender (men and women marry into different tribes who have their own language) and it is more likely that in other societies men and women use the same language, but they have different features and forms - the 'sex-preferential' theory. The key question to ask when researching gender as a social variable of speech is, 'do males and females differ in their use of standard and non-standard speech?' Trudgill (2000:70) states that, 'women on average use forms which more closely approach those of the standard variety or the prestige accent than those used by men'. Many sociolinguistic studies have been conducted - whilst taking into account the social variables of social class, ethnic group and age - in order to answer the above key question. One such study was carried out in Norwich by Peter Trudgill in 1974 to ascertain the percentage use of the non-RP - He concluded that in all social classes, women use the higher prestige form In particular the highest gender difference is in the Lower Middle class where only three percent of females used the non-RP form compared to twenty-seven percent of males. The sociolinguist Jenny Cheshire (1982) carried out a case study in Reading to find out if one's gender was linked with their social network and use of the vernacular. From conducting an ethnographic study of two single sex social groups over a period of time, Cheshire found that females socialise in more closely knit groups than males, and in both cases the closer the", "label": 1 }, { "main_document": "an identity relationship with one another. Consider the true identity claim Saul Hudson = Slash. Let us now consider another person 'Duff'. 
Imagine a situation in which Saul Hudson is always accompanied by Duff, and an observer, who knows that Saul Hudson has a second identity, always sees the two together and always hears the two referred to as 'Slash' and 'Duff'. The observer makes the false identity claim that Saul Hudson = Duff and is always accompanied by a second person, Slash. Though a highly contrived conversation, we can imagine our observer consistently discussing Saul Hudson, Slash and Duff and all the definite descriptions associated with each term, and the mistaken identity is never revealed. Here we have a situation in which a false identity claim is being made, Saul Hudson = Duff, yet it is used in a consistent manner such that its falsity is never discovered. Could not 'heat = molecular motion' be precisely the same? One day we realise that the identity claim is false and that there are, perhaps, magic fairies that actually are identifiable with heat but are constantly conjoined in our past experience with molecular motion. Although Hume's division of ideas and impressions often leads to a distinction between a priori, necessary and analytic on the one hand and a posteriori, contingent and synthetic on the other, and their polarity means that we cannot make sense of contingent a priori statements nor of necessary a posteriori statements, this is unimportant for the current discussion. The current claim is that science produces predictive, testable hypotheses as opposed to rigidly designating relationships. This account of science is in accordance with the Humean attack on inductive reasoning. Even if the concept of rigid designation is correct there is still the possibility that the identity claim is false; it can be the result of At the moment the current hypothesis is that heat = molecular motion, and we can predict fairly successfully the behaviour of molecules and subsequent temperatures. 
There is the distinct possibility, however, that there is a far more plausible hypothesis waiting to be discovered, perhaps involving magic fairies, perhaps not. This is not to say that heat = molecular motion is a contingent identity claim, more to say that heat = molecular motion is a contingent hypothesis involving a necessary identity claim. Within any theory the identity claims must be necessary, but the theory can be contingent as a whole. Using the distinction between causal networks and causal structures Maxwell says that: This preserves the necessity of identity statements yet also shows, perhaps, where the illusion of contingency in 'pain is c-fibre stimulation' lies; in the thesis as a whole. Just as 'heat is molecular movement' is contingent as a thesis. Within the thesis of 'heat is molecular motion' there is a further illusion of contingency, which is the illusion that Kripke attempts to explain away. The illusion of contingency of the thesis 'pain is c fibre stimulation' can be explained, as Maxwell does, whether this is the illusion that needs explaining remains to be", "label": 1 }, { "main_document": "implanted in the flank, it takes as long as 17hr to induce Fgf-10 expression in the surface ectoderm in the lateral plate mesoderm contrasting with the rapid activation of other limb genes in response to Fgf beads. The first mediator is The second mediator detected by a similar experiment is And finally, the 3 These intermediate factors all have activities that are The third family that is critical for fore and hind limb development is the This family of gene encodes T-box transcription factors with the T-box domain binding DNA as a dimer (Rodriguez-Esteban Two members are specifically expressed in the hind and fore limb buds: These appear to be the most upstream factors in limb initiation. 
This is supported by 3 lines of evidence: mice carrying a null mutation in Recent work has also shown that there is a regulatory loop between Hox9 and Tbx for both limb initiation and the wing/leg identity specification (Takeuchi Figure 7 shows the expanded model of this reciprocal loop. Normal antero-posterior patterning in the limb depends on the zone of polarizing activity. This crucial signalling centre releases a number of signals (Niswander, 2003). The first signal identified, using whole-mount in situ hybridization, is Sonic hedgehog (Shh) protein, which has the main attribute of the limb morphogen in that it acts by forming a gradient across the early bud and is diffusible (Tickle, 2004; Riddle Degenerate polymerase chain reaction primers corresponding to a sequence highly conserved between Drosophila This resulted in the isolation of a 16.6kb cDNA clone whose gene was named Its attribute as a morphogen arises from graft experiments in which small blocks of limb tissue were grafted to the anterior of host limb buds and the strength of the ZPA activity was quantified according to the degrees of digit duplication, to determine the extent of correlation between the spatial and temporal pattern of ZPA activity and The outcome of this experiment was that Shh signalling is pivotal in controlling both digit number and identity (Tickle, 2004). Further evidence supports this statement as application of Figure 8 shows these mirror-image duplications. The patterning effects of Shh signalling require This suggests that the default state of the limb is to form many digits and that those 2 genes and their transcripts repress the polydactylous capacity of the autopod. In other words, Shh and Gli3 are required for digit identity, not the formation of the limb; and the type of digit that develops in a particular position depends on the signalling strength (Litingtung The Gli3 processed form acts as a repressor (Gli3R) and the unprocessed form acts as an activator.
The relative ratio of Gli3A:Gli3R is crucial in Shh signalling (Niswander, 2003; Litingtung Defects in Shh signalling can be seen in Upstream of It appears to be induced by retinoic acid (retinal is oxidised to retinoic acid by retinaldehyde dehydrogenase 2, Raldh2) as embryos treated with disulphiram (which inhibits RA synthesis) showed severe Hoxb8 down-regulation (Stratford In conclusion, Hoxb8 only induces Shh in the anterior forelimb. Interestingly, grafting and removal experiments showed that Hoxb8 is inhibited", "label": 0 }, { "main_document": "leadership of the TUC in alliance with Ernest Bevin, the Labour minister, strengthened and united the trade union movement. Moreover, it reinforced the TUC and Labour's historically significant association and created an integrated, respected and popular British left-wing. Thus the alliance of the TUC and the revived political left meant the British left was able to succeed in creating a lasting welfare settlement without a coalition with other interest groups. Nonetheless the important welfare reform of the 1940s cannot solely be attributed to pressure from the left; arguably it was the British right that ensured the implementation of lasting socio-economic changes when it accepted Keynesian economic management. As Peter Baldwin argues (1990: 109), the 'left may have done the pushing but the door was ajar', suggesting that Labour was only able to create the welfare state with the support of the Conservative party and their associates. Bourgeois parties had a direct and positive self-interest in reform as the main effect of the post war welfare state would be providing free social services to the middle classes. British rightist support was a decisive factor as whenever 'those in positions of power opposed social action (they)... largely had their way' (Thane, 1996: 247).
The construction of a national minimum of subsistence benefits, as recommended by Beveridge, was never fully implemented as it was opposed by the upholders of the last vestiges of classical economic thought. Moreover, the British Medical Association only allowed the creation of the National Health Service because Aneurin Bevan, the health minister, 'choked their mouths with gold' (Navarro, 1978: 23). The parties of the centre and right had an unprecedented association with reform as the post war legislation 'sprang not solely or even primarily from the strength of the left' (Baldwin, 1990: 112). Therefore the lasting welfare settlement created in Britain in the 1940s was successfully formed by the left only with the support and input of right-wing parties and bourgeois interests. In conclusion, the Swedish left succeeded in creating a lasting welfare settlement in the 1930s as it was a united force in coalition with other groups with a radically new and popular programme of reform. The Social Democrats already had a coherent policy of significant social and economic restructuring, thus profound reform was possible in Sweden. Moreover, the Swedish left's skilful unification of their own interests with those of other groups created labour peace and ensured support for their reforms, thus dissolving conflict over welfare policy adaptations. In contrast, Britain in the 1930s was dominated by a powerful ruling elite that upheld the economic liberal tradition to maintain the ascendancy of its own interests. The left in Britain was unable to maintain its tentative hold on government after the devastation of the Depression; thus it was too weakened to produce an alternative welfare plan. Moreover, there was arguably little pressure for reform as the unemployed were too demoralised, the far left too marginal and the radical economists too isolated to provide an alternative vision of British society.
Significant socio-economic reform was only achieved in Britain after World War Two, when the stable", "label": 1 }, { "main_document": "that selection graph. The position and size of the constraint box were determined by what material characteristics the graph was displaying and how many materials were required. In the case of the fracture toughness-fatigue limit graph, the constraint box was placed in the upper right hand corner; this selected materials with high fracture toughness and fatigue limit. Each graph stage used this constraint technique to eliminate all the unsuitable materials. There is a function in the CES software that allows the user to apply all the constraints from each selection stage to the database, which then shows only the materials whose properties meet all the selection criteria. With this function selected, the constraint boxes on the graph stages can be manipulated to either increase or decrease the list of possible materials. If there were any fixed constraints, such as a maximum cost or minimum allowable strength, then the CES software allows the user to add additional limits to the graph stages by means of a selection line which can be placed on the graph axis. Either side of the line can then be selected as the minimum or maximum limit. Another feature is the ability to plot a performance index line, where all the materials lying along the line have the same index. Using this elimination process, the list was reduced to 7 different materials (material groups were discarded, such as Cast Duplex Stainless Steel & Wrought Ferritic Stainless Steel). AISI 1095 & 9255 were selected from the short listed materials since they had the highest values for Young's modulus, tensile strength, endurance limit and fracture toughness.
The density of the materials was not particularly low, but this was typical of all the short listed materials, because lighter materials with the same strength values tend to be more expensive. The main deciding factor, since all the materials meet the strength and endurance requirements, was the price of the materials. AISI 1095 & 9255 were the cheapest of the available materials at 0.25-0.45 The stainless steel AISI 201 was also selected from the list for its excellent corrosion-resistant properties. AISI 201 is at the lower end of the allowable tensile strength region, with a maximum tensile strength of 860MPa, where the specified minimum is 820MPa; it is also a lot more expensive than the carbon and alloy steels at 1.5-2.75 However there is one very good reason for selecting stainless steel as a material for use in a pressure vessel, and that is if the pressure vessel is to contain pressurised hydrogen. If a pressure vessel is to contain hydrogen, then lower strength materials should be used, such as stainless steel types 304, 316, 321 and 347 or alloys 2024 and 6061. This is because high pressure hydrogen drastically degrades the ductility of highly stressed pressure vessel materials. This phenomenon is known as hydrogen embrittlement. In researching pressure vessels and materials for use in pressure vessels, it was observed that all large pressure vessels are manufactured out", "label": 1 }, { "main_document": "levelled off (Visser 2000). Non-union candidates, therefore, have gained support at the expense of the CGT. Interpreting this decline as a decline in support for the trade union should, however, be treated with caution.
Although it could be argued that the loss of support for the CGT reflects disaffection of workers due to the ideological commitments of the union, it could also be argued that with the CGT, as with the other confederations, the decline or relative stagnation of elected union representation could be attributed to the lack of emphasis placed on organisation and representation within the workplace. This stems, firstly, from the traditional desire of both trade unions and employers to neutralise the workplace as a place for bargaining (Eyraud and Tchobanian 1985); secondly, from the administrative overburdening of unions, with scarce resources, particularly since the Auroux laws (Goetschy and Jobert 1998); and thirdly, from the concentration of 'activists in state-constructed bureaucratic function' (Jeffreys 1996b: 509). These factors will have affected both the unions' ability to organise and their commitment to organising in the workplace. The bulk of non-union representatives elected onto comités d'entreprise and into délégués du personnel positions are in small and medium sized firms. In 1995 the five main confederations polled 81.4 per cent of votes for comités d'entreprise elections in firms with 1000 employees or more whilst the non-union vote was 6.4 per cent. In firms with 50 to 99 employees the non-union vote was 63 per cent and the five main confederations together polled only 33.4 per cent of the votes. This pattern of voting was similar for délégués du personnel elections (DARES 1998).
This evidence indicates the strength of unions in representation structures within larger firms but not necessarily the lack of The 'ideological commitment' of activists within firms affecting commitment to membership recruitment, the fact that some union members will stand for comités d'entreprise elections as individuals, not as union representatives, and the willingness of non-members to take action (Jeffreys 1996b) are important considerations if we are to attempt to assess union influence in relation to elected union representatives, particularly in smaller firms. Furthermore, recent evidence on comités d'entreprise elections shows that the non-union vote has actually declined as 'non-union candidates...(see) their gains made over the last 20 years checked', losing 'considerable ground to the major union confederations in the smallest companies' (Dufour 1998b). The recent publication of the 1998 comités d'entreprise results supports this view and reveals that the CFDT has been particularly successful in smaller workplaces (Dufour 2000). Thus, since 1995 there has been an overall reduction in support for elected The 'joint drive for unionisation in companies' launched by CGT and CFDT (Rehfeldt 1999) in 1998 is manifest, in this instance, as unions gain support in workplaces through greater organisation and a focus on membership as a means to strengthening trade union representation in the workplace. Lack of trade union organisation in areas of the private sector, rather than a very limited interpretation of influence, is apparent. It is too simplistic to suggest that the identification of a recent increase in support for", "label": 1 }, { "main_document": "Avril Taylor's book, like many participant observation studies, is an interesting and informative read.
Participant observation methods lend themselves to the study of 'deviant' groups of society and therefore, through the study's very nature, often result in more captivating and readable content than other research might. However, all research has flaws and limits. In this critique I will assess Taylor's research methods by considering how successful the book is in allowing for or avoiding the common limits of and problems associated with participant observation, under the following headings as identified by Layton-Henry (2005): observations may be limited by problems of access; the problems of ethical dilemmas; the risk that an investigator may be captured by part of the community; the problems of collecting systematic and accurate data and, more importantly in this case, of presenting data (not identified by Layton-Henry); the risk that an investigator may influence her subjects; and the risk that the group or association may be atypical, leading to unrepresentativeness. My main criticism will focus on the deductive approach adopted by Taylor. Quite uncharacteristically for participant observation studies, this approach can be seen to have detrimental effects on Taylor's research strategies and results. My second criticism concerns the processed and pre-interpreted nature of her results. Again, contrary to many other ethnographies, this could be seen to undermine the point of adopting participant observational methods. Both these points revolve around the problem in participant observation that, as the following definition highlights, scientific understanding is considered highly important and desirable in social research. In Taylor's attempts to produce scientific results, the impact of her study is arguably weakened.
Participant observation is perhaps most usefully defined as: "a process in which an investigator establishes a many-sided and relatively long-term relationship with a human association in its natural setting for the purpose of developing a scientific understanding of that association." Taylor saw that the benefits of participant observation would allow her to provide a picture of women drug users through their own perspectives: "Much of the text allows the women to speak for themselves, describing from their point of view the lifestyles which have evolved around their use of illicit drugs." Her other main reason is that "...no ethnographic study of female drug users alone has been undertaken anywhere". However, due to its qualitative nature, participant observation has many methodological risks and limits (Lofland and Lofland, 1984; Taylor, 1993). Firstly, access to the chosen group of study often proves difficult, and even once obtained, will affect the nature and success of one's study. Successful ethnographies, such as Whyte's 'Street Corner Society', can often depend on finding a sponsor or 'gate-keeper' who not only can introduce the observer to the subjects he/she wishes to study but is also seen in a favourable light by those subjects. Taylor was fortunate enough to find a contact similar to Whyte's. Like Doc, the local drug-worker was known and respected by many of the women drug users in Taylor's study: "He was accepted and trusted by the women..." Limitations of access also proved of", "label": 1 }, { "main_document": "in the extraordinary, unites the real world with the supernatural realm, elevating himself above the former and thereby gaining access to the latter. Not only is fate inevitably linked to character in The text is ambiguous with regard to Anselmus' background.
Certainly, we are told that he is a student, wears attire which is 'ganz aus der Gebiete aller Mode' (Hoffmann, 6) and that he has ambitions to be 'der Geheime Rat' (Hoffmann, 9). Evidently, then, his aspirations and intelligence elevate him above the common man, even if his misfortune has so far left him apparently doomed to a life of obscurity and to being labelled mad by those superior in station to him. However, an extraordinary ability to fantasise prevents him from being poisoned on too regular a basis by delusions of social mobility. For although he hopes, as I have already mentioned, one day to rise to the position of 'der Geheime Rat', and Registrator Heerbrand declares, in the Fifth Vigil, that he has the potential to be 'ein geheimer Sekretaer oder wohl gar ein Hofrat' (Hoffmann, 45) , as Konrektor Paulmann says, 'er will sich ja zu gar nichts applizieren' (Hoffmann, 45). Indeed, his overactive imagination seems to cancel out an active body, in contrast to such characters as the fiance of Veronika Paulmann's friend Angelika, who, in proving his physical heroism in battle, will soon be promoted 'zum Rittmeister' (Hoffmann, 50). Likewise, the picture of Veronika lying 'winselnd vor Jammer und Schmerz auf dem Sofa' (Hoffmann, 99) after the Ninth Vigil's drunken orgy both satisfies and contradicts the stereotypical picture of the nineteenth century damsel in distress. For she surrenders not only to the expectations of society at the end of the novella in accepting Registrator Heerbrand's proposal of marriage, but also to her own real desires, described by the narrator, in the Fifth Vigil, with much condescension but certainly more than a glimpse of reality. This fantasy in which she is Hofraethin to Anselmus' Hofrath is, therefore, false for two reasons. 
Firstly, because it is neither Anselmus' real desire nor his true fate to be Hofrath - this is proved both by the fact that he declares his love for Veronika only once and by the fact that he personifies that character required to marry one of Lindhorst's three daughters and which one often finds "bei Juenglichen, die der hohen Einfachheit ihrer Sitten wegen und weil es ihnen ganz an der sogenannten Weltbildung fehle, von dem Poebel verspottet wuerden" (Hoffmann, 89), as Serpentina explains in the Eighth Vigil. Secondly, because fate will decree rather rationally, at the end of the novella, that Veronika marry Registrator Heerbrand, who has been promoted to Hofrath. Clearly, one's class determines one's character, which, in turn, determines one's fate. Thus, in the conflicting romantic destinies of the Heerbrands and Anselmus and Serpentina, Hoffmann once again proves that, ultimately, supernaturalism or rationality must reign wholly over our lives and cannot be united. Since I have just mentioned the links between the mortal concepts of class and character and how they inevitably influence the supernatural concept of", "label": 1 }, { "main_document": "getting a bad impression at the last minute. See customer audit trail, appendix 9. Concerning the customers, it is advisable to seat all business and family guests in separate areas of the restaurant to avoid disturbance of the business clients. The host should be trained and aware of the different types of customers to make sure every guest feels comfortable. "The balanced scorecard is a management system that enables organisations to clarify their vision and strategy and translate them into action" (Balanced Scorecard Institute, no date). According to Kaplan and Norton (1996) the benefits of the balanced scorecard are that it not only measures financial aspects but also the customer perspective, the internal business processes and the learning and capability of the organisation.
Therefore it gives a wider picture of the actions of the business. The balanced scorecard in appendix 15 shows how Branca can measure the performance of the recommendations that were made earlier in the report. The best way to find out how customers perceive the changes is through questionnaires. Employing a consulting company to measure customer satisfaction was considered for Branca; however, it was decided that the cost involved would be too high for a non-branded restaurant. Therefore a single questionnaire incorporating questions on all the changes should be designed, to avoid exasperating customers. It is hard to measure the learning and capability of the organisation. Most of the recommendations do not involve employees having to be trained, as there will not be any changes to the service procedures. Quality and service recovery cannot be measured in financial terms. For the other recommendations, revenue should be tracked to monitor their effectiveness. This is especially important for the change of the menu, as it was suggested that this be done four times a year. If no significant increase is seen, altering the menu only twice yearly should be considered. Should the restaurant make any future changes, these should be monitored to assess their effectiveness. This should include financial as well as the other introduced measurement perspectives. Through examining these modifications continuously, management is able to detect any discrepancies that occur and take appropriate action. It is essential that the business is aware of changes in the macro-environment. It is also important for Branca to monitor its competition. These issues can result in opportunities and threats to the business, just as any internal influences can become strengths or weaknesses. From analysing all these aspects, recommendations have been drawn.
Branca should implement various changes, the main two being the following: the menu needs to be changed more frequently and the internet should be used to its full extent. This will result in Branca gaining a competitive advantage, having satisfied customers and increasing revenue.", "label": 0 }, { "main_document": "fit to the data seems to be a multiple linear regression model using PEmax as the dependent variable. However, given the number of variables involved in the data set, careful analysis will be required to determine the variables chosen to use in the regression. An important aspect of the initial analysis will be data reduction and summary to examine relationships between the variables. Due to the number of variables measuring the same underlying features (and therefore expecting to have high correlations with each other) multicollinearity will likely be a consideration in the regression. The data will also need to be analysed to determine if any anomalous results or outliers are present. The table below describes each of the variables individually with several summary statistics suggesting the location and spread of each variable. Additionally, 44% of the group was male and 56% female, suggesting that there is a fairly even male-female spread in the population. The table gives a fairly good indication of the location, range and general spread of the data. Weight, age and RV in particular seem to have a large amount of variation. In particular, none of the subjects have a BMP over 100 so all of the subjects are below the average body mass index for their age. This is a fairly good indication that most of them are suffering from malnutrition. There do not seem to be any cases of extreme outliers within the variables - all the minimum and maximum values are within three standard deviations of the mean, and the table below demonstrates that all the points fall within the inner fences (at 1.5 times the interquartile range) of the data. 
This is not definitive evidence that there are no outliers present in the data, but there are no clear extreme values that appear to be caused by anything other than normal variation in the data. The values of subjects 15 and 25 for PEmax come close to being considered mild outliers and warrant further consideration, but there are no values which need to be eliminated immediately or investigated because they appear to be caused by experimental error. The table of quartiles also gives some indication of the spread of the data within variables that the simple summary statistics did not highlight. Many of the variables appear to be quite heavily positively skewed. BMP, RV and PEmax all have median values that are far closer to the value of the lower quartile than the upper quartile, indicating skewness of the data. This is further emphasised by examining the distribution of PEmax. The above histogram gives an indication of the distribution of PEmax. It appears as though PEmax is roughly normally distributed but highly positively skewed, with a heavy right tail. The extreme values could be considered outliers but the number of them suggests they are not. With a relatively small sample size it is difficult to determine whether a variable is actually normally distributed but the histogram seems to indicate that PEmax has some significant deviations from the normal and a normal probability plot of PEmax", "label": 1 }, { "main_document": "that no approach is universally applicable and therefore management must be flexible, and at times, even combine different methods. The task of management became to study the situation at hand, analyzing technological, task and human constraints, to strategically decide on the best approach. There has also been a tendency to push decision making as close to the point of action as possible, so as to allow for greater flexibility and quicker adaptations.
Knowledge Management is probably one of the most relevant topics in organizational behaviour today. With the advent of globalisation and the improvements in communication systems, work and organizational structures have taken new forms. Teleworking and virtual organizations emerged out of these developments, and also helped force the development of more flexible work organizations and changes in management practices. Management is an evolving concept, and every day new approaches are studied and developed. It responds to social-political changes and is culturally sensitive. Many of the better ideas observed in a different culture's approach to management are 'recycled' and adapted, as happened with the Japanese style of management and its focus on quality management and group development. Management practices are dynamic and must therefore be studied as such.", "label": 0 }, { "main_document": "back for their friends or colleagues to strengthen social and family ties. Mok and Lam (1997) attributed the Taiwanese fondness for overseas shopping to their cultural value of maintaining social relationships. As a result, souvenir shopping by Chinese tourists is a very popular activity at destinations. From this point of view, the souvenir shops in Oxford, for example the one in the Oxford Story, should display goods and products that represent the identity of Oxford to attract Chinese tourists. Such products not only help them to preserve a wonderful memory of the trip but also serve to legitimize and commemorate their visit to Oxford. Accordingly, logo-embossed products linked with their trips, mugs, and t-shirts are favourite items among Chinese tourists. Introductions to special items and recommendations on the choice of goods from shop assistants will be appreciated by Chinese tourists.
Over the next 10 years, increasing disposable incomes will mean that Chinese tourists will buy more items during their overseas trips, and that possibly their tastes will become more like those of Taiwanese or Hong Kong tourists. The managers should bear in mind that even though Chinese tastes in souvenirs differ, most Chinese tourists prefer gold colour, which represents status in Chinese traditional culture. Thirdly, Wang and Lau (2001) stressed that Chinese cultural values are mostly affected by Confucianism and that harmony is regarded as the foundation of Chinese culture. Chinese cultures emphasize self-restraint and the avoidance of negative emotions, criticism, negative opinions and complaints (Reisinger and Turner, 2003). Leung (1991) explained that sometimes in China one answers a negative statement with a "yes" (cited in Reisinger and Turner, 2003). The value placed on harmony has profound implications for Chinese consumer behaviour where customer satisfaction and complaints are concerned. Influenced by these values, Chinese people tend not to complain to service providers directly in order to maintain harmony (Mok and DeFranco, 1999). From the point of view of destination managers, they need to be more sensitive to the customer satisfaction of Chinese visitors and take a more active approach to obtaining feedback from them, such as providing a message book. Chinese tourists are likely to become more discerning consumers as they grow wealthier, so service providers will need to be very sensitive to their feedback and develop new mechanisms to obtain it. In Chinese culture, one's face refers to the prestige one possesses by virtue of social achievements such as wealth, talents and social status (Mok & DeFranco, 1999). Ap and Mok (1996), in their study on leisure travel motivations of Hong Kong residents, found that prestige is an important reason for travelling abroad (cited in Mok & DeFranco, 1999). So some Chinese regard overseas travel as a prestigious product.
From this point of view, Chinese travellers are likely to be more brand conscious than Westerners. The symbolic values of products and services are as important to Chinese tourists as their functional values (Mok & DeFranco, 1999). Therefore tourism products or services marketed as status symbols, or having "face enhancing" qualities, will appeal to Chinese tourists. Clarke (2000) pointed out that", "label": 0 }, { "main_document": "which became a serious burden during downturns. The dominance of world markets meant labour intensity was again offset by economies of specialisation, with each shipyard concentrating on a small range of vessels. Moreover, shipbuilding is another good example of Marshallian economies of scale, with a large number of specialised producers highly concentrated in the North East of England and Western Scotland. Shipbuilding therefore shows that the latest technology, which was developed in American and German shipyards, was not necessarily the most efficient. We must remember, however, that the absence of an Anglo-American productivity gap reflects the fact that mass production techniques had not been successfully applied to shipbuilding at the time. British shipbuilding also benefited from the disruption caused by the American Civil War. As cited in Broadberry (1997:175) The final area where Britain competed successfully against foreign competition was where demand factors allowed the early adoption of high throughput technology in Britain. Against an uncompromising background of free trade in Britain which allowed subsidised sugar beet in from other countries Here demand was largely homogeneous and branding and packaging were successfully used to differentiate the product. Hugill In fact by 1909/07, comparative labour productivity figures show Britain being close to the US level and at double the German level, as shown in table 5.
due to bounties which were only abolished in 1902 at the Brussels convention (Broadberry 1997: 200) As cited in Broadberry (1997:200) Conversely, there are also sectors of industry where Britain did badly. These are where high throughput technologies were successfully developed in the US. Unfortunately, in these industries demand conditions in Britain prevented their uptake. These shall be looked at with the example of the automobile industry. The motor vehicle production industry was very small in Britain compared to the US, with the leading vehicle producer in the US producing 202,667 cars in 1913 as opposed to just 3000 in Britain. Indeed labour productivity in the US was roughly double that of Britain. This is shown in tables 6 and 4. The much smaller scale of production in Britain was shaped by the lack of a mass market. The US producers had an advantage in that they could reap economies of scale from a large homogenised market that reflected the demand conditions in the US. Conversely in Britain, per capita incomes were not only lower, but income was also more unequally distributed. This meant that automobiles were unaffordable to most of the population, and also that those who could afford a car did not want one that was the same as everyone else's. Lewchuck (1979) This is largely explained by the differing production strategies based on the lack of a mass market and factor endowments in Britain, where there was an abundance of skilled labour experienced in the operation of metal working machines. This resulted in a rather more labour intensive process. British producers also faced difficulties in raising sufficient capital to buy metal working machines. As cited in Broadberry (1997:182) In the chemical industry there is a widespread perception of poor performance and a", "label": 1 }, { "main_document": "which is practical and, with the right amount of resources and funding, could be implemented in a real-life scenario.
My next aim was to design the project using the most sophisticated, state-of-the-art technology. While doing my research I came across various technologies that I could have used, but I went with .NET because of its benefits and its growing attraction in financial firms. After careful consideration I decided to implement the project in .NET technology supported by a SQL database. Before starting the final year I did an intensive three-month course in .NET to gain an in-depth understanding of the technology, and successfully gained the accreditation of Microsoft Certified Solution Developer. The technology aspect is covered in more detail in Appendix B. The purpose of this project is to demonstrate the advantage of bringing various methods together to design a single model to carry out online transactions in the most efficient manner. The following are the objectives to be achieved by the end of the project: Design a new model for electronic commerce with high-speed, efficient verification and strong authentication mechanisms that would allow legitimate users easy access to carry out online shopping payments while thwarting fraudulent transaction attempts by others. The project aims to provide a reliable and user-friendly environment involving various phases. This feature has been covered in Section 3 of the report. A Fraud Detection Engine built in modules using the concepts of "Neural Networks" and "Data Mining", where fraud detection rules with certain general characteristics are included in the standard model and modules with merchant requirements are added as needed. This feature has been covered in Section 5 of the report. Reduce the probability of fraud by sending a random four-digit number to the customer's mobile phone in order to identify the real customer.
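The random four-digit verification step described above can be sketched as follows. This is a minimal illustrative outline in Python rather than the project's actual .NET implementation, and the function names are my own:

```python
import secrets

def generate_otp() -> str:
    """Generate a random four-digit one-time code (0000-9999)."""
    return f"{secrets.randbelow(10000):04d}"

def verify_otp(expected: str, entered: str) -> bool:
    """Compare the code sent to the phone with the one the customer
    typed in, using a constant-time comparison."""
    return secrets.compare_digest(expected, entered)

# In the model described here, the code would be sent to the
# customer's registered mobile number via an SMS gateway (not shown),
# and the transaction proceeds only if verification passes.
```

In a deployed system the code would also expire after a short window and permit only a limited number of attempts, so that a stolen card alone is not enough to complete a transaction.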
The implementation would make it difficult for a fraudster to successfully complete the transaction, as with a stolen card they would also need that person's mobile phone to clear the initial validation process. This feature has been covered in Section 5 of the report. Design a transaction management application for merchants to manage transactional activities. This involves the system for manual review of the transactions that are referred by the system. It provides all the essential features which would be present in a management application in a real-life scenario. This feature has been covered in Section 8 of the report. Reduce major security threats present in an Online Transactional Model, such as spoofing, unauthorised disclosure, unauthorised action and data alteration, by implementing authenticated Secure Socket Layer (SSL) digital certificates to provide crucial online identity and security to establish trust between parties involved in online transactions over a digital network. This also includes encryption of data, the process of transforming information to make it unintelligible to all but the intended recipient, as well as other security features such as code-access, database and web-service security implemented in .NET, IIS (Internet Information Service) and SQL Database. This feature has been covered in Section 9 of the report. The system was developed based on the software development life cycle, which is outlined
A similar pattern is applied to quantify discrete units of volume; the volume (in Planck units) of a particular region is given by the number of nodes of a spin network within it. The result of a very large spin network when viewed on a grand scale is therefore a seemingly continuous quantum geometry of space, as required by the classical regime. Furthermore, as a spin network evolves over time, it sweeps out a 'spin foam' over a manifold of one dimension higher than the dimensions of the corresponding spin network. This notion of spin foam can then be applied to the gravitational collapse of black holes, where a very large volume in a spin network evolves to a single volume over a spin foam manifold. So we have seen, then, that Loop Quantum Gravity, an elegant, background-independent theory which quantizes space-time, provides a consistent basis for a quantum theory of gravity. Both String Theory and Loop Quantum Gravity are strong candidates for a theory of Quantum Gravity, but is one 'better' than the other? Both theories seem to provide a fundamental basis which can describe gravity at the Planck scale whilst doing away with the renormalization problem, which states that treating the graviton simply as another particle field would result in gravitational interactions of infinite values which cannot be mathematically cancelled to yield finite results. However, both theories, of course, have their downfalls. It can be argued that LQG respects local Lorentz invariance but violates the invariance on the global scale, whereas Lorentz invariance does not pose a problem for String Theory. Superstring Theory gives a description for particle scattering experiments, something which has not yet been achieved by LQG.
Nevertheless it does seem as though String Theory has quite a few more setbacks than LQG: the understanding of String Theory requires new fundamental principles; it predicts that space exists in nine or ten dimensions; unknown elementary particles are hypothesized due to the nature of the oscillations of the strings; it provides only a very limited explanation of the entropy of black holes; and there are currently five different forms of string theory, which differ in terms of what type of strings they allow and how they implement supersymmetry. Without doubt the key disadvantage of String Theory is the fact that it has space-time as a background metric, i.e. it is a background-dependent theory which, although it may be a useful tool for approximation, cannot be a fundamental theory. A complete quantum theory of gravity requires a background-independent manifold, which LQG is able to offer, and it is this key feature that puts LQG in pole position. The problem with validating either theory, or any scientific theory for that
Block et al (1978) warn of these problems by claiming tests can be turned into an 'engine of cruelty in the hands of the blundering or prejudiced'. Low intelligence scores are in some cases connected with other environmental factors such as housing conditions and parenting style (Lee et al, 1988). To deny a child help through an intervention program because of their genetic limitations, when this program may also alleviate other environmental pressures, seems unfair. If Jensen's theory encourages people to ignore environmental components of intelligence there is a possibility that intelligence tests could be abused, and children may go without help. If this were ever the case intelligence tests would not just be unworthwhile, but damaging to children (Block et al 1978). Intelligence tests have the capacity to be hugely worthwhile if used fairly and without abuse. The immediate benefits of intelligence tests are mainly directed towards those children who achieve the highest marks, while those who are not as clever are labelled as such. Through intelligence testing intervention programs are formed, which enable children to feel they can work to achieve a higher level of intelligence test performance. However, until a firm definition of intelligence is produced, which also fits with the lay person's description, intelligence tests can only go so far towards being worthwhile. Yet it is worth considering that a universal definition of intelligence may never be reached, because different cultures see different aspects as being important, in the same way that intelligence experts and lay people have different opinions.
It may be that the best definition of intelligence to use is one tied closely to the country's cultural beliefs, and this will be when intelligence tests become really worthwhile.
As Robert Hampson suggests, 'the narrative strategies of both Conrad and Marlow work to subvert many of the assumptions accepted by their audience' Robert Hampson, Robert Hampson, ' Conrad uses the enigmatic narrator, Marlow, to carry forward his subversive narrative. Unlike traditional Victorian narrators such as the technically distant, removed narrator in George Eliot's Marlow is such an unreliable narrator and only too ready to interpret facts in the light of his own prejudices." Conrad breaks the stereotype of the omniscient narrator that spoon-feeds the reader information that is taken for granted as 'truth'; Marlow's narration instead '...is essentially reflective', making George Eliot, K. K. Ruthven, Richard Adams, As previously stated, Conrad is generally agreed among critics to be a modernist writer. Richard Adams' description of modernist writing is useful, suggesting that, Richard Adams, Other than T. S. Eliot's Baldick, Chris, Conrad does not utilise stream-of-consciousness in Richard Adams notes that ' An example of interior monologue in Baldick, Chris, Richard Adams, This passage is contemplative; its style is reflective and shows the psychological process of Marlow revisiting previous experiences. Sigmund Freud came up with his theory of psychoanalysis just after Richard Adams, Written at the end of the nineteenth century, the daunting fear of entering a new century is reflected in the novella's dark story. David Bradshaw discusses the turmoil of the era in which Richard
The incidence of a sales tax, for example, falls more heavily upon the producer than on the consumer. A cut in sales tax increases income by lowering firms' production costs and increasing aggregate supply. By contrast, a cut in income tax impacts consumers' disposable income, which may then be saved rather than reinvested into the economy. A high rate of income tax also lowers the effectiveness of government spending, as it reduces disposable income so that less of a change in G is multiplied around the economy. Mathematically, the taxation and government-spending multipliers are larger if taxation is assumed to be exogenous than if an income tax rate t is applied. Without income tax, the government-spending multiplier is 1/(1 - c), where c is the marginal propensity to consume; with a proportional income tax it falls to 1/(1 - c(1 - t)). Thus, tax cuts based mostly on income tax may not be very efficient: in France, drastic plans to cut tax rates were criticised for the very reason that they "touch only income tax, not the other host of contributions that choke the tax-payer" ("When cuts aren't kind enough". 9/24/2005, Vol. 376, Issue 8445). In summary, fiscal policy is most effective when investment is responsive to income, marginal propensity to consume is high, and money demand is sensitive to the interest rate. It is less successful when the rate of income tax is high, when investment is sensitive to the interest rate, or when money demand is sensitive to income. Thus, while government spending in Argentina was "the central reason for impressive economic growth" in 2005 The effectiveness of fiscal policy varies greatly from one economy to the next. "Consumer market insights: Consumers in Argentina are Jan 2006, Vol. 14, Issue 1, p3-3, 1/3p Hashemzadeh, Nozar, and Saubert, Wayne, "The Effects of Bush's Tax Cuts on Income Distribution and Economic Growth in the United States." 2004, Issue 3, p111-120
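The claim that the multipliers shrink once a proportional income tax is introduced follows from the standard algebra of the simple Keynesian model (a textbook sketch, with c the marginal propensity to consume, t the income tax rate, and a autonomous consumption):

```latex
Y = C + I + G, \qquad C = a + c(Y - T)

\text{Lump-sum tax } (T = \bar{T}):\quad
Y = a + c(Y - \bar{T}) + I + G
\;\Rightarrow\; \frac{dY}{dG} = \frac{1}{1 - c}

\text{Income tax } (T = tY):\quad
Y = a + c(1 - t)Y + I + G
\;\Rightarrow\; \frac{dY}{dG} = \frac{1}{1 - c(1 - t)}
```

Since 1 - c(1 - t) > 1 - c whenever t > 0, the multiplier is smaller under income taxation: with c = 0.8, for example, it falls from 5 at t = 0 to roughly 2.8 at t = 0.2.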
But if the external environment changes and both sides feel the need to make any change to the contract, negotiation can be initiated. Language: only English can be used as the communication language, but given the English level of each company's officers, the contract could have an additional local-language version. Law: we both obey Amazonia's law and international law. Taxation: only one tax policy. Currency: Place of delivery: FOB. HC could be responsible for supplying production machinery, supervising the manufacturing and arranging exports. CNI should be responsible for worker management and domestic sales. The position of general manager should be allocated to HC and the position of deputy general manager to CNI. To sustain continuity in individual departments, the position of departmental chief could be allocated to CNI and that of deputy chief offered to HC. At the senior level of management, the joint venture should have 15 people from HC and 35 from CNI. Cross-national and cross-cultural diversity may add complexities to strategic conflict, and IJVs are seen as operations where "interests do not fully overlap and are often in conflict" (Peng and Shenkar, 2002: 92). In particular, IJVs are a forum where stakeholders may disagree about whether the IJV should adopt localized practices reflecting the host country (localization) or emulate an international parent's globalization or standardization (Pucik, 1985). From the perspective of "globalization", the joint venture is an extension of HC and our HRM operations. The following are assumptions about adopting "globalization": our current MNE HRM practices are clearly superior to CNI's; we could hold a significant advantage in the IJV's industry and particularly in the international sector; we could dominate the partnership regardless of equity and control many or all management decisions. These assumptions will tend to be "drivers" of the strategic actions.
But in the IJV environment, disagreements between the two partners may arise. Diversity in strategy and beliefs often results in "conflicting voices", which increase with cultural heterogeneity (Hoskisson et al., 2002). In contrast to standardization, there are many advantages to localization: local HRM practices are much easier to operate in the local environment because of cultural and philosophical differences, and local managers have better knowledge of local constraints. The ability of local managers to control access to the labour market and product distribution channels, expertise and information may effectively stymie standardization efforts, either directly or indirectly. All in all, globalization practices are much more favourable for HC in the short term. But considering our long-term planning in Amazonia, we would do better to choose localization, which is also more acceptable to CNI. In the next five years, we would remove CNI's current old and slow machines and build up new advanced machines in their factory. Five years later, with the increasing demand for equipment, we could buy a new factory in Amazonia. The location of the new factory needs to be researched and negotiated later. HC will provide all the fast and advanced machines and employees skilled in the relevant know-how. Meanwhile, HC could continue its new product development in
Seeing as there are few actual pure monopolies in the world today (one could think only of Cadbury's chocolate being available in the London Underground), we are looking at the firm-specific demand curve: the ability of a monopolist to affect the price of a good with its output decision. The first factor that affects the firm's price elasticity of demand is the elasticity of market demand. If there is only one firm then its demand curve is the market demand curve, and hence its market power depends completely on the elasticity of market demand. What affects this? Normally, the availability of close substitutes has an impact; however, one of the assumptions related to monopolies is that there are no close substitutes, seeing as there is only one firm in the market. In a wider scope, there may be competitors; for instance the post office may have a monopoly over posting, but it is unable to control the cost of email, so the price of postage may be price elastic. In a monopolistic market, where there are a few competitors, another factor is the portion of the consumers' budget that the good takes up. The smaller the portion of the budget, the smaller the price elasticity; for instance a pin manufacturer is unlikely to see its sales fall if it increases prices. On the other hand, if there is more than one firm in the market, the market demand curve provides a lower bound to the firm's elasticity of demand. Therefore if a market is particularly elastic, with a large number of substitutes, such as coffee or tin, producers will find it difficult to raise price above marginal cost. Conversely, inelastic products, such as oil, have a larger potential for monopoly power, as the oil cartel OPEC showed in the 1970s and 80s, when prices were raised far in excess of marginal cost (Pindyck, R.S. and Rubinfeld, D.L. (2001), p. 345). The second factor that affects monopoly power is the number of firms in the market.
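The link between elasticity and pricing power discussed in this passage is usually expressed through the Lerner mark-up rule, (P - MC)/P = -1/Ed. The following short sketch, using my own illustrative numbers rather than figures from the text, shows how the mark-up grows as demand becomes less elastic:

```python
def monopoly_price(mc: float, elasticity: float) -> float:
    """Profit-maximising price implied by the Lerner rule:
    (P - MC) / P = -1 / Ed, i.e. P = MC / (1 + 1 / Ed).
    Demand must be elastic (Ed < -1) at the optimum."""
    if elasticity >= -1:
        raise ValueError("a monopolist never prices on the inelastic part of demand")
    return mc / (1 + 1 / elasticity)

# Highly elastic demand (many substitutes, e.g. coffee or tin):
# price stays close to a marginal cost of 10.
print(monopoly_price(10, -5.0))    # 12.5, a 25% mark-up over cost
# Barely elastic demand (few substitutes, e.g. oil):
# the same marginal cost supports roughly a fivefold mark-up.
print(monopoly_price(10, -1.25))   # roughly 50
```

Note that as the elasticity approaches -1 from below the implied price grows without bound, which is why a profit-maximising monopolist always operates on the elastic portion of its demand curve.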
As the number of firms increases, the monopoly power of a firm falls, ceteris paribus. However, what also matters is the number of 'major players' in a market. A market that has one firm accounting for over 90 per cent of sales, with another 20 firms accounting for the rest, is said to be highly concentrated, as the large firm will still have monopoly power. This is the case in the computer operating systems market with Microsoft. Markets with few players will always fear competition, as this will increase elasticity and push the price toward marginal cost. Hence barriers to entry
Alginate consists of a long sugar chain with carboxyl groups projecting out. Calcium ions cross-link the carboxyl groups to solidify the alginate. Apart from alginate beads, other materials such as gelatine, polyacrylamide and carrageenan can be used to trap enzymes. Alkaline phosphatase was used in this experiment; it converts colourless p-nitrophenyl phosphate (pNPP) into phosphate and p-nitrophenol (pNP, which is yellow at high pH). The yellow colour allows the extent of the reaction to be measured by a spectrophotometer at 420 nm. The reaction is carried out in 15-minute assays; it is stopped by adding 1M sodium hydroxide to denature the enzyme. At extreme pHs, the hydrogen bonds within the enzyme break, denaturing the enzyme. The procedures described in the laboratory manual were carried out with a few modifications. No enzyme was added to the control when the activity of the native enzyme was measured. When determining the enzyme activity of the immobilised enzyme, water was added instead of enzyme in the control. The reusability of the beads was determined by repeating 15-minute assays. The liquid from the previous assay was removed using a pipette and its absorbance was measured. The beads were washed using 3 ml of washing buffer. The washing buffer was then removed and fresh assay buffer was added. The tube was incubated at 30°C. After 15 minutes, 1 ml of 1M sodium hydroxide was added to stop the reaction. The absorbance at 420 nm was measured and recorded. This was repeated six times. The assay of total protein was not carried out due to the poor results produced in previous years. The amount of enzyme leaked out was investigated by incubating 4 beads in 2.7 ml of assay buffer for an hour. The bathing liquid was removed and was incubated with 0.3 ml of pNPP for 15 minutes. The reaction was stopped by adding 1 ml
Similarly, at various points in the interaction, there are complete overlaps of speech whereby almost everyone is speaking about a different topic or referring back to something previously mentioned, for example: This shows the difficulty of having such a large number of people involved in an interaction, as here M is talking about one topic whereas N and D are talking about something completely different at exactly the same time. The speech of all three is consequently interwoven due to constant interruptions, and therefore the discussion is completely disjointed. Adjacency pairs are another prominent feature of the conversation analysis approach and are concerned with how participants in the conversation position themselves socially in relation to their interlocutors. In the majority of cases, the adjacency pairs are either an accusation or opinion from the interviewer followed by a denial or agreement from the PM. The group extract contained a large number of adjacency pairs, although these mainly involved opinions and subsequent agreements, for example: The above extract shows a consistent flow of adjacency pairs where the participants are continually expressing opinions, which are then responded to by other members of the group, showing a high level of conversational cohesion. Speaker selection is important in this approach, although with there only being two participants in the political exchange, it was always the other participant who took over the turn. In the group interaction, it was mainly self-selection, as when a question was asked or an opinion was expressed, it was spoken to the group as a whole, not to individual people, and it was therefore up to anybody to take over the turn. Another approach to the analysis of spoken discourse is interactional sociolinguistics, which focuses on the speech features people use when interacting with each other.
Turn-taking (as previously covered with conversation analysis), as well as the use of immediate responses and minimal responses, forms a large part of this approach. In the group exchange, these minimal responses or back-channels were usually made by two or more people simultaneously as a reaction to a previous statement, for example the use of 'yeah': Throughout the group interaction, it is clear that each member was pursuing the same conversational goal for the purpose of the assignment, and the use of these minimal responses is a clear indication of cooperation between the group members. They are used to motivate the speaker to continue by showing understanding and interest in what they are saying, without the listener wishing to take over the speaking turn. By contrast, the interviewer has no interest in showing agreement and understanding to the Prime Minister; rather he aims to be abrupt and argumentative in order to provoke a reaction which could cause controversy and debate. Tag questions are another feature of this approach but were not present in much quantity in either of the two contrasting speech extracts. In the group exchange, only one tag question was asked, when M enquires, 'no wasn't actually his baby, was it?' but
We now consider the methodology of the experiment. Firstly, only one sub-test was used from the Test of Everyday Attention for Children (TEA-Ch), the 'Opposite Worlds' task. As noted earlier, the TEA-Ch was designed with different tasks to test different types of attention. In using only the one task, we have essentially tested only one type of attention, which in this case is attention in a verbal inhibition task. Whilst the idea of inhibition in general is appropriate to the domain of investigating traffic accidents, the 'walk don't walk' sub-test of the TEA-Ch was developed for the more specific type of attention that is important in considering whether it is safe to cross a road. Hence, to a certain extent, using only the 'Opposite Worlds' sub-test has reduced the validity of the experiment, and future investigation into the same area might consider using different sub-tests. Another potential methodological problem was the fact that in the distraction condition, participants were distracted for only half the time, as the TV showing cartoons was switched on only half-way through the task. Had the distraction been present for the whole of the test under this condition, we might have observed even more impediment of attention skills, reflected in higher scores for this condition. However, this should not affect the significance of the outcome of the experiment; it should merely be noted that the findings from the distraction condition are probably not as high as they would otherwise be. Finally we discuss the generalisability of the findings. It should first be noted that there is a very different motivation for completing the attention task than there is for crossing a road. Whilst children were told that they would be rewarded for completing the task (with a cartoon), this does not compare with the possible consequences of failing to exercise due attention when crossing the road, which could lead to a serious accident.
The immediate sensory information from a road-crossing context may lead to greater focus of attention on the task at hand. In the attention task presented here, there was no motivation to complete the task successfully, merely to complete it. In contrast, there was no motivation for not finishing the test correctly, as this would not result in finishing the test any faster. So we could assume that the findings are, at least, an indication of the trend of attention between the sexes, even if
Meyer-Levy's previous research (1988) demonstrated that females tended to explore more detailed information before making decisions, while males relied on more objective cues, only the available information, and their own opinions. Compared with males, females are regarded as more \"visually-oriented, more intrinsically motivated, and more romantic\" (Holbrook, 1986). Over two decades later, recent research examined this argument and showed that, in modern times, gender differences influence the attitudes and preferences of online users when searching for travel information online, and hence have an impact on decision making and consumer behaviour (Kim D Yet, as online travel information includes literal context and pictorial representation, what communicative function does pictorial representation (such as still and dynamic visual images) serve in online tourism marketing? Moreover, how do perceptions of pictorial images, and hence consumer behaviour, differ from a gender perspective? Because little research related to the issues mentioned above has been performed, this paper draws on existing literature review and secondary data analysis in an attempt to bridge the academic gap and contribute a better understanding of the effectiveness of current tourism marketing strategies applied on the Internet and of the differing role of visual communication on online tourism product websites for female and male users. This paper is also anticipated to offer implications for further studies in the field and recommendations for more effective marketing approaches on tourism destination websites for managers and advertisers in the industry. Communication in marketing has long been seen as merely a promotional tool, although Varey (2001) argues that, in modern times, every communication undertaken is meant to be a mode of marketing. 
Since the marketing concept is centred on exchange, theories of communication are supposedly based on exchange and ought to account for 'the co-production of identity, meaning and knowledge' (Deetz, 1992). That is", "label": 0 }, { "main_document": "Whenever we are introduced to a new person, we make judgements about them based on their looks, their mannerisms, what clothes they are wearing and, although we may not realise it, on the language they use. In fact, even if we cannot see the person and only have a printed sample of their speech, we are able to analyse the linguistic evidence to make assumptions about that person's age, gender and social status. Trudgill believes that 'the internal differentiation of human societies is reflected in their language' (2000: 23), suggesting that language is affected by social variables to such an extent that we are able to associate certain language traits with different divisions of people within a society. This essay aims to discover how we are able to make these associations, by examining the effects of firstly class and then gender on the way we speak. It will refer also to the way in which these two social factors are linked and how this, in turn, has an effect on the different linguistic choices people make. The term (Milroy, 1987) In Western cultures, societies are stratified into social classes. Differences between the ways in which people from each social class speak can be determined by looking at both social class dialects and social class accents, which take into account grammatical and lexical differences as well as phonological and phonetic variations. In England we associate rural dialects and older language varieties with the lower classes and, if we analyse these dialects, we find that there is a dialect continuum as we travel from one side of the country to the other. 
The continuum shows a gradual merging of accents between each neighbouring area, with the dialects on either side of the chain sounding extremely different. However, it has been found that people who speak Standard English (which is very much the dialect associated with the upper classes) show very little variation from one area to the next. Trask (2004:78) referred to the following lexical example from Trudgill's 'The Dialects of England' (1990) to demonstrate this point: It was found that for a single word, 'girl', used in Standard English, there are 5 non-standard forms used amongst the lower and middle classes: 'Girl', 'Lass', 'Mawther', 'Maiden' and 'Wench'. There are also large numbers of grammatical variations found in non-standard varieties of English which are not found in Standard English. However, this fact becomes less significant to the argument that class affects the way people speak, if it cannot be proved that Standard English can actually be linked to the highest social classes as opposed to the lower social classes. This can be proved by taking a feature which is known to be associated with Standard English, such as the 's' affix at the end of the third person singular (for example 'he likes'), and examining how often this is used by speakers from different social classes. This study was carried out in Norwich, using Labov's random sampling techniques to ensure fully representative and completely unbiased results. It was shown that, as suspected,", "label": 1 }, { "main_document": "breast cancer more frequently. It is therefore important that practitioners are able to develop an understanding of the disease processes involved so they can competently identify possible malignancies when necessary. In addition, they must also be aware of the varied psychological implications breast disease may have upon the patients presenting to them, irrespective of whether or not their initial fear of malignancy is justified. 
The aim of this dissertation is to provide a broad overview of the major features associated with breast cancer in women. It will start by examining the normal breast anatomy and physiology, the pathological processes involved, the clinical features of the disease and the methods by which it can be detected and diagnosed. The available treatment options and prognosis will subsequently be discussed. Various social and psychological issues are also associated with breast cancer, particularly with regard to screening, diagnosis and treatment. These issues will be highlighted and discussed where appropriate. Additional patient case studies will also be used to address more specific social and psychological problems which may be faced by patients. It is important to appreciate at this stage that certain important omissions will unfortunately be made due to the limited coverage available. As mentioned above, benign breast diseases account for a large proportion of breast masses; however, these will not be discussed in further detail despite their significance. Although a comparatively rare phenomenon, carcinoma of the breast may also present in males; however, given the rarity of this condition, it too will not be discussed in the following dissertation. Finally, the numerous complications which may occur as a result of breast cancer, metastatic disease and the medical and surgical interventions used will not be discussed due to the extent of this topic. The breast represents a down-growth of the epidermis which usually only develops significantly in females (4). The main structures contained within the breast tissue are the mammary glands, which are involved in the production and expression of milk. These glands therefore represent an important accessory to reproduction in women (5). The mammary gland is attached to the dermis of the overlying skin by the suspensory ligaments of the breast. 
These ligaments represent thickenings in the connective tissue stroma, and are particularly prominent in the superior part of the breast in order to provide support for the glandular lobules. At its posterior margin, the circular base of the breast rests on the deep pectoral fascia overlying pectoralis major and serratus anterior; there is, however, a loose connective tissue plane between these two structures which gives rise to the retromammary space. This is a potential space which contains a small amount of fat and therefore permits movement of the breast over the fascial layer. The base of the breast extends transversely from the lateral sternal border to the mid axillary line (MAL) and vertically from the 2 The mammary glands are highly vascular, being supplied by perforating branches of the subclavian and axillary arteries (Figure 2.1). In addition, they also receive arterial blood from the thoracic aorta via the posterior intercostal arteries which enter the 2 Venous", "label": 1 }, { "main_document": "A platinum film thermometer and a thermistor thermometer were calibrated over the range 0 The sensitivity and resolution of each instrument were calculated and used to compare the instruments. Recommendations are made as to the suitability of these instruments for meteorological applications. Thermometry involves measuring and quantifying the temperature of a medium. Thermometers are based upon the principle of measuring a known physical quantity that varies with temperature. Resistance thermometers measure the change in electrical resistance of an electrical conductor. The relationship between resistance and temperature can be mathematically quantified. Thus, if the resistance of a given material is known, the temperature can be calculated (DeFelice, 1998; Met Office, 1981; Strangeways, 2003). Many metals exhibit such behaviour, for example nickel, iron and copper. 
Platinum is commonly used in resistance thermometers owing to its ability to maintain its characteristics over a long period, resistance to corrosion and relative linearity in response. These factors help minimise deterioration in the instruments' performance over time and allow for a simple calculation to determine temperature from the recorded resistance (Met Office, 1981; Strangeways, 2003). The response characteristic of platinum resistance thermometers is accurately approximated by where R is resistance ( Over the atmospheric range of temperature, the quadratic term in equation (1) is much smaller than the linear term. Therefore, an accurate meteorological approximation to the characteristic is given by where R is resistance ( Thermistors are an alternative type of resistance thermometer. These are semiconductor resistors that display a larger change in resistance with temperature than metals. This change in resistance may be either positive or negative. Thermistor sensors can be engineered to be very small, thus lowering the thermal capacity and time-constant. The characteristic of thermistors can be expressed as where R is thermistor resistance ( It should be noted that this is not a perfect fit and that the error may be up to 5% of the actual resistance. It follows that this equation may be simplified to give the linear relationship (DeFelice, 1998; Met Office, 1981; Strangeways, 2003). This paper describes the calibration of a platinum film resistance thermometer and a thermistor thermometer over the range 0 The values of all constants in equations 1 to 4 will be determined for the instruments used, and the calibration errors quantified. The characteristics of the instruments will then be compared. The platinum film resistance thermometer (PFT) and thermistor thermometer were immersed in a thermostatically controlled water bath. 
A precision platinum resistance thermometer was also immersed in the bath, and was used to record the 'actual' temperature of the water, required for calibrating the other two thermometers. The PFT and thermistor thermometer were each connected to individual multi-meters, which displayed the resistance of the respective instruments. The resolutions of the multi-meters differed, with the meter connected to the platinum film thermometer having a resolution of 0.001 The temperature of the water bath was raised from 0 Resistance was recorded for the PFT and thermistor thermometer when the precision thermometer indicated that the water temperature was relatively stable (i.e. varying by only approximately 0.01 Resistance", "label": 1 }, { "main_document": "capita share of available jobs\". White, S. \"Liberal Equality, Exploitation\" p.316. The principal objection to this is similar to the criticisms of a UBI as a whole; that it is unfair to those who are working hard to earn a living for themselves because if jobs are taxed then the redistribution should go only to those who are in paid employment. Nonetheless, Van Parijs makes a good case for it, as those who are taking up the jobs are effectively stopping others from getting them and are therefore benefiting from a better opportunity than others, violating the equality of opportunity principle. This is also something which could be implemented slowly so as to be careful that those in low-paying jobs would not be better off just being unemployed and receiving their UBI. Eventually though, Van Parijs argues that \"once the UBI tax base is accordingly expanded to include these employment rents, it becomes large enough to finance a substantial UBI.\" White, S. \"Liberal Equality, Exploitation\" p.316. 
Further support for Van Parijs's idea of jobs as taxable assets is that people will not want to give up work anyway because \"[p]aid work offers opportunities for social contacts, satisfying activities, social recognition, and social power which pay without work does not supply.\" Therefore it should only be those who cannot work at a particular time, for example through illness, re-education or waiting for a suitable job, who will not do so. The more people who wish to work, the more the value of aggregate job assets will increase and the higher the UBI will be. This should lead to a higher consumption power and improve a person's \"ability to get access to jobs with the desirable non-pecuniary features mentioned above.\" Van Parijs, P. \"Why Surfers Should be Fed: The Liberal Case for an Unconditional Basic Income\", Van Parijs, \"Why Surfers Should be Fed\" p.128. It is on the question of how to finance a UBI that I agree most strongly with Van Parijs, although if his method proved to be unsustainable I still believe that there would be enough money from converting the current system in Britain to provide at least something of a UBI. However, despite the excellent work his principles of real freedom have done in opening up such debate, it is the Rawlsian principle of justice as fairness which does most to legitimize the case for a UBI. The pragmatic advantages which follow from either argument, such as the vastly increased choice people would have in their lives, are a welcome addition to equality of opportunity for all. The availability of choice also goes a long way to discredit many of the alternatives put forward or which are already in use. The main objection, that of reciprocity, is rejected because there is no way of measuring what the good life actually is and people's individual circumstances may mean that they are unable to participate in paid work. 
Overall, everyone would benefit greatly from a UBI, as most people would gain directly from it and those who are wealthy enough that they might be disadvantaged", "label": 1 }, { "main_document": "the left-most switch, and SW0 is the rightmost switch. The switches connect to an associated FPGA pin, as shown in Table 4. A detailed schematic appears in Figure A. When in the UP or ON position, a switch connects the FPGA pin to VCCO, a logic High. When DOWN or in the OFF position, the switch connects the FPGA pin to ground, a logic Low. The switches typically exhibit about 2 ms of mechanical bounce and there is no active debouncing circuitry, although such circuitry could easily be added to the FPGA design programmed on the board. A 4.7K These push buttons are located along the lower edge of the board, toward the right edge. The switches are labeled BTN3 through BTN0. Push button switch BTN3 is the left-most switch, BTN0 the right-most switch. The push button switches connect to an associated FPGA pin, as shown in Table 5. A detailed schematic appears in Figure A. Pressing a push button generates a logic High on the associated FPGA pin. Again, there is no active debouncing circuitry on the push buttons. The left-most button, BTN3, is also the default User Reset pin. BTN3 electrically behaves identically to the other push buttons. However, when applicable, BTN3 resets the provided reference designs. The Spartan-3 Starter Kit board has eight individual surface-mount LEDs located above the push button switches. The LEDs are labeled LED7 through LED0. LED7 is the left-most LED, LED0 the right-most LED. Table 6 shows the FPGA connections to the LEDs. A detailed schematic appears in Figure A. The cathode of each LED connects to ground via a 270 To light an individual LED, drive the associated FPGA control signal High, which is the opposite polarity from lighting one of the 7-segment LEDs. The operation of the D flip-flop is simple. 
It has only one input in addition to the clock input signal. The D flip-flop used in the program was positive-edge triggered. The D input is sampled during the occurrence of a clock pulse. If it is 1, the flip-flop is switched to the set state (unless it was already set). If it is 0, the flip-flop switches to the clear state. Figure 4 shows the symbol of the D flip-flop, Table 7 is its truth table and Figure 5 shows its output. Binary is base 2, unlike our everyday counting system, decimal, which is base 10 (denary). Here is an example of a binary number: 1001 is a 4-bit representation of the decimal '9'. Bit is short for Binary Digit, and each numeral is classed as a bit. The bit on the far right (in this case a one) is known as the The hardware elements featured in the laboratories were: Spartan 3 board, Digilent JTAG Cable, XC3S1000 device, USB cable to power the board. Firstly, I connected the Spartan 3 board to a PC and powered up the board using the USB cable, followed by connecting the Digilent JTAG cable. The Digilent JTAG cable is used to transfer data onto the board from the computer. I followed the tutorial given", "label": 0 }, { "main_document": "This paper presents a method for assessing water fluoride concentration utilising the fluoride ion selective electrode. The effects of tea infusion on tap water fluoride levels are studied. We find that tea infusion dramatically increases the average fluoride concentration of tap water from 0.054mM to 0.274mM. 
A common modern method for measuring fluoride concentrations in water utilises the fluoride sensitive electrode invented by Frant and Ross [ This electrode contains an internal fluoride standard and a LaF LaF The internal fluoride ion activity controls the potential of the inner surface of the LaF The test solution comes into contact with the other side of the crystal, and a potential difference is created across it, proportional to the ratio of ion activities on the inner and outer sides of the crystal [ An external reference electrode is used to measure the potential of the fluoride electrode and thus the activity of F The cell potential, E, is given by the Nernst equation, where Thus a calibration of the electrode using several standard solutions of fluoride allows us to measure the fluoride activity of an unknown solution. Several conditions need to be met in order to measure F Ionic activity coefficients are largely dependent upon ionic strength. Samples prepared for measurement must have a high ionic strength relative to the contribution from fluoride. Stabilising ionic strength will make the fluoride activity coefficient relatively constant, so the measured potential becomes proportional to the fluoride concentration. Competing chemical equilibria present an important source of error. The electrode is sensitive to hydroxide ions, but this is only significant if their concentration exceeds that of fluoride [1]. Another pH-dependent effect is due to the fact that HF is a weak acid. Below pH 5, HF complexes will result in a decrease in measured fluoride activity. Prepared samples must have a pH between 5 and 7. Fluoride complexes with other species present in solution. Si Ideally, addition of a strong chelating agent such as EDTA would minimise polyvalent cation complexation interferences. 
The average fluoride concentration in local tap water was found to be 1500 600 to 3000 A series of standard fluoride solutions encompassing this range were made from a 0.1M NaF stock solution of 0.4200g reagent grade NaF (Aldrich) dissolved in 100ml distilled water at 295K. Standards of 0.01M, 0.005M, 0.002M, 0.001M, 0.0005M and 0.00005M NaF were made. To 50ml of each of these was added 50ml 0.1M ammonium acetate buffer (pH = 6.83) to give a high ionic strength and control pH. Potential measurements were made using the ISE25F Fluoride electrode (Radiometer Analytical S.A.) against a saturated calomel electrode. Electrodes were cleaned using 250ml distilled water stirred by a magnetic follower at 1200 rpm, then tissue-dried. Standard solutions were allowed to equilibrate to room temperature (295K) and a clean magnetic follower was introduced. Electrodes were introduced, the solutions stirred at 400 rpm and measurements taken when the reading stabilised. Fluoride measurements started with the least concentrated solution to minimise cross-contamination. Measurements were taken in triplicate. Tap water was analysed in a similar way. 50ml of ammonium acetate", "label": 1 }, { "main_document": "material the scholar is drawing upon. Those analyzing the Amarna Letters, like Na'aman, will surely note a significant increase in wealth, because the letters do not mention decline, as they were mainly records of Egyptian exploitation of the region. Archaeological data, on the other hand, indicate decline, based on the evidence of destruction and movement of population. However, we need to remember the non-sedentary population, which is hard to quantify but still a part of ancient society. Therefore, although we inevitably deal with crisis after a major conflict, there is no evidence for decline in material culture. 
More recently, Knapp (1987) restates the traditional view of Albright, that LB Canaan underwent a period of serious cultural crisis, but Liebowitz (1987) argues that the increase of ivories in Palestinian sites in LB II is proof of a cultural zenith rather than decline. Bienkowski (1989), however, criticizes Knapp's opinion, using Jericho, Hazor and Tell Deir 'Alla as examples of degeneration in terms of architecture and pottery (towns outside Egyptian rule), but also indicates that in Beth-shan, Lachish and Tell el-'Ajjul there is evidence for more elaborate architecture and pottery. Moreover, he accuses Liebowitz of taking into account only the evidence from elite palaces (ironically, doing the same thing in his own studies) in the towns influenced by the Egyptians, such as Megiddo, South Tell el-Far'ah or Beth-shan, and therefore argues that these indeed developed, but those outside Egyptian control declined during the LB period. This variation resulted from the position of a city-state being either strategic or marginal, and therefore outside of Egyptian interest. Archaeological evidence from the major tells (or, more precisely, their elite areas) reveals a great wealth of palaces, temples and luxurious goods. Settlement and demographic studies, on the other hand, point to a serious decline in settlement and population. Bunimovitz (1994) claims the paradox of prosperity and decline is caused by the elites desperately fighting for power in politically and economically unstable city-states. The decline in human resources is the main root of the general stagnation; therefore, by looking at the settlement pattern we can examine the shifts of population causing socio-economic implications. However, we need to take into account the regional political and economic circumstances while making such generalizations. 
To sum up, it seems that there was a general increase in wealth in terms of architecture in the towns under Egyptian influence, but the ones outside of it gradually declined. Indeed, the interest of the Egyptians proved to be crucial for the towns under their rule, since these towns were their source of income. As in the case of all major demographic crises, the population that remains mostly becomes wealthier because of reduced competition, more living space and resources. Although there is a wide range of written evidence as well as archaeological data for urban life in the Levant in the Late Bronze Age, the character of this data is often misleading, unclear and incoherent (Gonen 1984). The written sources include the list of Thutmosis III (79 settlements in Canaan) and the el-Amarna Letters (24 or 25 entries) from which", "label": 0 }, { "main_document": "looks to have some outliers. To try and reduce the impact of extreme values on the calculation of the mean, I have also checked whether any outliers were present and recalculated the means. However, as I only identified one outlier (PIN=8), the mean of the non-welfare candidates only changed slightly from 13.75 to 12. From the data I calculated that, of the welfare group candidates, 80% were women and 20% were men. The figures in the tables agree with what is displayed on the graphs. The mean number of visits to the doctor if a person is not on welfare is a low 13.75, compared to 41.4 for a person on welfare. Initially, the difference between the two groups seems worrying. This is further compounded by the median and mode values of both sets of data. The standard deviation of each group differs by only approximately 3 visits. As this standard deviation is so low, the values of each group are unlikely to cross, leading to a more concrete separation of the two groups. 
This separation between the apparently unhealthy welfare group and the healthy non-welfare group, I believe, has some clear answers. The average amount of schooling for the welfare group is only 10.925 compared to 12.375 for the non-welfare group. Although a rather presumptuous analysis, it can be reasonably concluded that those who have had a longer education are smarter. This intelligence can be argued to include the ability to look after oneself and therefore reduce the frequency of visits to the doctor. A healthy lifestyle includes physical exercise, of which non-welfare people get more (9 compared to 8). Despite supporting the theory, this data alone isn't enough. I don't believe the difference between the two values is enough evidence to support such a claim. Non-welfare people also claim to be in good health more often (13 people) than welfare candidates (9). This supports the ideas about a healthy lifestyle stated above. A possible reason for the difference between the groups is the fact that 9.523% of all the men in the sample (i.e. 4/42) received benefits. Contrast this with the 27.586% of women (i.e. 16/58) who are on welfare. Previously I stated that women tend to go to the doctor more often than men do. This could be due to the fact that more women are on welfare. Women are increasingly becoming working mothers and so the number of women on welfare will also increase. In this section I have detailed that people on welfare also tend to visit the doctor more often than those who do not. These two statements support one another. Using my existing data, I can comment on what I would expect with the new sample from the same population. With the data I calculated which items were outliers and recalculated the averages. In the case of the females, I removed any data above the outlier limit, but also one item (64), when the outlier limit was 64.33. 
I felt this was too close to the outlier limit", "label": 1 }, { "main_document": "levels of compensation in medical negligence. Again the rights and needs of the individual before the court eclipse wider considerations; compensatory justice trumps distributive justice. Yet, as rights discourse becomes more fashionable [66], the compensation culture increasingly diverts limited resources from patient care [67]. The courts recognise this problem but refuse to take the impact on NHS resources into account when calculating damages awards [68]. Lord Irvine has rightly questioned whether this is a realistic approach [69]. Equally relevant when setting standards of care, the issue is whether the courts ought to inject some utilitarianism into their traditional Kantian, rights-based approach [70]. As Brazier aptly puts it: A realistic jurisprudence needs to contextualise rights and duties, what Montgomery refers to as healthcare law rather than medical law [72], looking beyond legal rules to how they impinge on healthcare practice. It is important to note the impact of such jurisprudence on public perceptions and expectations of the NHS, which in turn affects the number of resource-sapping negligence actions taken. Healthcare rights and duties are not absolute. Courts have long recognised this in relation to access to healthcare, taking a quasi-utilitarian approach [73], although it may be argued that current jurisprudence does not go far enough in setting limits on this qualification in the face of the danger of intrusion of the profit-motive in an increasingly privatised health service. Yet when it comes to setting standards of care and awarding compensation in medical negligence, it is clear that the courts have failed to include resources in the equation, leaving us with a somewhat unrealistic and unhelpful jurisprudence. As rationing becomes an even greater imperative, and as courts in other jurisdictions adopt more realistic approaches, this may be set to change. 
1 Mason, J.K.; McCall Smith, A.; Laurie, G.T.; p314 2 3 op cit n1 at p365; Newdick, C., (1996), Oxford at p15 4 Newdick, C., (1996), Oxford at p1 5 e.g. 6 s.1 NHS Act 1977 7 8 e.g. 9 unreasonable that no reasonable decision-maker could have come to it.' 10 op cit n1 at p375; North West Lancashire Health Authority Ex Parte A, D, and G,' Medical Law Review, Med Law Rev 2000.8(115) 11 Montgomery, J., 12 On ECJ op cit n11 at p63 On CHR see Brazier, M., 13 e.g. 14 Laurie, G.T.; the traditional doctor-patient relationship 15 16 op cit n1 at p379 17 op cit n15 at p31 18 ibid at p70 19 Syrett, K., 'Nice Work? Rationing, Review And The \"Legitimacy Problem\" In The New NHS,' Medical Law Review, Med Law Rev 2002.10(1); Steiner, H. (Director) (Sept 1993), Harvard Law School 20 op cit n1 at p366; Dingwall, R. (ed), London at p49 21 see generally Stauch, M.; Wheat, K.; Tingle, J., 22 Quality of Life Adjusted Years test. See generally ibid at pp52-4 23 Criticism of QALYs - see McHale, J.; Fox, M.; Murphy, J., Maxwell at p100: its 'inescapable arbitrariness.' 24 Steiner, H. (Director) Michael Mander at p29 25 op cit n1 at p370 26 Stoltzfus Jost, T., 'The Role of the Courts in Health Care Rationing: The", "label": 1 }, { "main_document": "Linguistics, the scientific study of language, is essential in the field of speech and language pathology. It provides the foundation for understanding the nature and causes of communication disorders. In carrying out clinical work, linguistic knowledge is applied in identifying and assessing speech and language disorders in children and adults, as well as in planning and performing appropriate therapeutic interventions. Linguistic theories provide a distinction between speech and language. Saussure, the founder of modern linguistics, contributed the basis for explaining spoken communication. Speech is a physiological act made by an individual, which results in the production of physical sound waves, which are considered to have concrete existence. 
Language is a psychological abstract that does not have substance but exists as a 'form' in the shared knowledge of a linguistic community. Language can be divided into two aspects: language execution (expression) and language reception (comprehension) (Grundy, 1995). Thus, speech impairment can be differentiated from language impairment. Assessment and diagnosis of speech and language disorders are built on the linguistic-descriptive approach. The method is based on describing language, using metalanguage, at different levels of the linguistic hierarchy. The framework of linguistic components is illustrated in Figure 1 (adapted from Crystal, 1984). The different levels of language, which are briefly defined in the following (Fromkin, Rodman & Hyams, 2003), are important in providing a systematic linguistic analysis. It is important to recognize that although language can be separated into multiple levels, each linguistic level interacts with and has influence on the others. A breakdown in any of the linguistic levels will result in an atypical communication condition. I will further discuss the links between linguistics and speech and language pathology, including its role in assessment and management, with reference to two communication disorders: Cleft palate is a congenital malformation that involves the hard or soft palate or both, unilaterally or bilaterally. There is also the sub-mucous cleft, where the surface of the palate appears intact, in contrast with the overt cleft. It is a type of articulatory disorder, which may occur with or without phonological consequences. Knowledge of phonetics, especially articulatory phonetics in this case, is crucial in understanding the pathology of cleft palate. 
The majority of English consonants are oral sounds produced with a velic closure, so that sufficient air pressure is achieved in the mouth; the exceptions are the nasal consonants [m], [n] and [ŋ]. A problem in the velopharyngeal mechanism in children with cleft palate causes hypernasality. Hypernasality can also be influenced by the degree of mouth opening, tongue position and the relationship of the maxilla and mandible (Stengelhofen, 1989). A nasal snort, a disturbance caused by nasal emission, may be heard. Due to velopharyngeal insufficiency, with air escaping into the nose, oral plosives may be substituted by glottal stops: the manner of articulation is maintained but there is a shift of place as a compensation to achieve plosion. Misuse of the larynx in glottal articulation results in harsh-quality phonation with uncontrolled loudness (Berry & Einsenson, 1972). Lawrence & Phillips (1975, cited in Stengelhofen, 1989) stated that there is also a tendency for contacts to be made towards the back of the mouth.
During the war the French traded extensively with the Americans, supplying them with many guns and munitions. Although the British authorities regarded the early Continental Congress as illegal, it was still able to organize the building of a supplies factory and a limited distribution chain for guns and ammunition to the troops. The big problem for Congress was that of feeding the hungry regulars, which it solved by allowing soldiers to march onto farms and take livestock and grain in return for a paper claiming the victim would be reimbursed by the state at a later date. This showed that, despite a great lack of organization, the Americans were still able to fumble their way through providing for their small regular army. The British, on the other hand, did run into problems with supplies later in the war after their shipping was continuously attacked by American privateers: in "1781 there were approximately 450 privateers roving the seas and attacking British shipping." The intervention of the French, who caused "general alarm throughout the British West Indies", worried the British and threatened her naval superiority throughout the world. The war between the Americans and the British later became a world war, as in 1779 the Spanish and the Dutch entered on the Americans' side. This caused dismay among the British at home, and the large majority of the fleet returned home to protect against an invasion by combined French, Spanish and Dutch forces. The British roundly defeated this fleet, composed mainly of French ships, on the 12 Although Britain once again regained control of the seas, the attacks of the American privateers and the intervention of the French fleet came at a crucial time. The British army in America was neglected as supply lines were drastically reduced during the presence of French ships in Atlantic waters.
This came at a crucial time, before the siege of Yorktown, after which the British fate in America was all but sealed. John Williams, 'The American War of Independence 1775 - 1783. 200th anniversary' (Invasion Publishing Ltd, 1974) p. 46 One may also attribute this great
That is not to say that grammar does not have a place, but conscious learning of the rules is considered to be of little benefit to the learner (Chaudron 1988). Meaning-focused instruction is not well suited to older learners as they do not have the same ability to learn a second language through exposure alone as they did when they first acquired their L1. They are therefore able to become fluent but will struggle to become native-like (Long and Robinson 1998). In fact, both adults and children can struggle and often would benefit from increased salience, especially in the area of grammar. This is because if communication has been successful, errors in grammar are not corrected (Krashen and Terrell 1983) but there is then a risk of fossilisation and even after real-world exposure, learners may continue to make the same errors. Meaning-focused communicative approaches cannot therefore easily lead to native speaker proficiency (Richards and Rodgers 2001). As a result of this examination of three types of instruction, each based on a different theory of language learning, I can now view my Persian learning in a more informed light. On the surface it appears to focus on meaning because it is organised by topic and aims to teach colloquial Persian, including culture, with little focus on the written language. However, in the focus on meaning method of instruction, emphasis is on comprehensible input, which was not the main method", "label": 1 }, { "main_document": "realism provides the most powerful explanation for the state of war which is the regular condition of life in the international system. And despite criticisms of the state system, there is yet to be another system put in place. Hence, with the continuity of the state system, it is clear that realism will continue to be a successful approach, if not, an important one. The success of a doctrine also rests on its adoptability, usefulness and feasibility. 
In particular, Realism has proven its worth to the Great Powers, diplomats and politicians. For the former, it is a doctrine based upon, and beholden to, the behavioral style of the traditional Great Power. For instance, the prescriptions it offers were particularly well suited to the US rise to global hegemony; it taught the US to focus on interests rather than ideology, to seek peace through strength, and to recognize that great powers can coexist even if they have different values and beliefs. Furthermore, the focus on the primacy of foreign policy in decision making made Realism singularly attractive to professional diplomats and practitioners. Hence its long-term success from 1945 until the 1970s stems from its being an essentially conservative doctrine, attractive to men concerned with protecting the status quo. In addition, as with any other successful approach to international relations, it must be dynamic enough to address current geopolitical developments and yet stable enough to preserve its main lines of thought. Realism has been a progressive approach: it has evolved and developed since 1945, and the resulting competing predictions of realist theorists make realism difficult to falsify. Almost any outcome can be made consistent with some variant of realist theory. Perhaps it has not been proven unsuccessful because it has been difficult to test realist propositions against evidence drawn from specific cases, as realist theorists do not share common definitions of the core concepts they use to construct variables. Individual definitions of national interest, power, balance of power, and polarity allow for an unacceptably wide range of conceptual and operational meanings and make the theory difficult to test. Lastly, recent events and the contemporary political atmosphere influence the acceptability and success of theories. The basic outlines of the study of international politics remain established by the outlook of political realism.
This, without doubt, is related to the centrality that realism has always given to war, a preoccupation that surely needs no justification. The emergence of Realism coincided with a general mood of pessimism after the world wars, and continued with the onset of the Cold War until the 1980s. Moreover, with recent conflicts taking various new forms - terrorism, civil conflicts, threats of nuclear war - liberal optimism has receded in the wake of a harsher international environment, and the provocative label of political realism has certainly revitalized its powers of enchantment. Walker, R.B.J. (1987) "Realism, change and international political theory", Vol. 31: pp. 65. For all the reasons mentioned above, it is evident that Realism enjoyed long-term success from its establishment in 1945 until the end of the Cold War. However, one should not
The problems caused by the war gave the poets a number of issues to raise in their plays and to encourage debate about. The plays of Aristophanes in particular discussed the war and its various effects. One theme that occurs in Aristophanes' comedies is opposition to leading politicians and to the conduct of Athenian politics in general. Many leading democrats who turned down peace in favour of war are personally named and taunted in Aristophanes' work. For example, in Another example of this is the attack on Archedemus, a democrat who argued in favour of the execution of the Arginusae generals, of whom Aristophanes remarks that "among the dead men he's the prince of crooks - it's the way they do things now" (Ar. Therefore we can see how Aristophanes took a negative stance towards the leading politicians of the day. Aristophanes also expressed a wish for peace and unity amongst the Greeks, both of which were lacking due to the war. As well as attacking men who pursued the war, the play The disunity between Greeks that had resulted in the oligarchic coup of the Thirty, and which I have written about above, prompted Aristophanes to write the lines: Aristophanes used his work to tell Athenians that they needed to work together for their common good, or else face defeat. One other issue that the poets raised in their plays was that of religion. In Euripides also reminded the Greeks of the danger of neglecting the gods in Therefore we can see that the effects the Peloponnesian War had on Athens caused a number of plays to be produced that discussed the issues relating to the war in an attempt to advise the Athenians on what they should do. The plays can also be seen to reflect Athenian views on the effects of the war.
This success must mean that the plays were popular, and thus the issues they raised must have had some standing", "label": 1 }, { "main_document": "The idea that odontoceti communicate by sound has been around for many years, as Aristotle was aware that dolphins made sound at the surface of the ocean. This idea also goes back thousands of years as Aristotle wrote that the dolphin \"when taken out of water gives a squeak and moans in the air\" (Thompson, 1910, iv. 9. 535b, 31). The most obvious odontoceti to use sound production to communicate are Beluga whales and dolphins. The reasons for them relying on sound are believed to be for communication, navigation, or detection of predators and prey. Odontoceti may use sound to attract mates, repel rivals, communicate within a social group or between groups, navigate or find food. Dolphins emit sounds of 1,000 to 15,000 cycles per second and Beluga whales emit sound at 500 to 10,000 cycles per second. Odontoceti are gregarious animals and usually travel in social groups called pods. Communication between mother bottlenose dolphins and their calves is performed by whistles, and whistles are also used by individual dolphins, killer whales, pilot whales and sperm whales for communication. Dolphins produce their noises by shutting their jaws vigorously, and when they are feeding they sound like they are barking. They also produce air in their nasal passages through their blowholes which sounds like a balloon letting air escape. In the mating season the produce a whining sound and echolocation in dolphins sounds like a rusty hinge opening and shutting. Clicks last from 10 to 25 seconds at 0.25 to 0.5 kHz, with screams up to 2 kHz. Beluga whales make a chirping noise and smack their lips together, whereas pilot whales have been heard whining. The humpback whale is known to \"sing\" which helps it to remember migration paths and communicate especially during the mating season. 
These songs can last from 8 to 20 minutes, and the whales can sing for days on end; over many years these songs gradually change into completely different ones. Humpback whales also use bubbles as a form of defence and to corral fish, by creating a line of bubbles which the fish will not cross. These whales also corral fish by using a high-pitched scream, and they communicate between each other mainly by breathing. Fin whales use low pulses to communicate, and grey and right whales breach to communicate and play. Breaching is also used when whales are injured or trying to find food. Grey whales moan and make bubble-like noises when migrating, and a pulsing, metallic-sounding signal has been recorded from a captive grey whale. Minke whales make pings, clicks, and grunt-like or thump-like sounds. There are four acoustic categories: tonal pulses and low-frequency moans at 12 to 500 Hz; grunt-like sounds at 40 to 200 Hz; chirps and cries at 1 kHz; and click-like sounds at 3 to 30 kHz. These vocalisations have been found in the humpback, grey, minke, and Bryde's whales. Other, non-vocal communication methods include spinning, back or flipper slapping, and tail slapping, which may convey a threat or frustration from a whale. For some years, it
For instance, the vitamin-C content of green vegetables is much reduced, but, owing to their greater acidity, vitamin C is quite well retained in fruit juices. There is also a loss of 25 - 50% of water-soluble vitamins if the liquor is not used. Using heat to destroy microorganisms, together with proper canning technique, is important because invisible microorganisms are all around us. Many are beneficial and others are harmful. All foods contain microorganisms, the major cause of food spoilage. During the canning process, air is driven from the jar and a vacuum forms as the jar cools and seals, preventing microorganisms from entering and recontaminating the food. The acidity level, or pH, of a food determines whether it should be processed in a boiling water canner or a pressure canner. The lower the pH, the more acidic the food is. Low-acid foods, such as vegetables, meat, poultry, and fish, must be pressure canned for the recommended time and temperature to destroy Canning of low-acid foods in boiling water canners is absolutely unsafe because 100 If botulinum bacteria survive and grow inside a sealed jar, they can produce a deadly toxin. Even a taste of food containing this toxin can be fatal. Acidic foods have pH values below 4.5. These foods include pickles, most fruits, and jams and jellies made from fruit. Acidic foods contain enough acidity either to stop the growth of botulinum bacteria or to destroy the bacteria more rapidly when heated. Acidic foods may be safely canned in a boiling water canner ( In this practical, baked beans and carrots were used for the canning exercise. The objective was to understand the procedure involved in the processing of canned products and the factors affecting the safety and quality of the products. Flow diagram of the experimental conditions used for processing of canned baked beans and carrots. The recipes of the sauces were as follows Data; Therefore the final moisture content of the bean was 52%.
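The pH rule described above amounts to a simple decision. The sketch below is illustrative only, not part of the practical: the 4.5 threshold is taken from the text, while the function name and the example pH values are assumptions added for demonstration.

```python
# Illustrative sketch of the pH rule for choosing a canning method.
# The 4.5 threshold comes from the text above; the example pH values
# are rough assumptions, not measured data.

ACID_THRESHOLD_PH = 4.5

def recommended_canner(ph: float) -> str:
    """Return the canner type implied by a food's pH."""
    if ph < ACID_THRESHOLD_PH:
        # Acidic food: the acidity inhibits botulinum growth, so
        # boiling-water (100 C) processing is considered safe.
        return "boiling water canner"
    # Low-acid food: requires pressure canning at roughly 116-121 C
    # to destroy heat-resistant spores.
    return "pressure canner"

for food, ph in [("pickles", 3.7), ("carrots", 5.9), ("baked beans", 5.8)]:
    print(f"{food} (pH {ph}) -> {recommended_canner(ph)}")
```

The strict `<` comparison mirrors the text's statement that acidic foods have pH values below 4.5; borderline foods fall on the safer, pressure-canned side.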
Soft water was used to soak 1 kg of beans overnight. Hard water yields a product which requires a longer cooking time to make it edible; with very hard water it may not be possible to secure a satisfactory product from the standpoint of tenderness. Extremely soft water used for soaking and blanching will contribute to the splitting and matting of the beans in the can. It is an important operation that removes surface soil and associated microbial contamination. Incorrect peeling can lead to excessive wastage of sound raw material. This is normally a
If the DNeasy kit supplied by QIAGEN had been used for the DNA extraction, the extracted samples would have been purer and might have worked better in real-time PCR (Smith and Maxwell 2007; Smith 2005). As the decision to use real-time PCR was made after DNA extraction had been carried out, the DNeasy kit was not used. Secondly, even though the combination of the original Forward and new Reverse 2 primers of To have perfectly matched new primers, the weak PCR product from the original However, due to time and resource constraints, no PCR products were sequenced. Any analysis of the real-time PCR results should be carried out on data for the initial concentration of the template. As the standard curve (Figure 3.7) is abnormal, we cannot use it to calculate any meaningful initial template concentrations. Nevertheless, attempts were made to use the C In the one-sample Interestingly, in the two-sample However, caution must be taken here, as the sample size is very small: eight samples for both CL and CH. More importantly, the results of this real-time PCR are very unreliable as well as aberrant. The apparent difference between the CL and CH sample results could well be caused by other factors or artefacts rather than by a difference in methylation pattern between the CL and CH samples. As real-time PCR relies on the principle that all samples have the same PCR efficiency (Kontanis and Reed 2006), any impurities that inhibit the DNA polymerase in the samples would cause the results to be skewed. To avoid this problem, the samples should be further purified using any commercially available DNA purification kit or other purification method. The primers also have to fully match the sample template, but this can only be achieved with template sequence information.
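To show what a well-behaved standard curve would permit, the following minimal sketch fits the usual linear relation Ct = slope * log10(concentration) + intercept for a 10-fold dilution series, derives the amplification efficiency, and back-calculates a sample's initial template concentration. The Ct values and dilutions are invented for illustration; they are not the experiment's data.

```python
# Hedged sketch of standard-curve quantification in real-time PCR,
# assuming ideal data. Standards: (log10 concentration, Ct) pairs for
# a 10-fold serial dilution; values below are invented for illustration.
standards = [(5, 15.0), (4, 18.3), (3, 21.6), (2, 24.9), (1, 28.2)]

# Ordinary least-squares fit of Ct against log10(concentration).
n = len(standards)
sx = sum(x for x, _ in standards)
sy = sum(y for _, y in standards)
sxx = sum(x * x for x, _ in standards)
sxy = sum(x * y for x, y in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Amplification efficiency: E = 10^(-1/slope) - 1.
# An ideal slope of about -3.32 corresponds to E of about 100%
# (perfect doubling each cycle).
efficiency = 10 ** (-1.0 / slope) - 1

def initial_concentration(ct: float) -> float:
    """Back-calculate template concentration from a sample's Ct."""
    return 10 ** ((ct - intercept) / slope)
```

This is why an abnormal standard curve is fatal to the analysis: the slope and intercept feed directly into both the efficiency estimate and every back-calculated concentration, so a distorted curve corrupts all downstream quantification.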
The real-time PCR standard should be in 10-fold dilution", "label": 0 }, { "main_document": "A PESTLE analysis can be used to examine Tesco's external environment and Porter's five forces to examine the conditions in its own industry. The political and economic environment in the UK is a stable economy, with low unemployment and an affluent population (with high GDP per capita in relation to other countries around the world). Consumers in the UK have a relatively high level of disposable income to spend on retail and grocery items. This gives supermarkets such as Tesco a favourable environment in which to sell its goods. A possible threat to Tesco may occur if the economic environment moves towards instability. Rising interest rates or an increase in unemployment will decrease the amount of disposable income. As technologies and the supply chain have developed over the previous decades it has become possible for supermarkets to source food items from abroad and ensure consistently fresh and quality products. This has led to an increase in consumer demand for reliable, good quality food products and a demand for variety. Socio-cultural changes have altered the way people shop. Over the last few decades the number of working parents and single parent families has increased. This is one factor which may have led people having to spend less time shopping. As consumers become more interested in convenience they want to buy all their food items in one shop as opposed to going to the traditional butcher, bakery, and grocery shops. Legal and environmental barriers such as planning permission may restrict supermarkets from expansion into certain areas of the country. Activist groups also try to put pressure on supermarkets to inhibit their growth and damage their reputation. Within the retail industry the bargaining power which supermarkets have over their suppliers allows them to push down the price at which they purchase products and sell them at a low price to consumers. 
Exploiting their economies of scale opens up opportunities to move into non-food retail goods and services. As the article states '(Tesco) only had 5 per cent of non-food - and would be targeting its over-priced high street rivals.' To some extent there were still threats to Tesco in the supermarket industry; 'as a new breed of discount food retailers from continental Europe entered the UK.' However the table below shows how Tesco has grown over the last 7 years and developed a substantial share of the market, making it very difficult for new entrants to pose a threat. The table above indicates some of Tesco's main rivals in the UK. The strategy clock below shows some other potential rivals in the supermarket industry. Supermarkets such as Waitrose, Marks & Spencer and Kwiksave do not feature in the table above because they have a smaller share of the market. However, they are still rivals to Tesco because to some extent they provide the same goods. Tesco is in the hybrid position at point 3 on the strategy clock because there is a simultaneous emphasis on low prices and differentiation. Sainsbury's is similarly trying to obtain a hybrid strategy by offering both an 'economy' and 'taste", "label": 1 }, { "main_document": "the external audit firm enhances the soundness and safety of corporations such as Freddie Mac. Ibid We also feel that the recent initiatives to monitor auditing firms' activities are crucial in restoring confidence to the auditing profession, despite irritating the firms themselves. However, co-operation from the auditing firms is imperative for this to be successful. Additionally, it is widely pointed out that companies should take a more pro-active role in detecting and eliminating accounting misstatements and decrease their reliance on the auditors on the matter. In this case, however, whilst there was clear negligence on behalf of Freddie, evidence suggests that Andersen was partly liable. 
Finally, the Freddie case explicitly reveals the need for all US companies, private and public, to be governed by the same rules. Scandals at Fannie Mae, Freddie's sister company, further emphasise that no company or institution should be given favourable treatment. In conclusion, auditors should realise and embrace their social responsibilities, thereby increasing their detection of material misstatements and aiding the identification of fraud. We believe the issues of accountability raised by the Freddie case and other scandals are embodied in the expectations gap: auditor responsibilities are unclear, and therefore when fraud and scandals occur, it is difficult to point fingers at who should be held accountable for misconduct. Before the auditing profession can successfully tackle these problems, it must first establish the objectives of auditing. However, objectives evolve through time to meet market demands, and clearly much of society expects external auditing to detect fraud. Scandals such as the Freddie case have underlined the recent weaknesses in the auditing profession. Although scandals continue to surface to this date, there have been credible efforts to address these problems. However, the direction in which auditing develops in the near future will be determined by the strength of the market to influence the audit objectives and how the profession responds to such pressure.
Indeed, Neil Charlesworth saw India as 'an important and populous less developed country...[with] low levels of output but possessing a tradition of high quality handicraft production and merchant enterprise.' The British sought to modernize India politically, socially and economically: they wanted to create a colony worthy of being part of their expansive empire. This process effectively saw Britain incorporate India and take full responsibility for its governing. Neil Charlesworth, Gramscian theory of colonial rule often falls into two categories: colonies are ruled either by coercion, in which case they are dominated by force, or by consent, which is representative of a hegemonic colonial power. Burton Stein, M. E. Chamberlain, To assess whether the British presence was positive or negative, it is necessary to examine what the British brought to the colony and whether or not it ultimately benefited the Indians. Typically there are two schools of thought on British rule in India. Karl Marx believed that 'England has broken down the entire framework of Indian society.' Key to modernisation was a movement away from an agriculturally dependent economy and the creation of state infrastructure such as railways, canals and irrigation. These advancements would also, it was hoped, help deal with famine, which had increasingly become a problem in India. Avineri Shlomo, Sir Percival Griffiths, The creation of infrastructure was highly important for the British in order to develop India's economy. Marx believed that 'the railway system will...become, in India, truly the forerunner of modern industry.' This was certainly represented in its uptake, with nearly 7,000 miles of track being laid between 1858 and 1878. While the creation of these routes was important for industry, a secondary objective was to counter famine problems by allowing the movement of any surplus food to impoverished areas.
Irrigation too was constructed with the idea of providing water outside the rainy season. As with railways, the uptake of irrigation was quite spectacular: 'by World War Two, over 32 million acres...was irrigated from state irrigation schemes, the largest in any country in the world.' Despite the philanthropic nature of these schemes, the British were harshly criticized over the provision of revenue for development. The British government was of the opinion that if the schemes were to benefit Indians, then Indians should pay for them. Even during times of famine, when the Indians could not afford to feed themselves, revenue demands still had to be met. Avineri Shlomo, H. M. Hyndman, Chamberlain, Ibid, p. 124. The agriculture industry was also modernised by the British, with positive and negative outcomes. The introduction of large-scale cash crops such as cotton, jute, indigo and tea allowed India greater scope for export and therefore competition in the world's markets. Such a transformation obviously could not happen overnight, but the foundations were certainly laid. However, with agricultural reform
Although the market-friendly strategy fairly explains Hong Kong and Singapore, it cannot explain Japan, which is still not open to international trade. If the study is expanded to more than two countries, the theories will run into the same problem as the neo-classical or statist views, which cannot reflect the reality. But, overall, the theories are relevant to Hong Kong and Singapore's situations. Different countries have different historical backgrounds. According to path dependency theory, the outcome of a process depends on its past history. So it is difficult to greatly change Hong Kong and Singapore in their faith in less intervention and government involvement respectively, because of their historical backgrounds. But their economies are still successful, which suggests that the degree of intervention is not a key factor affecting the economy. The more important thing is how the state intervenes. From the Singapore experience, the government intervened in a market-oriented way, and then adjusted and reduced its intervention according to the situation. Other developing countries which want to follow the model of Singapore or Hong Kong should consider their own backgrounds. When they adopt a certain degree of intervention, they should be flexible in adjusting it. To conclude, the success stories in East Asia are worth studying, but when other countries want to copy from them, they should consider their own situation first.
The dugout and the raft would appear to have been the prototypes of the Slavonic boat. The second stage covered the evolution of the oar-sail and sail-oar boats fitted with a side-rudder. Those finds include ships such as Orunia I (I, II, III), Mechlinki, Charbr The large extent of the influence of Viking culture is evident even in places as remote as America or Constantinople. In north-west Europe the traces of their artistic styles are visible in almost every place they visited. Such elements as rune stones or sagas are well-known examples of the spread of Viking culture. However, the shipwrecks of the vessels in which these people travelled also inevitably contribute to our understanding of the spread of their culture, as well as of the cultures of other societies that interacted with each other at that time in one way or another. Towards the end of the Viking Age, the societies of Europe were undergoing a transition to feudal society. Because of its remote location, Scandinavia was always slightly behind the rest of Europe in terms of social and religious changes such as feudalism, the spread of Christianity or European trade. These changes, however, reached Scandinavia at last, and the way by which they 'arrived' is visible in shipwreck evidence. Skuldelev 5 represents an interesting example of the adoption by Danish society of a system of relations between the king and local communities. It is thought to have been built as part of the leidang system, whereby a local community had the duty to keep one ship for the king's use. The elements of construction of Skuldelev 5 suggest that it was built from very cheap local materials and repaired many times to prolong its seaworthiness (Gardiner 1996). Ship finds are often used to support the vast amount of written evidence that we have from the Viking period. Numerous sagas, as well as chronicles, report on the everyday life of contemporary people and their contacts with foreigners.
It has been suggested that the two women buried in the Oseberg ship were Queen Also the Bayeux Tapestry is a source of much information about Viking-age shipbuilding, as well as the everyday life of the Norman people. It presents activities from the early stages of the construction of a ship through to a full representation, together with their cargo and crew. This brings the debate to the issue of the spread of shipbuilding techniques throughout north-west Europe. The remains of the Slavonic boats have allowed the reconstruction of a number of early medieval vessels used by the Western Slavs. A good example might be the replica of the Ralswiek 2 ship, Bialy Kon, which helped to shed some light upon the technicalities of Slavonic boatbuilding as
For this reason, an adequate renegotiation of the approximately US$ 100 billion in defaulted bonds seems to be a necessary condition for improving the financial position of the banks and thus the health of the BS.", "label": 0 }, { "main_document": "is no objection to Kant that he claims that what is sublime is absolutely great. In other words, the concept of magnitude implies a relation of the object to the power of judgement of the subject. In the case of aesthetic judgements the measure can be selected by the subject - always implying the relation of the object to the subject as determinative of what can be said of its magnitude (whether it is great or small etc.) up to a certain magnitude. Beyond that magnitude the subject supplies infinity automatically as the sublime. The object is absolutely great means 'I can not grasp an aesthetic measure for this object, it exceeds the capacity of imagination for presenting the world to me'. In fact, from this formulation we can see that the infinite can not be in the world. If that was so our idea of the infinite would be a concept to fit a synthesis of representations of the imagination. It would thus be a product of the understanding, not reason. Kant, I. (2005) I have used the standard pagination for Kant's work throughout. ibid. p.250 ibid p.248 ibid The negative moment of the mathematical sublime occurs when the imagination is revealed as inadequate to grasping the magnitude of vast objects. The positive moment is the supplying of the idea of infinity by reason. Thus reason shows itself to exceed the ability of imagination for estimating magnitudes. Through reason we can think more than the world can ever present us with. I shall postpone further critical discussion to look at the dynamic sublime. The dynamic sublime concerns the judgement of nature as having no dominion over us despite the immensity of its power. 
When confronted with examples of nature's might - storms perhaps - I am faced with my physical inability to defend myself (the negative moment). However, Kant believes that this leads me to reflect on the inadequacy of the power of nature to damage my moral free choice (the positive moment), which results in a feeling of pleasure. Despite my physical impotence in the face of nature, my moral nature is not subject to its might. In such cases we consider nature to be dynamically sublime. Budd finds a problem with how the imagination enters into the dynamic sublime. Is Kant saying that we attempt to think a measure of the power of nature, as in the mathematically sublime? However, I think Budd is simply confusing our customary notion of the imagination with Kant's. The imagination enters just in presenting us with representations of the object of nature. There is no need to talk of us 'imagining' a measure or 'imagining' ourselves being at the mercy of the storm - this fear is indeed there, but it is not the central Kantian meaning of the imagination. The imagination is that faculty which brings sensibility to concepts - or at least attempts to. The imagination presents us with the powerful object of nature. However, our adherence to the moral law within us through the rational faculty is shown to be
Armed forces are defined by Article 43 of Additional Protocol I to the Geneva Conventions. Criteria defining armed conflicts depend on the intensity, the number of active participants, the number of victims, the duration and the prolonged character of the violence, the organization and the discipline of the parties, the capability to respect IHL, the collective, open and coordinated character of the hostilities, and the direct involvement of governmental armed forces. Clearly, the terrorist groups who committed the attacks of New York, Madrid, London and Bali were not armed forces taking part in an armed conflict (M. Sassoli, 'The status of persons held in Guantanamo under IHL', JICJ 2, 2004, 96-106, p. 100). IHL is thus at a dead-end. The qualification of armed conflict is a matter of threshold and intensity. If an armed attack does not meet all the criteria, then it is not an armed conflict and the persons who take part in it are not covered by IHL. How then to qualify the attacks of 9/11, which inflicted massive destruction and the death of almost 3,000 people, and which required sustained and intense funding and logistics? I believe that IHL should apply to terrorists in a very specific way, which calls for special provisions codified in a dedicated treaty. What answers does the category of unprivileged combatants bring? This category covers the civilians who take part in a conflict. They may be attacked during the hostilities, because they unlawfully participated in the conflict. They can be punished for unlawful participation and accused of war crimes. Their rights can be denied for security reasons. This is actually the status of the terrorists affiliated to Al Qaeda captured in Afghanistan. The difficulties encountered by the international community in reaching a consensus about the definition of terrorism are due to the fact that 'one's terrorist is another's freedom fighter'.
We cannot treat people who fight for freedom against an aggressor (the French Resistants under the Nazi occupation of France during WWII) in the same way as we should treat people who bombed two buildings with the intention of killing people and spreading terror in a society. The difference is a matter of legitimacy. The actions of Al Qaeda may appear legitimate to a part of the Muslim community. The attacks on the Twin Towers as a symbol of capitalism and consumption might have been seen as legitimate. The subjective element of a just cause should not enter the definition of terrorism at all. Indeed, does a good cause justify the use of ultimate means of violence? The motive of a terrorist's action is irrelevant to qualifying it as a terrorist act. We should treat terrorism as we treat torture: like a taboo, something nobody is entitled to do, whatever the purpose (G. P. Fletcher, 'The Indefinable Concept of Terrorism', JICJ, OUP, 2006, pp. 1-18, p. 13). Since only states can be parties to the Geneva Conventions, IHL does not apply to conflicts between a state and a terrorist group.
The modern resort developments at Malindi, with the gateway city of Mombasa close by, attract an increasing number of sun package holiday tourists, while visits to wildlife parks are in decline (2001:246). Some of the political and cultural problems identified earlier can be reduced by involving the local community in the business where possible, hence transmitting some of the benefits derived from tourism. Interviews with marketing executives of travel agencies (as seen in Chakava, 2003) reflect the findings above. Additionally, they believe that Kenya is misrepresented in the media. Although they admit problems, i.e. with security and political instability, they believe that the media, especially in Britain, "blow things out of proportion" (Nyangali, 2003 as seen in Chakava, 2003:76). They believe that Kenya is a highly attractive market due to the assets it has to offer. Better marketing in the media in general should therefore enhance Kenya's image and attractiveness. Kenya is a tourism market with huge capacity to grow (Boniface and Cooper, 2001), meaning its life-cycle is still in the developing phase (Butler, 1980). In the first instance a joint venture is advisable, accepting the several drawbacks identified by Kotler (1997:142). This is in order to be better able to deal with the above-named constraints and to minimize the financial risk for the Lakeside Group, for example by overcoming exit barriers as seen before. A joint venture would also include the incorporation of a local owner and local staff. By doing this, financial support and easier approval from the Kenyan government and local authorities are sought (McGee, 1994). Top management, however, has to come from the home country in order to ensure that high levels of service quality and standards are constantly provided. Because the Group is an asset-driven organisation (see Appendix A), future ownership of the property is desired.
Long journeys of six or more hours are becoming more affordable (Middleton and Clarke, 2001:59), hence global travel and tourism is increasing rapidly. According to Levitt (1983), markets will no longer have regional or national differences; organisations have to understand that the world will be one large market. Chisnall (1985), in contrast, identified that people are different in their tastes, needs and wants, mind-sets, lifestyles, and family size and composition. The author therefore stated that: Plog (1974) differentiates between two types of psychographic groups: psychocentric and allocentric. Psychocentric travellers prefer the familiarity of travel destinations with sophisticated
Along with volcanic activity, mudslides and rock or snow avalanches can preserve a site incredibly well, due to the fact that they cover everything in an almost protective "blanket". It is important for an archaeologist to recognise the context in which the artefact in question was deposited. There are three types of depositional context: primary, secondary and tertiary. Primary is rare, because it means that the artefact was deposited where it was used, such as a spear being left where it was used to kill an animal while hunting. Secondary is more common, for this is when the artefact has been left away from where it was used, such as in a rubbish dump. Tertiary contexts refer to the re-deposition of secondary contexts, such as the use of rubbish to infill a ditch or building. The understanding of these is crucial to the recording of an artefact, for it becomes meaningless without its context and the correct interpretation of the circumstances. Site formation is affected by climate. Extreme conditions such as waterlogging, for example in bogs, or very arid places like caves are very good for the preservation of archaeology. In the majority of cases temperate climates, such as in Europe, are not suitable for this but tend to increase the rate of decay. Tropical climates are the worst: the heavy rains and warm temperatures, combined with high acidity levels in the soil, destroy a site very quickly. The rate of growth in tropical rain forests is particularly detrimental as it is so fast. Deserts, due to their aridity, are beneficial to archaeology, in particular to organic materials such as bone and flesh. Some of the best examples of this are the tombs of the Egyptian Kings, especially the tomb of Tutankhamun (Renfrew and Bahn 2000: 62). This was discovered in 1922 by Howard Carter and Lord Carnarvon and has been preserved remarkably well owing to the dry air in the tombs and in the surrounding climate.
Everything the King was buried with survived, including the wreaths placed on his coffin and a large number of papyri. The bodies of the King and his two stillborn children were also preserved, due to their mummification. Waterlogged sites such as rivers or bogs are also favourable to preserving archaeological remains. Organic material survives particularly well in these conditions, a common find being wood, which means that archaeologists often discover boats or wooden huts when exploring underwater sites. Many boats have been found in the recent excavation of the banks of the Severn Estuary. The pH of the soil affects what is preserved. According to Schiffer, and more recently Evans and O'Connor, an acidic soil
Most hackers like these explore computer systems out of simple curiosity and for the intellectual challenge. Hacking a computer system is not an easy thing; it requires a great deal of thought about how to connect to a modem on the computer one is trying to attack. To protect the data of military operations, such computer systems are now kept isolated from attack, as they contain information which, if changed, could take the lives of many people. This is a threat to most countries, which fear that such systems might be the kind of target a terrorist group would attack in order to plunge a country into a major crisis. New military leaders are now being selected based on their experience with information warfare and cyber-terrorism. Gen. Richard Myers was appointed Chairman of the Joint Chiefs of Staff in part due to his experience with a task force to prevent hacking attacks (6). As the threat is becoming more real, corporations and government agencies have hired contingency planners (Schwartau, 2002). In 2000 President Clinton asked for $100 million to protect against cyber-terrorism (Brand, 1999). The United States is such a large target not only because it is wealthy and controls nearly half of the computing power in the world, but also because it is a superpower and people in other parts of the world feel resentment towards that. The government, corporations and private citizens are all learning to put up defences against the threat of a cyber attack. There is no perfect shield besides removing the target, but we cannot shut systems down permanently as a defence, because we already rely on them too much.
So every country should try its best to protect itself from cyber-terrorism.
However, to achieve this successfully and still expect the same results as conventional farmers is not possible. Due to the restrictions organic farmers face on inputs, problems arise in combating disease, and the following from Lampkin (1990) highlights other difficulties: Following the restructuring of the farming subsidy scheme between 2002 and 2004 (entry level, organic entry level and higher level), the options for growers to go organic have been widened, and the money being provided through the subsidy, for all stewardship schemes, is better targeted towards environmental farming practices. The list of options available for farmers on the organic scheme (OELS) includes buffer zones, hedgerow management, beetle banks, undersowing cereals, and conservation headlands (Defra, 2005). It is clear from the above that the potential for biocontrol is present, as beetle banks and conservation headlands are known to host natural predators which could be exploited for the control of pests in the crop, and the correct choice of species sown in the headland could attract pests out of the crop and provide alternative food for predators (Root, 1973). This method of biocontrol can typically be termed 'conservation biocontrol', as it is the creation and improvement of habitat for hosting already existing pests or predators, contrasting with the actual application or addition of biocontrol organisms, which would be brave in broad-acre agriculture, as the effects would be unpredictable and questions would be raised regarding its economic viability and environmental safety. Thus, the use of this technique in glass houses
This has been largely disproven (Lawler, 1973), although there does seem to be a link between increased satisfaction and lower absence, so increasing workers' satisfaction may alleviate the firm's absence problems, but is unlikely to solve the issue of autocratic management and low trust. Other early extrinsic motivation theories centre on incentives and reinforcement, through rewarding desired behaviours. This has a strong link with behaviour modification (BM) and operant conditioning, as Skinner found (Skinner, 1961): by waiting for a desired behaviour and rewarding it, the behaviour is reinforced. Studies by behavioural psychologists in this area have suggested that this links to management by objectives and is a powerful method of motivation. By identifying the desired behaviours, setting objectives based on these and rewarding success through applying both positive and negative reinforcements, Hamner (1976) and Luthans (1974) argue, people learn the required behaviours, exhibit them to gain the rewards, and performance improves. There are a number of implications of the extrinsic motivation approach for this firm. From a technical point of view this approach sounds attractive. It advocates little change from the current operational working practices other than introducing management by objectives and reinforcement methods. However, from a social point of view this is precisely the problem. There is no change to the repetitive nature of the work or the autocratic supervision, and it is unlikely to lead to the introduction of a socio-technical organisational culture. Trust is unlikely to increase as a result of employing extrinsic motivation techniques, and so from a social point of view extrinsic motivation is unlikely to address the root of the issues. Furthermore, critics of behaviour modification and reinforcement techniques have suggested the approach itself is flawed.
Cognitive psychologists argue that people are not machines - they think before they act, and therefore BM is fundamentally about pure motivation rather than reinforcement. More general criticisms are that it is often difficult to identify which extrinsic rewards are the motivators, and that rewards cannot always be controlled by line managers (as in defined-benefit structures such as local authorities). Further, the demands of the social group at work may conflict with and limit the use of BM. Finally, there are financial implications of implementing extrinsic motivation. Broadly, it will increase costs. From the organisation's perspective, firms that manage by extrinsic motivation, such as the Caudwell Group and the Cobra Group, operate on the basis of a calculative psychological contract - the firm must pay for any increases in effort or performance by the employees. Therefore, adopting an extrinsic approach is likely to increase costs for the organisation, whether or not it ends up solving the problems. It should be noted that some theorists have suggested that increasing pay and extrinsic rewards is far cheaper in the long run than investing in intrinsic motivation techniques. In terms of the employees' finances, there may be more
Arguably, time plays a crucial role in reducing the opportunity to find an equilibrium; to future generations that share no relationship with the lines of graves covering the Somme, memorials are rarely painful experiences, but instead quite the opposite. Visitors are normally attracted by the splendour and impressiveness of the architecture. Emotionally, the effect can be uplifting, arousing a sense of pride in the nation just by seeing the headstones of fallen brothers who fought and died side by side in one of mankind's prime historical events. Fundamentally, memorials become part of the treasured past and the landscape that surrounds us: 'Over the years, passing by in a bus or a bike, I have seen the Cenotaph so often I scarcely notice it' (Geoff Dyer). Indubitably, as memory of the atrocities fades, the painful impact of war lessens and its mysteriousness and grandeur grow. Ultimately, this essay firstly attempts to identify whether the assertion that all war memorials eventually lead to the glorification of war is necessarily true. Secondly, it challenges the parameters of the question and considers whether the genuine reason for memorialisation is to warn against war: perhaps memory, and the manifestation of memory through objects, is fashioned by the interests of contemporary society and is a 'potent ideological weapon' (Norman G. Finkelstein). As figure 1 demonstrates, the Beth Shalom ("House of Peace") Holocaust Centre is grand and striking. It could easily be interpreted as a symbol of Jewish strength and survival, possibly leading visitors to sympathise with and even admire the success of the post-World War Two Jewish state, Israel. Taking a cynical point of view, why had the owners not opened the centre until half a century later if they were really concerned about preserving the memory of roughly six million European Jews? As Norman G.
Finkelstein indicates, it is one exemplification of how the Jewish community has utilised its previous anguish to deflect criticism of any of its current policies; such commemorations, which have arisen since the Arab-Israeli war in 1967, constantly remind society of the dangers of anti-Semitism and, as a result, have possibly 'been used to justify criminal policies of the Israel state and US support for these policies' (Finkelstein). However, Finkelstein's scepticism can be countered by the claim that Beth Shalom warns against any form of conflict rather than justifying any, especially as it teaches about other persecuted minorities and its walls are covered with quotations emphasising the need for reconciliation and forgiveness. Indeed, the correct balance is attained by juxtaposing beautiful gardens with dark, musty corridors documenting the
China had its own Lysenkoist in the form of Luo Tianyo, and in a purge of Party members in the 1942 rectification movement \"Luo enthusiastically persecuted those who believed in genetics\" It has been argued that Mao used this scientific reasoning for the GLF and that in 1958 \"Mao personally drew up an eight-point Lysenkoist blueprint for all Chinese agriculture\" Becker, Becker, It was not simply scientific and economic issues which led to the failure of the GLF; socially, the peasants resisted some of Mao's policies. \"Many peasants resisted communal living and the confiscation of private plots\" In addition, there seems to have been a lack of competent personnel to administer the communes, and their size proved to be a problem. The existence of communal places to eat, where theoretically one was allowed to eat as much as one liked, meant that none of the excess food was saved, and when the leaders demanded sufficient food to feed the cities, people in neither rural nor urban areas had enough to eat. Saich, p. 37. Furthermore, Mao and the Party began to see the peasants as the enemy and not as a group that desperately needed help and freedom from the restrictions of the GLF. Party documents at the beginning of the GLF show that officials believed peasants to be standing in the way of progress, and Mao even believed that the peasants were hiding their grain, so he refused to open the state granaries to ease the crisis. To add to the situation, \"over the three years from 1958 China doubled her grain exports and cut her imports of food\" Overall the rural sector was severely damaged by a host of policies \"designed to subordinate agriculture to the drive for rapid industrial growth\" Becker, p. 81. Joseph, \"A Tragedy of Good Intentions.\" p. 434.
Even if the Party had acted more quickly as the failure of the GLF policies began to emerge, it would still have faced major difficulties, because the state-planned structure of the Chinese economy could not cope with or adapt to the situation. The GLF put unbearable pressure on the poorly developed transport system, \"thus inducing bottlenecks and the overstocking of goods at critical transit points\" This imbalance within the structure of the national economy, combined with inevitable bottlenecks meant that", "label": 1 }, { "main_document": "certified and/or preferred suppliers (for example long-term purchasing agreements, licence agreements, partnership agreements, co-design agreements). Decisions related to adopting a supplier strategy based on multi- versus single-sourcing. Major investment decisions (in buildings, equipment, computers). Decisions with regard to backward integration, i.e. decisions to participate financially in supplier organizations in order to safeguard the future supply of critical materials. Decisions related to policies concerning transfer-pricing and inter-company supplies. Decisions related to policies on reciprocal arrangements, counter-trade and barter-deals. This list illustrates the long-term, strategic impact that purchasing and supply decisions may have on the company's competitive position. The tactical level encompasses the involvement of the purchasing function in product, process and supplier selection. Examples of purchasing decisions at this level are: Agreement on corporate and/or annual supplier agreements. Preparing and developing value analysis programs, programs aimed at design-review and product standardization. Adopting and conducting certification programs (including audits) for suppliers in order to improve the quality of incoming goods and materials. Selection and contracting of suppliers in general, and programs aimed at supply-base reduction, in particular.
Decisions on these issues often have a medium-term impact (one to three years). They are cross-functional in the sense that dealing with them effectively requires the coordination and cooperation of other disciplines within the company, including engineering, manufacturing, logistics and quality assurance. The operational level addresses all activities related to the ordering and expediting function. This level of activities incorporates the ordering of materials, monitoring the deliveries and settling quality disputes on incoming materials. More specifically, the operational activities of the purchasing function include: The ordering process (release of orders corresponding to already concluded agreements with suppliers). All expediting activities related to released orders. Troubleshooting: solving daily problems on quality, supply and payment in the relationship with the supplier. The monitoring and evaluation of supplier performance. Table 1 presents the relationships between the three defined task-levels and a number of purchasing activities. In this situation at the corporate level, a central purchasing department can be found where corporate contracting specialists operate at the strategic and tactical level (see figure 3). Decisions on product specifications are made centrally (often in close cooperation with a central engineering or R&D organization), and the same goes for supplier selection decisions; contracts with suppliers are prepared and negotiated centrally. These contracts are often multi-year agreements with pre-selected suppliers, stating the general and specific purchase conditions. The operational purchase activities are conducted by the operating companies. General Motors Europe and Volkswagen may serve as examples of companies which have centralized their strategic and tactical purchasing operations to a high degree. Other examples are Xerox and the Ford Motor Company.
The main advantage of this structure is that, through coordination of purchasing, better conditions (both in terms of prices and costs and in terms of service and quality) from suppliers can be achieved. Another advantage is that it will facilitate efforts towards product and supplier standardization. The disadvantages are also obvious: the management of the individual business unit has only limited responsibility for decisions on purchasing. Often the problem is that the business-unit managers are convinced that they are able to reach better conditions", "label": 0 }, { "main_document": "The obesity epidemic is a constantly growing, serious social problem. Many institutions and organizations all over the world have joined together in order to combat the obesity wave, which is already present in Europe. As obesity is a very complex phenomenon, multi-factorial, multi-stakeholder actions are being undertaken at all levels: global, transatlantic, regional, national, state, provincial and local. The World Health Organization and Consumers International are the main bodies fighting the obesity problem on the global level. In Europe, the European Commission, the European Food Safety Agency, the European Consumers' Organizations (BEUC), the European Associations of Advertisers and representatives of the European food industry such as the Confederation of the Food and Drink Industries of the EU (CIAA) try to combat the problem. Representatives of the food industry say that, as they are part of the problem, they want to be a part of the solution (3). Also multinationals such as Coca-Cola are participating in the action. Recently Coca-Cola declared it would withdraw fizzy drinks vending machines from all primary schools across Europe by the end of 2006 (8).
On the national level, in particular in Britain, where the problem is more serious but also better highlighted, the battle against obesity involves government and government bodies such as the Food Safety Authority (FSA) and the Department for Environment, Food and Rural Affairs (DEFRA), as well as NGOs such as the National Heart Forum and the Consumers' Association known as 'WHICH?', local councils, and many others. In Poland, where the problem still seems to be new and not much data is available on obesity and overweight among children, the Polish government has just undertaken work on the implementation of the WHO strategy on Diet, Physical Activity and Health (DPAS) (9). In Poland, work conducted by the government is supported by contributions from state institutions such as the National Food and Nutrition Institute, consumer organizations such as the Consumers' Federation, as well as representatives of the Polish food industry - the Polish Federation of Food Industry, which at the very beginning of 2006 started running a programme. As the obesity wave moves from the West to the East, many alliances have been created. The best example of transatlantic cooperation aimed at combating the obesity epidemic on both sides of the Atlantic is the Trans Atlantic Consumer Dialogue, which at the beginning of December 2005 organized, together with BEUC, a conference ' Presently, Central and Eastern European countries should be taking precautions and learning from their western neighbours. On the European level, the European Commission launched The European Platform on Diet, Physical Activity and Health, whose main duty is to bring together all the stakeholders and keep work against obesity on track.
Issues raised in the Commission's Green Paper on 'Promoting healthy diets and physical activity: a European dimension for the prevention of overweight, obesity and chronic diseases' are supposed to be commented on till 15 Especially questions such as marketing, advertising and labelling of food commodities aimed at children are of crucial importance. Also all the actions aiming at increasing", "label": 0 }, { "main_document": "(or the ''non-economic'') from the cold, while still assuring the hegemony of economics'. This suggests that the post-Washington Consensus is simply the old Washington Consensus plus the so-called social capital. As to development policies, social capital is a hazardous concept, since it involves quantifying qualitative variables such as trust, civic engagement, horizontal associations and network of connections. Despite some attempts of measurement, some argue that these variables should not be quantified at all. The current measurement of social capital consists of unsystematic evidence from many sources using different definitions. So, our effort to show that social capital is not an analytically useful concept in relation to development has to proceed through speculations about the results already available. Our findings show that, if it is important to 'put the social right', as has been propagated by international institutions that adopted social capital as their main approach, it is even more important to 'put strong institutions right' in developing societies. Moreover, state and citizens are complementary, not exclusive. We report two case studies where state and institutions play a central role in development. The first case comes from Judith Tendler's work in northeastern Brazil. Tendler (1997, quoted in Harriss 2002, 66-8) studied a government health reform in the state of Ceara. 
Although the state citizens can hardly be regarded as an example of civic engagement, they possess connections and relations that are important for their daily lives. Contrary to the normal assumption, however, the social capital in this case was not truly beneficial, for the connections and trust relations in which those citizens were involved were bounded by parochial loyalties, especially in relation to public services. To counter that, the central government shifted power from local authorities to the state sphere. Mayors and local politicians lost the power of booking health appointments, a measure designed to avoid political cronyism and rent-seeking. More than 7,000 health agents were hired by the state government and treated as private-sector employees (following a customer-centred and problem-solving approach) and performance results received media publicity. The public knew what they should expect from a health agent and had channels to make complaints in case of failure. The agent, on the other hand, was publicly praised for his effectiveness and was no longer dependent on local power-holders. This example shows that decentralisation is not always the best solution to development: a central state may be necessary to the success of policies. The government played an important role in changing aspects of the social capital owned by people in order to promote more equal access to public health service. Rather than using the existing social structure, the state implemented policies that, in the end, reshaped the structure of social capital. New relationships of trust between government and citizens were built based on efficiency and accountability. The second case illustrates the importance of institutions to promote trust relations and thus 'social capital'. Knack and Keefer (1997, 1251-2) analysed World Values Surveys' indicators of trust and civic norms from 29 market economies. 
First, they found that membership in formal groups, one of Putnam So,", "label": 0 }, { "main_document": "the social acceptance and support, another arises. During the twentieth century, there was always a \"new purpose\" to regulate this subject. At first there was a need to create a universal legal framework for transplantation, then to define on legal grounds the fundamental medical terms connected with this subject - death, donation, consent, patient interest, etc. It was for this reason that the UK's Human Tissue Act 1961, in general, was passed; then to provide a system of consent, preventing abuse (especially of dead bodies). Every year the demand for organs gets higher, but the number of donors does not increase. In looking for other solutions, new difficulties arise: is it ethical to clone embryos for therapeutic purposes? Should the sale of organs be legalized - now completely banned in Great Britain by the Human Tissue Act 2004 s 32, but not in other countries? Increasingly important will be setting rules on the commercial exploitation of the human body, and deciding whether it should be allowed only for \"research purposes\" or perhaps some others. It is even possible to say that this number will decrease, because medicine can save more lives now than before and will probably be able to resuscitate still more? \"Textbook on Medical Law\"; Michael Davies, Blackstone Press Limited, Second Edition 1996 It is always possible to look at Iran's example, where it is lawful to sell organs. Indeed, it might be shocking that when a potential recipient's dialysis starts to fail, he starts looking for a suitable live donor. \"I put an advertisement in the paper for a kidney, and a donor came straight to me. We reached an agreement on the price quite quickly. In these cases, the recipient usually takes care of the donor afterwards.
So I still visit my donor and help him out.\" says Gholamreza, a 44-year-old man from northern Iran. A \"big question\", somebody might say, that should be discussed not only by special institutions such as the International Forum for Transplant Ethics, for example, but also by Member States. \"Your part or mine?\" Organ transplants; 18 November 2006; Ibid. In the UK, section 32 of the Human Tissue Act 2004 prohibits \"commercial dealings in human material for transplantation\". In fact, it would be reasonable to say that the sale of the human body is regulated completely and understandably, but is the law fully obeyed and accepted by citizens? Organ sales are banned in many countries; therefore a black market occurs in most of them, and will probably persist because the demand for organs is still higher than the available supply. A person who is aware of inevitable death in a short time would try to do everything to survive, even commit an offence, but stay alive. Some people say that what the western world cares about is nothing more than \"mere dollars\". Maybe, but in my opinion many people do not realize how far they would be able to go to save their own or a loved one's life. From another side, it is still possible to find people who would be willing to sell organs - because of extreme poverty or just to", "label": 0 }, { "main_document": "tunnels. There is also hazard reduction when using CAD compared to prototype testing, and many CAD promoters suggest that using computer aided design is more durable. One of the largest benefits of using CAD is the ability to use the software to visualise the results of the analysis easily. Not only is prototyping expensive but it raises scaling problems associated with using models that are smaller than the actual design. CAD overcomes these problems as it is possible to draw the sketch to the actual size of the component. Scaling problems arise in prototyping as using models of different sizes affects the Reynolds numbers achieved.
With a different Reynolds number for the prototype, the prototype's results can differ from those that will be achieved when the final component is tested. This can potentially be dangerous, as the component could fail and cause injury because the correct prototype results were not obtained. CAD generates more information than other methods of analysis. This can be both an advantage and a disadvantage to the design process, as a large amount of data can be harder to analyse: the operator needs to be able to extract the correct information that will be useful for evaluating the design. It is necessary that the engineer using the software is skilled enough to evaluate the results but also to configure the parameters. An over-dependence of the engineer on the software/computer in configuring the parameters can lead to results having no value within the design problem. Once CAD and analysis have been performed on a design, it may be necessary to validate the results obtained; this is when manual techniques can prove useful. Within F1, teams still use wind tunnels to analyse the final design before going into production, to validate the results obtained from their computational fluid dynamic analysis. F1 teams overcome the scaling problems by using a full-size model of the car within the wind tunnel. This, however, is expensive, so for other products it might not be feasible to test at full scale. CAD must be used appropriately by an experienced engineer to be of value within the overall design process. Training and experience are invaluable for an operator to ensure that the software is optimised and produces relevant information. I believe that CAD can be used to produce better products but designers and engineers must still use their knowledge and insight and not become over-dependent on the software.", "label": 1 }, { "main_document": "be used to maximize advantage within the organization.
Given the need for precise forecasting aimed at minimizing costs and maximizing revenue, the accounting skills of financial controllers are still of leading importance (Burgess, 1995). They offer valid advance notice of future action to the management team of the organization. In the future, the financial controller is more and more likely to play the role of a mediator between the various departmental heads and senior management. This will help anticipate and develop the financial stability of the hospitality organization (Burgess, 1995). Over the past three decades the role of the controller and his image have changed significantly. From being previously considered somewhat of a bookkeeper, focused merely on financial results and data collection and reporting (Burgess, 1993), he has become an active part of the management team, taking an important role in both daily and strategic decision making. The development of technology and the computerization of management systems has therefore played an important role in broadening the duties of the controller. It has made him more involved in departmental management as well as superior administration (Geller and Schmidgall, 1984). Education and training are thus essential for the controller to carry out his new tasks effectively. Apart from technical knowledge in accounting and finance, experience in operations is also vital in assisting the proficient assessment and analysis of the data gathered, facilitating valuable communication between the management team and contributing to the general efficacy of financial control (Burgess, 1994). In conclusion, Burgess (2000) puts forth that the role of the controller in the future will be heavily dependent on hospitality trends.
Communication skills are no longer a plus but a definite requirement; the controller will have to be increasingly well prepared, trained and educated, informed of the market and able to forecast future trends.", "label": 0 }, { "main_document": "generate first four global peaks. Also it is utilized in the main body when the mutation function is called at each generation. The first four values in the sorted array should be the ideal output. However, it does not work. The reason considered is that the search area, between 0 and 2*pi, is too narrow, so the probability of finding suitable points through a random approach is too low. When the search area is extended from 2*pi to 20000*pi, there is still no output. So the approach applied in task 2 is not successful. Task 3 is to embed GA into robot racing. The robot racing software is a competition for programmers and an on-going challenge for the practice of Artificial Intelligence and real-time adaptive optimal control. It consists of a simulation of the physics of cars racing on a track, a graphic display of the race, and a separate control program (robot 'driver') for each car [3]. GA is applied to search for the best values of some specific parameters which are already defined for the cars. So GA can figure out the optimized values of those parameters which influence the speed of the car, to realize the best performance for each one. In one of the present car files, BURNS, all the related parameters are listed. CORN_SPD_CON, which determines how fast to take corners, is selected to be evolved by GA. Compared with the basic application of GA in task 1, the input is the corner speed and the output is the lap time, which can be returned from the main function when running the racing program; moreover, the returned lap time reflects the influence of the GA on the corner speed. So, following the approach in task 1, the output lap time is set as the fitness. If the lap time is shorter, then the corner speed is fitter.
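The lap-time-as-fitness idea can be sketched in a few lines. This is only an illustrative sketch, not the actual robot-racing code: the `lap_time` function below is a hypothetical stand-in for the racing simulator, and the population size, mutation scale and speed range are invented for the example.

```python
import random

random.seed(0)  # for reproducibility of the sketch

def lap_time(corner_speed):
    # Hypothetical stand-in for the racing simulator: in the real task the
    # lap time is returned by the simulation, not computed by a formula.
    # Here the fastest lap is obtained near a corner speed of 6.0.
    return (corner_speed - 6.0) ** 2 + 60.0

def evolve(generations=100, pop_size=20, mutation=0.5):
    # Random initial population of candidate corner speeds.
    pop = [random.uniform(0.0, 12.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Sort ascending: a shorter lap time means a fitter individual.
        pop.sort(key=lap_time)
        parents = pop[: pop_size // 2]            # selection (keep the best half)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                 # crossover (blend of two parents)
            child += random.gauss(0.0, mutation)  # mutation (small random change)
            children.append(child)
        pop = parents + children
    pop.sort(key=lap_time)
    return pop[0]  # evolved corner speed giving the shortest lap time

best = evolve()
```

Because the best half survives unchanged each generation, the shortest lap time found so far can only improve, which is why the final sorted population's first individual is taken as the answer.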
So, depending on the fitness assigned to each value, all the values in the current generation are sorted in ascending order, opposite to the descending order used in task 1. All the remaining GA processes, such as selection, crossover and mutation, are the same as in task 1. When the last generation finishes, the first individual - the evolved value giving the shortest lap time - is the final optimized corner speed. The lap time gets shorter as the generations progress, and finally the optimized corner speed can be found. Furthermore, not only the corner speed impacts the lap time; all the other parameters also contribute significantly to the racing speed. So, all the parameters can be evolved by GA to obtain a group of optimal parameters that further improve the car's performance. From the statement of theory and the three applications to practical problems in this report, it is clear that EC is a powerful tool for solving problems in a wide variety of scientific and engineering research areas. It has developed into a field that imports biological ideas into computational design. As the most popular evolutionary", "label": 0 }, { "main_document": "This module had four assessed assignments, each completed fortnightly. The main programming language for me was Fortran 95. The programs were compiled using either the 'Intel For assignment 4, Python 2.4 was used to write the GUI to interface to the compiled Fortran code.
The intended outcome of the assignments was to find the minimum energy configuration of a set number of 'atoms' experiencing a mutual Lennard-Jones interaction of form This was achieved using Monte Carlo methods to pseudorandomly shift the position of each atom and see if the energy was acceptable according to a Boltzmann distribution, with energy proportional to From this you can understand that the temperature of the system, The program picks four random positions for the atoms inside a suitably sized sphere. The energy of this state is calculated using two loops to find the distance between distinct pairs of atoms and putting this distance into the Lennard-Jones potential model. This energy was summed over all distinct bonds. Every atom (four in this case) is in turn moved slightly (within If the energy is favourable then the change is accepted, else the atom is placed back. At the end, the program returns the final energy per atom along with how many moves were accepted. The final configuration is also written to disk in xyz format, so as to be read by external programs. For these 4 atoms, looping 500,000 times with This energy is normalised so that the minimum energy a 'bond' between two atoms can have is -1. The computing facilities (2.6 GHz Pentium4 Xeon with 2 GB of memory) were provided by the Centre for Scientific Computing of the University of The program was modified to output the energy per atom averaged over the last 90% of the moves loop as well as just returning the energy of the final configuration. Also the energy change caused by the shifted atom was calculated more efficiently by only summing over the bonds that actually moved. These changes were tested and shown to give compatible results with assignment 1. OpenMP was then added to the code. The two DO loops that calculate the energy (the full and efficient versions) were parallelized with a static schedule. 
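The acceptance test described above is the standard Metropolis rule. Below is a minimal Python sketch of the same procedure in reduced Lennard-Jones units; the temperature, step size and sweep count are illustrative choices, not the assignment's actual parameters, and the real program was written in Fortran 95.

```python
import math
import random

def lj(r):
    # Lennard-Jones pair energy in reduced units: 4*(r^-12 - r^-6),
    # with minimum value -1 at separation r = 2^(1/6).
    return 4.0 * (r ** -12 - r ** -6)

def total_energy(atoms):
    # Sum the pair potential over all distinct pairs of atoms.
    e = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            e += lj(math.dist(atoms[i], atoms[j]))
    return e

def monte_carlo(atoms, temperature=0.1, step=0.1, sweeps=2000):
    energy = total_energy(atoms)
    accepted = 0
    for _ in range(sweeps):
        for i in range(len(atoms)):
            old = atoms[i]
            # Trial move: shift one atom by a small random displacement.
            atoms[i] = tuple(x + random.uniform(-step, step) for x in old)
            new_energy = total_energy(atoms)
            delta = new_energy - energy
            # Metropolis rule: always accept downhill moves, accept uphill
            # moves with Boltzmann probability exp(-delta/T).
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                energy = new_energy
                accepted += 1
            else:
                atoms[i] = old  # reject: put the atom back
    return energy, accepted

# Four atoms started at random positions in a small region.
random.seed(1)
atoms = [tuple(random.uniform(-1.0, 1.0) for _ in range(3)) for _ in range(4)]
final_energy, moves = monte_carlo(atoms)
```

For brevity this sketch recomputes the full energy after every trial move; the assignment's optimised version instead sums only over the bonds involving the shifted atom, which is much cheaper for larger clusters.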
In tests on multi-processor machines this schedule was the quickest followed by guided and finally dynamic. Further testing showed that the Parallelized ( Simulated annealing was introduced to the code. The 'temperature' variable was reduced over 50 iterations from 1 to 0.00001 according to the equation Correspondingly, the maximum small change each atom could be shifted by was calculated to be Each iteration ran the entire Monte Carlo procedure, taking in the configuration that the last iteration arrived at. The temperature, value of By the end of this annealing procedure the atoms were in a minimum energy state. For The average energy was -1.499993 per atom. For The average energy was -3.409741 per atom. The results were reproducible. The 13-atom simulated annealing run was performed on an SGI Altix supercomputer, a large shared memory machine with 56 x 1.6 GHz Intel Itanium2 processors and", "label": 1 }, { "main_document": "lower incomes or unemployed and tended to be more likely to suffer from ill health and social exclusion. Research proved the hypothesis correct. The method used to generate the research would have been a quantitative method. That is one which looks at the macro (bigger) picture, involves a lot of large scale research and can only look at generalities. Despite these generalities, it is the way, rightly or wrongly, that government policy is created. In this case the result was to introduce the \"Skills for Life\" programme which aims to improve the literacy and numeracy levels of 1.5 million adults by 2007 (Niace 2003). Therefore it can be seen that by using the deductive theory the choice of quantitative data collection follows and the decisions based on the results put into practice via new policy initiatives. On a similar vein, looking at one of the many government statistics from the Social Trends 33 survey it states that \"there were just 6,000 adoptions in England and Wales in 2001, an increase of 39 per cent since 1999. 
Despite this rise, the number of adoptions is still substantially below the peak in the 1970s\". Policy will be made upon these statistics, but quantitative data does not answer broader questions such as what are the experiences of mothers who give up their children and of those who choose to bring them up themselves - for this the qualitative method would be required. From this can be seen the value of feminist theories in promoting qualitative methods, whilst empiricists would argue that data collection provides all the answers. But despite the choice of theory or chosen method of research, it can be seen how theory, method and practice are very much inter-linked. One other theory worthy of investigation is that of Marxism. Marx argued that inequalities existed in society whereby the lower classes of society (proletariat) could be exploited by those who owned the assets (capitalists) to the extent that different rules applied to the two groups (Giddens 2001). An example of this can be seen by comparing the recent case involving Major Charles Ingram and that of the growing number of women in prison for first or petty crimes. Major Ingram was only given an 18-month suspended sentence after being found guilty of conspiring to cheat the show \"Who Wants to be a Millionaire\" (BBC News 2003) out of the top prize. Meanwhile, The Guardian (2003) reports a doubling of the number of women in prison for small crimes since 1999. The most common crimes committed by women are those of theft and handling stolen goods, crimes that are of low risk to the public, and women are likely to be driven to crime due to personal circumstances. Women are twice as likely to be imprisoned for their first offence as men (Carlen 2003). In this example, theory and practice are interrelated, be they Marxist or feminist, in that the outcomes have been influenced by the theory and by the legal system, which is predominantly male-based and elitist (Griffith 1997).
Kuhn (1970", "label": 1 }, { "main_document": "the bacteria must resist lysis from complement and find a suitable cell in which to replicate. The lipid capsule prevents lysis by complement and phagocytosis by macrophages. Macrophages are activated by the foreign The bacteria were only identified in the blood sample containing macrophages confirming the prediction Upon encountering a macrophage, Usually, upon infection a macrophage will be activated by CD4 T cells which produce cytokines. This causes the vesicles containing the Its genome encodes an acid phosphatase enzyme which is capable of preventing this fusion and the resulting respiratory burst. The respiratory burst is the release of a mixture of oxidising chemicals such as nitric oxide and hydrogen peroxide. It is thought that Recent research suggests that these acidic conditions can help obtain iron from molecules such as transferrin, indicating that iron is important for Studies concluded by Golovlior In particular, a protein of mass 23-KDa was shown This prevents inflammation and the attraction of other immune cells to the site of infection. The 23-KDa protein released by Another virulence factor is encoded by the gene The 29-KDa protein it encodes is suspected to act as an ion pump for toxic radicals MinD is therefore able to pump out free radicals when a macrophage is activated and a respiratory burst triggered. Baron and Nano identified genes They are an operon which regulated the release of the four proteins which appear to be unregulated when It is thought that these genes are shock genes, stimulated by the reduction in nutrients. After It appears that tularemia cannot be resolved by the action of the immune system alone. Only if infected macrophages are stimulated early enough, for example if alveolar macrophages are stimulated by IFN-γ before the As such, antibiotic treatments are required.
Tularemia is diagnosed if evidence suggests an individual may have been in contact with ticks, rodents or lagomorphs in countries where the disease is endemic. Recent threats of biological warfare could also suggest manifestation of tularemia if a relatively high number of individuals are admitted with similar symptoms. Thankfully, An immunoassay test will indicate that an individual has tularemia, as antibodies against the bacterium will increase fourfold. A titer of 1:160 or greater is confirmation that the individual has tularemia. Although PCR could be used, Grunow et al reported that some human samples contain substances capable of inhibiting PCR; it is therefore not a reliable diagnostic method. The other method of diagnosis is through the culture of It produces distinctive colonies as shown by Figure 8, a culture of Tularemia responds well to antibiotic therapy. Streptomycin is the drug of choice for adults and should be administered at 7.5-15 mg/kg intramuscularly twice daily for ten days It works by inhibiting protein synthesis by binding to the 30S subunit of the ribosome in Children should also be administered with streptomycin but at a higher dose of 15-20 mg/kg intramuscularly twice daily for ten days Other antibiotics such as gentamicin, tetracycline and chloramphenicol can be used but are less effective. Tularemia is not known to be transmitted between people", "label": 1 }, { "main_document": "For the purpose of this assignment, I have chosen to discuss two observations of the same family attending a clinic for Attention Deficit Hyperactivity Disorder (hereafter referred to as ADHD). This service is provided by the child and family community mental health team (CAMHS) and is designed to offer support to families living with a child with ADHD by helping them to deal with the associated behavioural difficulties alongside treatment with medication.
In accordance with NMC guidelines (NMC 2004), every effort has been made to keep the child and family anonymous. Throughout the case study, I will be seeking to use attachment theory to hypothesise about the behaviours observed and the resulting possible implications for the child's future mental health. I shall also discuss the child's development using Erikson's developmental theory (1980) as a framework. A genogram has been included in the appendices to illustrate the complexity of this family system and this will also be discussed later in the essay. Prior to the first observation, the family had already attended an assessment appointment where they had met the two nurses running the clinic; also present at this initial meeting was the client's older sister. The meetings took place in a private room within the CAMHS offices. Present at both meetings were the two nurses facilitating the clinic, myself, a student nurse, the client (a 6-year-old boy diagnosed with ADHD) and his maternal grandmother and her husband; his legal guardians whom he refers to as Mum and Dad. At the start of the first meeting, the boy was offered the opportunity to go off and play as he had done on the last visit with his sister; he declined, choosing instead to stay in the room and cuddle up to his Mum. The child was very endearing - small for his age with an elf-like face and very large eyes. I warmed to him immediately. During the meeting he spent a lot of time hiding under his Mum's coat and peeking out over the top to look at the other people in the room, then diving back under. Mum mentioned that he had been ill the week before and thought that this may be why he did not want to go and play and was more subdued than usual. I was conscious of trying to focus on what was being said in the meeting rather than on the child, which appeared to be what he wanted since he was making deliberate eye contact with anyone he could and pulling faces. 
Whilst this could be interpreted as disruptive behaviour, it was actually thought to be very endearing by myself and the other nurses. The discussion in the meeting focused mainly on problems with getting ready in the morning and bedtime, although Mum was also concerned about the child's impulsivity and the idea that he may harm himself. Throughout the meeting the family were very positive about the child, attempting to include him in the conversation where possible and giving him lots of cuddles, in fact, Mum referred to him as \"darling\"", "label": 1 }, { "main_document": "time it only offered vocational training for local students in medicine, law and theology. It was always competing with the abacus schools which Florentine merchants favoured more highly. Humanist education did not become important in Florence until the second half of the fifteenth century and then it was predominantly for elite families. Porter and Teich (eds.), Brown, During the Renaissance, the Medieval Latin language and spelling were rejected, as there was a desire to return to the purer Latin of ancient Rome. The leading centres of the rebirth of Latin in Italy were Padua, Arezzo, Bologna and Verona. It did not begin in Florence, only reaching it by around 1400. By the mid-fifteenth century Rome was dominant in the development of the new language. However, Florence did hold some importance because of the arrival of the Greek Manuel Chrysoloras at the turn of the fifteenth century, who taught and inspired many students including Leonardo Bruni and Poggio Bracciolini, who were to become important Renaissance thinkers. Bruni produced new translations of Aristotle's Greek text Latin was derived from Greek so Chrysoloras, being Greek, was knowledgeable and influential. By the end of the fifteenth century, Florence became dominant because of important Florentine thinkers such as Angelo Poliziano. 
In the late fifteenth and sixteenth century, Florence lost much of its influence as great artists such as Leonardo and Michelangelo were tempted away from the city and produced many of their masterpieces abroad. Leonardo had invented a number of military machines. After Florence had ended the war with Naples in 1480, Florentine politicians were only interested in keeping the peace and so Leonardo left Florence in 1482 to sell his war machines to other cities such as Milan. Although other places in Italy such as Rome and Venice had important roles to play in the development of the Italian Renaissance, 'the names most clearly associated with the Renaissance remain those of... artists from Florence' The conditions in Florence at the time made it the ideal place for artists and scholars to work. There were many wealthy patrons, a large collection of talented men, and a system of government that encouraged cultural development. As Hunt observes, 'Florence is certain to remain at the heart of any serious study of the origins and development of the Italian Renaissance' The importance of Florence in the development of the Italian Renaissance has not been exaggerated. 'Every important Italian City had its Renaissance, but the Florentine, because it was the most complete and the most influential, has a special claim to be considered representative of the whole' Hunt, Hunt, Cronin,", "label": 1 }, { "main_document": "scenario equations 1 and 2 become: Scattering occurs across a range of angles between the two extreme situations mentioned above. Hence a continuum of electron energies would be received by the detector, ranging from The electron energy distribution takes the following form: There is a noticeable gap between the maximum Compton energy and the photon energy This energy gap is given by This analysis is based upon the notion that the electrons are free. 
In detectors where the electrons are bound, the binding energy of the material may alter the shape of the resultant Compton continuum. This process involves the creation of an electron-positron pair from the incident photon in the intense electric field near the protons in the nuclei of the absorbing material. The photon disappears entirely and, provided that an energy of at least This energy distribution can be modelled as follows: The contribution of pair production is only significant for energies greater than the 1.02MeV threshold and so this interaction mechanism will have less of an effect on the emission spectrum than photoelectric absorption and Compton scattering. For this reason this form of interaction is not particularly significant in this experiment. Values calculated from 19 repeat integral readings (discounting result number 2 as an outlier): Error in integral, Error in mean integral, Error in The experimental value for the standard deviation is in agreement with the theoretical value. There is a discrepancy of only 1% which is encompassed by the error bounds of the experimental value, however there will always be a difference in the two values due to the statistical nature of the problem. The 2 values will converge as more readings are taken but they will only be equal when As This is due to the fact that the error in measurement = For example to half the magnitude of the error A scintillator crystal is a means by which to linearly convert the kinetic energy of a particle into detectable light such that by measuring the light from the crystal, the kinetic energy of the particle may be ascertained. With a refractive index close to that of glass (~1.5), the crystal can be connected to a photomultiplier tube in order to magnify the light signal for clearer analysis. A Sodium Iodide Thallium-doped ( Therefore it has high intrinsic detection efficiency. It has a high light output which maximises the quality of the output signal. 
It also has a small decay time - the lower the decay time, the more frequently the light pulses can be sent and so measurements can be taken at a greater rate. A device is required to convert the relatively weak light output of a scintillation pulse into a strong electrical signal that can be analysed more easily. The photomultiplier tube is particularly adept at achieving this, converting light signals of only a few hundred photons into a usable current pulse without adding large amounts of noise to the signal. The entire system is sealed within a glass envelope which maintains the vacuum conditions required for optimal acceleration of the low-energy electrons", "label": 1 }, { "main_document": "The doctrines of consideration and intention to create legal relations are two essential elements in the formation of contracts. Consideration can be defined as the act or promise to be performed by the promisee The reason why it is very common to find that \" Such act or promise is usually in the form of the promisee incurring some detriment by giving away something in order to get in return what is being promised by the promisor in his offer. This is where the Detriment and Benefit Analysis stems from: a benefit to the promisor and a detriment to the promisee. Intention to create legal relations, as the term itself implies, is simply the state of mind of the parties as to their willingness to be bound by the terms of the agreement. In general, the importance of these doctrines is that they are both tools used by the Court in determining the validity of a contract, the enforceability of the contract terms and the liability of the parties under that contract. But there has been much criticism of both doctrines, particularly about their reliability, usefulness and effectiveness. Originating from the concept that consideration must always move from the promisee. J.C Smith, The Law of Contract, 3rd edition, Sweet & Maxwell, at pg 64. 
Consideration is used to distinguish between those promises which are enforceable from those which are not. In other words, once the Court has been able to find consideration we are already in the realms of liability. But how far is this doctrine a perfectly adequate test of liability thus undermining the doctrine of intention to create legal relations? Let us take the example given by J.C Smith of the purchaser of a fish from a local fishmonger He points out that \" Indeed, in such types of transaction where consideration is the act of effectively purchasing the commodity If such cases are assessed subjectively, we will obviously not find any intention to create legal relations because it is impossible to say, as to the buyer's state of mind at the time of purchase, whether he has given some thought to the possibility of taking actions against the seller in case the fish was not of satisfactory quality. But we will still have a binding contract here because consideration provides adequate proof of liability. Intention to create legal relations can only be inferred on an objective basis that is, in the mind of the reasonable man. The Law of Contract, 3rd edition, Sweet & Maxwell, at pg 107. Ibid Consideration through performance. It is often said that consideration often evidences an intention to create legal relations. The latter can be inferred from the former, thus making it superfluous to have a separate doctrine of intention to create legal relations. Liability can be established as soon as consideration is found. In the case of Thus, the purchase and the inconvenience of using the product was enough consideration given by Mrs.Carlill in response to the offer. And from this, an intention to create legal relations can be inferred because she would", "label": 0 }, { "main_document": "The saddle shown above will be the object of this short account. 
It will be based mainly on the author's interpretation, although some major pieces of information from written sources will also be incorporated, as will the materials from the files of the object held in the Museum of English Rural Life, which currently owns the object. The scarcity of these materials, however, does not allow them to be used to a larger extent. The lack of archival evidence might be explained by the fact that the saddle was acquired directly from its previous owner and user, so the usual elaborate paperwork was never compiled. The existing files include the correspondence with the Godmans' farm in Horsham, which donated the saddle to the museum in 1958 (information derived from the object file). This particular saddle is now held by the Museum of English Rural Life, which obviously determines the way it is interpreted. The museum has divided its collections according to the materials the artefacts are made of, hence the metal, wood, straw or leather sections; the examined saddle would have formed part of the leather section had it been in the main exhibition area. Instead a similar saddle is presented as a part of a larger revealing the aspects of using leather on a farm, mainly to do with horses and their harness. The analyzed one is meanwhile kept in store, perhaps because of its incomplete form. A cushion is probably missing, as the rest of such saddles have one, and as one was needed when putting the saddle on a horse's back so as not to injure the animal. At this stage it is crucial, therefore, to explain the construction of this type of saddle and the way it was used in the past and is still used nowadays in some parts of the world, which will be analyzed further in this work. 
A cart saddle consists of a pad or a panel - in other words two cushions laid parallel to form a long gap between them, above the spine - to place the burden of the cart on the muscles on the sides of the horse instead of its spine and withers, protecting the animal from injuries. A wooden frame or tree is attached to the pad, which is then covered with leather housing. The leather part actually plays a mainly decorative and protective role here, but it undoubtedly dominates the appearance of the artefact, which is possibly why it is allocated to the leather section. A bridge is part of the wooden frame, and a metal protector is usually incorporated in it, in the shape of a channel in which the chain is placed. A girth strap passes under the belly of the horse to keep the saddle in position. Straps along the back of the horse, the meeter strap attached to the collar and harness, and the crupper strap along the horse's rump, kept", "label": 0 }, { "main_document": "The English East India Company faced challenges from three prominent ruling houses in India to its establishment of political hegemony. The strongest out of these were the Marathas, Hindu rulers of an independent kingdom stretching from Maharashtra encompassing most of the Deccan Plateau in south and south west India. The company got itself involved in a conflict over succession, and supported vested interests in the Maratha court. Initially suffering heavy losses, the company finally managed to isolate the warring segments among the Marathas and defeat them one by one in three battles in 1775, 1803 and 1817. Kalyan Chaudhuri, The prosperous kingdom of Mysore had been problematic for the English East India Company for a variety of reasons. Its rulers, Haider Ali and later Tipu Sultan, had proactively sought European, more specifically French, help in building up their armed forces. 
French traders also held a position of prominence in the kingdom vis- Moreover, a strong lobby of Hindu traders had an important role to play in the company's decision to invade Mysore because it had previously failed to convince the native administration to provide access to its markets. Over a period of 30 years between 1760 and 1792 Mysore was slowly but surely brought down over three stages of conflict, albeit not without rugged resistance. Sugata Bose & Ayesha Jalal, Kalyan Chaudhury, The final power which had to be overcome before the political supremacy of the English East India Company would be unparalleled and unmatched in India was the Sikh kingdom of Punjab. The company failed to make inroads into this small but powerful and rich kingdom primarily because of the realist and strong leadership of Ranjit Singh, who lived until 1839. Upon his death the Sikh kingdom suffered from shortsighted leadership, and provided the company ample opportunities to make itself an irritant. It forced Punjab to enter into a tripartite alliance with the company and Afghanistan, guaranteeing the mutual borders. It could be argued that previously the company's mindset was purely that of a merchant trading company, but now it was changing to resemble imperialism. The East India Company was contemplating a Russian attack from India's northwestern borders, and hence the insistence on Punjab agreeing to defend the integrity of the borders. However, coupled with some disastrously myopic military moves made by the Sikh rulers and the advent of Lord Dalhousie on the Indian political scene, the company found the opportunity to invade and annex Punjab in 1848. Kalyan Chaudhury, There was a decisive shift in the priorities and policies of the English East India Company as the Industrial Revolution took hold in Britain. At the forefront of the revolution were the cotton mills of Manchester and Lancashire. The China opium trading network and Indian J.W. 
Wong, in his Opium from Bengal and Malwa was exported to China, where it was sold to generate revenues to purchase silk, tea, silver bullion and other valuable commodities which were used to purchase Indian Karl Marx has aptly summarised it, \"The homeland of cotton was inundated with cotton.\" Sumit Sarkar, A further source of increased revenue was", "label": 0 }, { "main_document": "This essay will seek to understand how Grotowski's approach to theatre that resulted in his practice gaining the title of 'poor theatre'. I will attempt to explain the unique aspects of Grotowski's practices in the theatre and highlight the reasons behind Grotowski's theatrical beliefs, including his influences and aims for his own work for Grotowski, as its creator, was directly responsible for the definition of poor theatre. At the same time I will explain what is understood by the description of 'poor theatre' as a theoretical and a practical approach to performance in the theatre by looking at the different performances Grotowski was a part of. Grotowski's theatre career really began with his establishment of the Theatre of Thirteen rows with Ludwik Flazzen in Opole, a town sixty miles from Auschwitz in Poland. It was here that he lay down the foundations for 'poor theatre' but his greatest achievements as a practitioner would not take place until he relocated to Wroclaw on the first of January 1965. It was there that made his intent clear by giving the Laboratory Theatre the status of the 'Institute of Actor's Research'. The two titles are indicative of Grotowski's desire to explore theatre and push its limits. Grotowski's aim was to find the essence of theatre and classify it. This resulted in a systematic removal of anything that was not necessary in performance for it to remain theatre. 
He went on to say that 'By gradually eliminating whatever proved superfluous, we found that theatre can exist without make-up, without autonomic costume and scenography, without a separate performance area (stage), without lighting and sound effects etc. It cannot exist without the actor-spectator relationship of perceptual.' (Grotowski, Towards a Poor Theatre, Pg.19) Grotowski understood that the only thing necessary for theatre to function was a core of actor in a space performing to an audience. Everything else that could not be generated by the actors and their interaction with the audience was not necessary and therefore could and would be taken away from his work. As Grotowski wrote 'No matter how much theatre expands and exploits its mechanical resources, it will remain technologically inferior to film and television. Consequently, I propose poverty in theatre.' (Towards a Poor Theatre, Pg.19) Grotowski's work was given the label of 'poor theatre' as it was intentionally stripped down to its very essence and this is its most distinguishing feature. With all other aspects designated as 'superfluous' to theatre Grotowski formed a great interest in the relationship between actor and audience. He believed that the text of a play did not make theatre itself; only when performed by an actor to an audience did it then gain the title of theatre. An essential part of this relationship was the audience's ability to lose themselves completely in the spectator aspect of the performance just as the actor must lose himself completely while giving himself to the audience. The actor's technique was therefore absolutely essential to Grotowski's vision of the theatre. 
He saw the body as an instrument which must be capable of more than", "label": 1 }, { "main_document": "The approach taken in this assignment is to understand the marketing mix used by a top Indian company in the soft furnishings market that exports its products to the UK and to analyze how it uses the controllable variables to its advantage. The firm chosen is Fabindia. The company has been chosen on the basis of different parameters like brand name, specialty of products etc. Though Fabindia is a well-known name in India and to some extent around the world, it has only recently ventured into the export market, which gives us the opportunity to analyze and maybe predict its future course of action. American entrepreneur John Bissell founded Fabindia, which expands to 'fabulous India', in 1960. The company is unique as all its products are sourced from \"7500 craftsman and artistes\" (fabindia.com, 2005) from all over India, mainly rural parts. Through this unique feature Fabindia has been able to keep alive India's traditional textile industry while creating a distinct style of its own. The product range of Fabindia includes furniture, lights and lamps, stationery, home accessories, pottery and cutlery. Only since September 1 2005 has Fabindia exported its products to 33 countries around the world, including the UK. The above adage cannot hold truer than in the case when a company is trying to export its products. Identifying plausible markets and planning your foray into them can be an onerous task. The product is the most important element of the marketing mix of Fabindia. Right from the time it was founded in 1960, its product offering is how Fabindia differentiates itself from the competition. Its product mix and branding, and how Fabindia uses them to market effectively, are discussed in the following section. The first of three levels of the product (Strategic Marketing notes, 2005) is the core product that deals with what the buyer is actually buying. 
For Fabindia, providing the core benefit translates to providing furnishings to decorate homes. It is at the second level, when the actual product is formed and attributes like quality, features etc. are incorporated, that Fabindia has done exceedingly well. Fabindia aims for people who want \"fashionable products at reasonable prices\" (Fabindia.com/presskit); hence, it has made its (Levitt, 1980). This differentiation appeals more to Fabindia's customers. The third and final level, the augmented product, is where Fabindia can make its presence felt in the UK. It has just started exporting via its e-commerce site, which has very basic offerings on warranty, delivery and credit aspects. Now, in order for Fabindia to do well it has to come up with attractive offers like delivery slabs of 7 days/10 days/15 days and charge accordingly. This way it won't lose the competitive advantage it gains through its offerings at the actual product level. Evaluating Fabindia's product mix on four parameters - width, length, depth and consistency (Kotler, 2000) - it is found that: - The width consists of two product lines. - The length consists of four products. - The depth, though varying for various products, is still substantial, with cushion covers having 126 variants (including 18 types and
A convincing set of behavioural rules such as the convict code is necessary to order the residents' forms of social interaction, which is, in fact, 'knowledge [that] is itself inherently unstable, something which is created anew in each encounter' (Craib 1992: 102). The precariousness of resident-staff interaction is underlined by its strong dependence on contextuality. With \"the code\" in mind as an interpretative tool, all actors involved (which in this case may involve the residents, members of staff and the observer) continually scan incoming social stimuli for recognisable modes of behaviour which may contextually seem logical. In doing this, variables such as the particular social actors, the setting and the timing play an indispensable role in practices of indexing, i.e., \"sense-making\" (Cicourel 1973: 101, Wieder 1974). In this context, residents' 'problematic acts' were not confronted by staff as subversive behaviour; rather, their character was now converted 'into instances of a familiar pattern' (Wieder 1974: 151). In this way, the facility's staff utilised the convicts' code in order 'to formulate a recognizable coherent story, standard, typical, cogent, uniform, planful, i.e., a professionally defensible, and thereby, for members, a Unsurprisingly, Wieder sees \"telling the code\" in the first place as a means of justification of the chosen course of action. By \"telling the code\", residents explain and justify why they will not cooperate in certain situations or answer certain questions posed by staff. By accepting the code, staff, in turn, account for residents' uncooperativeness in activities and rehabilitative procedures. Wieder's reference to Durkheim is interesting: in \"telling the code\", all social actors in the facility render types of social interaction institutionalised, thereby attempting to (though not actually) reify them (Durkheim's \"social facts\"). 
With this, however, the door is only just opened to further and more penetrating interpretations of action-motivation. Ethnomethodology points out that, in order to gain real understanding that means something, 'intentions and the actor's views are always potentially relevant and must be taken into account' (Pitkin in Coulter 1979: 12-3). Indeed, by assuming interpretation ends at normative causality, sociologists inevitably miss out on the politics underlying social action (Coulter 1979: 11, Silverman 1972: 175). The darker, more precarious side of social interaction lies in the tensions of power underneath its surface. Ethnomethodology does not deny the existence of a \"common culture\", which would be an absurd claim to make (Alexander 1987: 262); however, the precariousness of social interaction is exactly what this \"common culture\" is not: individuals may challenge the existing order by either directly confronting it, or, more subtly, by manipulating its general traits to their advantage. Whether this challenge is pursued consciously or not is, strictly speaking, not relevant;", "label": 0 }, { "main_document": "find elsewhere in the market. Thus numerical flexibility cannot be applied here as the application of numerical flexibility pre-supposes the existence of trained labour in the market. On the other hand, organizations that have taylorised work systems for all their employees, and thus are merely concerned with reducing costs and not differentiation, seek numerical flexibility. For instance, there is a significant rise in the part-time workforce in call centres (Arrowsmith and Sisson, 2000, p.290) as employment of part-timers results in wage cost savings. Sectoral differences also play an important part in the implementation of flexibility. Skilled engineering workers are predominantly male, seeking full-time secure employment, and thus employment of part-timers or temporary agency workers does not come into consideration. 
However, the service sector constitutes a large proportion of people who are secondary earners in their household and who are only willing to work part-time due to other domestic responsibilities (Arrowsmith and Sisson, 2000, p.301-302). Thus the highest number of part-time workers (55%) is found in hotels, restaurants and other privatized services whereas the lowest are found in electricity, gas, construction, water (0%) and manufacturing and public administration (1%) (Cully et al, Table 2, 1998, Arrowsmith and Sisson, p.296). The implementation of different forms of flexibility thus depends chiefly on the focus of the organization, the nature of the organization and the operating sector it is located in. The process of multi-skilling and team working aims at benefiting the employee by diversifying their skills through job enlargement and greater involvement, and at benefiting the employer by streamlining the organization's hierarchical structure without recourse to the External Labour Market. However, in practice, since these processes involve over-utilisation and under-utilisation of the employee's capacity at different points in time, they are said to have a negative impact on job satisfaction, commitment and motivation in the long run as both over-utilisation and under-utilisation of capacity prove dysfunctional for the employees. Functional flexibility can also engender territorial and role conflict and an increased tendency towards social loafing, since team working makes individual contributions less significant (Beukel and Molleman, 2002, p.484), which proves to be detrimental to the overall well-being of the organization. The biggest hurdle to the implementation of numerical flexibility in practice is the vast range of newly formed European Commission Regulations for the contingent workforce. 
The implementation of the Council Directive 96/34/EC (The Part-time Workers Directive) (McColgan, 2000, p.125) and the Fixed Term Work Directive 99/70/EC in 2002 (Slater, 2003, 15) in the UK law have undermined the demand for part-time and agency workers in the previously unregulated UK labour markets. The aims of the EC directives are as follows: (i)To eradicate all discrimination between part-time and full time workers concerning all Terms and Conditions of employment inclusive of the pension schemes and access to internal job vacancies (ii)To enable facilitation of part-time work on a voluntary basis and to bring about improvement in the quality of part-time work which would require employers to consider requests of transfers of employees from full-time to part-time work and provide measures for access to part-time work at all levels of the organization. (Jeffrey 1998,", "label": 0 }, { "main_document": "result of the fall of the Berlin Wall (which seemed to have confirmed liberal democracy as a dominant ideology) and the globalisation of world politics. These factors have urged a re-conceptualisation of state-sovereignty (in post-Westphalian The emergence of various other governing actors in the arena of world politics avoids a monopoly of power so detrimental to democracy. State sovereignty is undermined since states are sandwiched between local (sub-national) and supra-state actors empowered through the processes of globalisation. Political power is being diffused downwards and upwards while state power appears now only one dimension of political power and not the single relevant one as realists have argued. Charles Beitz makes an interesting distinction between social and cosmopolitan liberalism and between institutional, individual cosmopolitanism on the one side and cosmopolitan liberalism on the other. While social liberalism is about fairness of states, cosmopolitan liberalism is about fairness of individuals. For more see Charles R. 
Beitz, \"Social and Cosmopolitan Liberalism\", 3, 1999, pp. 515-529. John, Rawls. For more see Richard Devetak, \"Signs of a New Enlightenment? Concepts of Community and Humanity after the Cold War\", in Stephanie Lawson (ed.) The Peace of Westphalia is viewed as the origin of the modern system of sovereign states, consisting of an agreement among European powers at the end of the Thirty-Years War (1648), to accept the idea of the continent as divided into independently governed states that should not interfere in each other's domestic affairs. For more see Andrew Linklater. On the impact of globalisation on world politics see Jan Aart Scholte, But is there any agreement that we live in an age of globalisation? Realism, for instance, views the transformative power of globalisation with more scepticism in what is essentially still an inter-national (as opposed to a transnational) environment. States are still the main sovereign actors, struggling for power. There is no \"invisible hand\" to regulate the global market, but rather an obvious fist that pressures the weaker to imitate and thus serve the stronger. Neo- Marxists and other leftist critics see globalisation as just another stage in the pathological expansion of market capitalism, mainly beneficial for core states and detrimental for poorer ones (in the periphery). On the debate between sceptics and globalists see, for example, David Held and Anthony McGrew (eds.), See Alejandro Bendana, Kenneth N. Waltz., \"Globalization and Governance\", James Madison Lecture, 4, December 1999, pp. 693-700. In line with the realist outlook, even the so-called altruism of foreign aid (or international assistance) is only the ideological Cold War product of self-interest, which continues to be used in disguise as altruism. In reality, national interest will always prevail. Moreover, unilateral action is back or maybe has always been in fashion. 
The recent naked display of American unilateral action undermines the respect for international law by creating a precedent of taking the law into its own hands. For instance, Archibugi was criticising the US politics of rejecting an International Criminal Court and preferring ad-hoc military tribunals. The arms of laws and institutions instead of leaders should judge on crimes against humanity. See Daniele Archibugi, \"Terrorism", "label": 0 }, { "main_document": "In other words, at least one of the slopes of the regression coefficients for lecture attendance and number of A-grades at A-level are significantly different for the four sub samples in the unrestricted model compared to the restricted model. Exam performance for students, both generally and with Quantitative Techniques, depends on a range of variables. This dataset includes 40 different variables, however, some of them are mutually exclusive, some are so highly correlated that multicollinearity might become a problem and some are not significantly affecting exam performance. First, when stating a model there is always a risk of omitting relevant variables. This might have severe implications, as it makes the regression biased, unless the omitted variable is unrelated to all the other included variables in the model. Consequently, with biased coefficients, the standard errors, t-ratios and therefore all hypothesis tests are deficient. Second, there is also a risk of including irrelevant variables in your preferred model. This has generally less severe consequences, however, if too many irrelevant variables are included this normally boosts the standard errors compared to the true model. This clearly makes the t-ratios smaller, hence some coefficients might be found insignificant even though they are truly significant. Bearing this in mind, I will start to investigate the impact of the variables on exam performance. 
My answers to questions 3, 4 and 5 jointly give the perception that the coefficients differ among the four sub samples of years. Since we have found significant differences in the coefficients for the four years, this suggests that we should include the dummy variables for the different years in the model of exam performance. Logical sense and empirical evidence Since the correlation between the attendance measures is high Moreover, it is convenient for the analysis to divide the AttCL variable into percentiles of attendance, with 0-20 % as the reference group This allows us to investigate the relation between not attending classes and lectures with the different quintiles of attendance. See for example: Romer D. Should they? J Econ Perspect; 7: 167-74. As shown in the appendix. These dummy variables are: DAttCL20to40, DAttCL40to60, DAttCL60to80 and DAttCL80to100. To get an overview of the determinants of exam performance, I screened selected variables in the survey to check whether they are significant in determining the exam performance. This is clearly not the only way to determine what is affecting exam performance, but it is a good start to see whether my intuition of what is important determinants are totally wrong or not. The regression outputs of some of the regressions are included in the appendix and the first variable I found to be highly significant and positively related to exam performance was Only students with a single-honours degree in Economics ( The other types of degrees seem to have a negative (although insignificant) relationship with exam performance, which may be grounded in the fact that single-honours Economics students often have a more quantitative background and/or interests compared to the other joint degrees. 
The only significant (and negative) expenditure coefficient is ExpAlc Females seem to do worse in", "label": 0 }, { "main_document": "In this report the Annual Financial Report of Renold PLC from the year 2004/2005 was analysed in order to determine if it provides a true and fair view of the company's performance. Also the reliability of the Annual Report has been considered in terms of past and future performance, in......... In this report the past performance of the engineering company Renold PLC will be analysed through the use of financial techniques such as ratios and peer analysis. The Annual Financial Report will be analysed in order to determine if it provides a true and fair view of the company, taking into account the various accounting adjustments that have been made within it. At this point, using the previously explored information, it will therefore be possible to suggest to a shareholder in the company what they should do with their shares. The predicted future potential of the company will also be discussed, as suggested by the Annual Report, and other sources of information. Finally, the reliability of the report to a shareholder will be considered. In order to analyse Renold PLC's past performance, ten performance indicators have been chosen that best illustrate how the company has achieved in the past year, and have then been related to a shareholder's position. The performance indicators chosen are: In the following financial ratios, either data from the Annual Financial Report or the FAME database have been used. In some of the cases these calculated values have been compared to ratios released by the FAME database. The purpose of this is to illustrate the variation that can occur in financial ratio calculation. This is due to the fact that different companies and countries use different variations of the same formulae, as a matter of preference or financial standards. 
ROSE, according to the Business Development Bank of Canada \"measures the rate of return the shareholders receive on their investment\". ROSE can be calculated by finding the ratio between Net profit (loss) after tax and Ordinary Shareholder's Funds. In the following graph there are two different lines, which is due to the fact that in 2004, Renold had to change the way it stated financial documents to include FRS17 regarding Pension Benefits. This graph shows a fluctuation throughout the years between 2001 and 2005, with a recent dropping to - 8.56% in 2005. This negative value gives reason to the fact that no final dividends were paid in this financial year. However, it should be remembered that the Pension Deficit was added in this year due to FRS17, which would have influenced this result to a high extent. This graph would appear to be a bad sign for a shareholder due to the fact that it shows an undesirable return on an investment. According to Karen Bradbury from the module Finance and Accounting at the University of Warwick and Dyson, \"Return on capital employed measures the relationship between the amount invested in the business and the returns generated for the investors.\" It would therefore be desirable to have a percentage ratio that is larger than that which you would", "label": 1 }, { "main_document": "indication of the flow of shipbuilding techniques across Europe. McGrail (1993: 98, in Greenhill 1995) claims, that 'during the Viking period and up to the mid-twelfth century.... the evidence is thus overwhelmingly that the Dublin ships and boats were in the mainstream of the Viking tradition, in their sequence of building, in their form and structure, and in their method of propulsion (and probably also steering)... The Dublin wood working techniques are in the mainstream, with little, if any, sign of regional variations'. 
The Graveney boat also provides the first archaeological evidence for what is called Crumlin-Peredsen believes that in Scandinavian shipbuilding this feature was introduced with the use of sail, as it improved the stability of the ship (Greenhill 1995). Large number of drawings representing Although an exchange of technologies between societies which are in frequent contact might seem an obvious thing, archaeological material from shipwrecks themselves can reveal some unknown aspects or regions of such exchange, as the Slavic example has shown. Therefore, we should not regard shipbuilding as a fully exploited area of research, but apply new technologies on known data in order to draw conclusions that could not be reached before. Well before and during the Viking period, Christianity was spreading widely across Europe. While the Franks or Anglo-Saxons was already Christians, in the central parts of Europe, as well as in Scandinavia the local pagan belief systems were still predominant. For this reason, it is fair to suspect, that the Vikings having reached and settled in so many locations would have had a frequent contact with Christianity. And indeed, towards the end of the Viking period we have evidence for wide adoption of Christianity within the Scandinavian society, together with more centralized power and political organization. Weather pagan or Christian traditions, they surely must have spread by means of ships, similarly to other cultural aspects such as artistic styles. Significant evidence proving this thesis is the spread of Scandinavian burial rites. Ironically, many of the available ship finds today come from inland contexts, such as burials. The well-known examples include the pre-Viking Sutton Hoo in Britain or Oseberg and Gokstadt ships from Norway. This indicates the secondary use of the ships and may bias slightly their interpretation, since it cannot be entirely certain to what extent are they representative of the contemporary ships used at sea. 
However, these finds are a valuable source of information about the burial rites in north-west Europe. The tradition of ship burials seems to have originated in Scandinavia, from where it spread to the Germanic and Frankish people, as well as Britain. On the territory of north-western Slavic tribe of Vielets in Menzlin, a number of burials have been discovered, which resemble the Scandinavian fashion of ship burial. Although the graves did not contain ships themselves, the grave cut was shaped into boat-type cuts laid with stones. This is an evidence for either Scandinavian settlement in that region or strong cultural influence (Duczko 2000). It is justified to suggest, therefore, that the ship finds can reveal a lot of information about burial", "label": 0 }, { "main_document": "Evolutionary computation (EC) as a subfield of artificial intelligence means design and application of computational model of evolutional approach which is based on the Darwinian theory. It refers a term of some computational techniques dependant upon the evolution of biological life in the natural world. Involved with combinatorial optimization problems, many kinds of EC models have been developed by some metaheuristic optimization algorithms, such as evolutionary algorithm (EA) which is a subset of evolutionary computation, including evolutionary programming(EP), evolutionary strategies(ES), genetic algorithms(GA), genetic programming(GP) and learning classifier systems. EC model can improve the electronic devices more intelligent to program itself without human preprogramming what was happening and without human intervention. It is widely used in the science and engineering area, such as innovative design, optimization, machine learning and flexible and adaptive system. Genetic algorithms (GAs) which is one of the most important EC techniques have been applied to solve practical problems in the rapidly growing field. 
Through three experiments of two maths function and a Robot Racing software which are all implemented by GAs method to evolve the parameters, it discern GAs have the positive impacts on the efficiency of searching optimized solutions to some specified problem. With the rapidly development of computer science and electronic engineering subjects, more and more advanced instruments those have the close relationship with human are invented to cause a digital resolution. They are changing the world and the human life. More and more hi-tech products are appearing among a variety of areas, from design of integrated circuit (IC) even to the application of artificial intelligent (AI) technology which is playing an important role in the modern world. AI is no longer only a movie which can not only be watched in its ever expanding influence to each corner of the world. The science of creating machines which can solve problems and reason like humans is usually referred to as artificial intelligence. AI can depend on different external situation to make a final decision like a reasonable human. Around us, it is easy to find that AI gives final opinions to help people make judgement on many issues in every day life. The most interesting application in the current age, is embedding AI technology into robot. However, most robots currently could only be considered as machines in our life but not intelligent. As stated by Murphy: \"While robots are mechanical, they don't have to be anthropomorphic or even animal-like.\" For example, robot which delivers hospital meals to patients looks like a cart, not a nurse [1]. So the robot associated with AI technology should have the ability to solve some problem without the preprogrammed by engineer. Moreover, the ability of learning can not be ignored on AI robot. 
It refers that robot can feel the influence of environment automatically and program itself to search the optimized solution so that it could cope with the unpredictable issue it met. Subsequently, each behaviour of AI robot causes it to contact the external world, and perceive the information of feedback about the change of the world through some instrument like", "label": 0 }, { "main_document": "Capital investment decisions are critical as they entail substantial amounts of cash and time that are usually irrecoverable. They reflect the long-term strategy of the firm. Compotech Industries Plc is facing a choice between three capital investment projects involving either an expansion of a current plant in Santa Clara, the conversion of a phased out one in Waltham or the construction of a new one in Ireland. A critique of the net present value model used to evaluate each alternative is put forward whereby the most significant advantages are its use of cash flows, its account of the time value of money and its value-additivity and ranking properties. The most highlighted disadvantages are the uncertainty in the estimation of cash flows and the difficulties in obtaining an appropriate discount rate. In a 5 year span, using a 20% discount rate, the NPV In a 5 year analysis and applying a 10% cost of capital, the NPV Using a 10 year time horizon and a 10% opportunity cost of capital, the NPV Solely on financial grounds, Ireland should be selected in a 10 year analysis and Waltham in a 5 year one. - Prior to making a decision Craig Thomas should engage in more detailed cost benefit analysis exploring issues such the experience of operating in Santa Clara, the complexities of building a greenfield plant, the uncertainty of operating abroad, the reliability of the variable inputs (demand, price and variable cost), inflation and tax implications and the possibility of applying other capital appraisal techniques. 
Capital budgeting decisions are critical because they are irreversible, (or costly to do so Unless they include real options, as will be explored later. Compotech Industries Plc (CIP) wants to expand production in order to cater for the increase in demand for the Flexi-Connector. It can either expand the plant where production currently takes place at Santa Clara, or can convert the manufacturing technology of another plant in Waltham or build a Greenfield plant with a novel and more efficient technology that is being Beta-tested. These three alternatives will be considered under three different scenarios using net present value (NPV) as the financial analytical tool to aid in the decision making process. NPV is the difference between the present value (PV) of the future expected cash flows and the initial investment outlay. The NPV method is a sound tool for project appraisal because it considers all expected future cash flows during the life of the investment and takes account of the time value of money (TVM). Cash flows are more verifiable and less open to manipulation than profits and they are also observed in other corporate finance issues The NPV method acknowledges that a pound today is worth more than a pound in the future because of inflation, risk and the opportunity cost of investing that pound today at the real interest rate. such as dividend payments, other capital budgeting methods and interest payments. All cash flows apart from the initial investment are assumed to occur with certainty at the end of the period. A firm will have accepted", "label": 0 }, { "main_document": "us believe. By rendering women invisible, several power relations that sustain the international political system, get masked too. 
Thus, for a comprehensive understanding of the functioning of international relations , says Enloe, women need to be made visible so that the horizontal and vertical power relations that enmesh the domestic, national as well as inter-national levels of the system, may become available for study. (Enloe 1990, pp.2-3 ). Keohane uses Hannah Arendt's concept of power as \"the ability to act in concert\" to argue that such redefinitions of power enable us to look at areas of world politics, where collective effort could be made to cope with common problems (Keohane 1989, p. 246). Thus, these reconceptualisations of power, from feminine perspective(s), afford not only a better understanding of the concept itself, but also allow this central mechanism of international relations to be put to more than the conventional uses of controlling and confining. With the final redefinition --- of the concept of objectivity--- the argument moves to the aforementioned issue of epistemological redefinition. Objectivity is defined as a \"perspectiveless gaze\" by theories based on the positivist model. Feminists however see everything as socially constructed and refuse to accept this claim to impartiality, made in the positivist definition of objectivity (Peterson 1992, p.12). These false claims to objectivity-as-neutrality compel the aforementioned theories of international relations to adopt very dispassionate and disconnected approaches to the study of the subject. On the contrary, feminist theory proposes the method of conversation as the ideal way of studying one's subject. It discards the definition of objectivity- as- neutrality and instead defines objectivity on the lines of Evelyn Fox Keller's concept of \" 'dynamic objectivity'\" (Keohane 1989, pp.247-8; Murphy 1996, p.527).\" 'Dynamic objectivity... uses consciousness of self in the interest of a more effective objectivity'\" (Keller in Keohane 1989, pp.247-8). 
This conception of objectivity provides a more accurate approach to studying international relations, for it does not make faulty claims to neutrality but acknowledges its partial status. Thus, feminists believe in not However, feminist theory does not stop at discarding objectivity for objectivit(ies) but also formulates ways for enmeshing (and not merging) these various objectivit(ies) to construct a more comprehensive view of international relations . Sylvester suggests the process of \"empathetic cooperation\", for instance. She defines it as: Such a dialogue, between objectivit(ies) unaccustomed to conversing with each other, reveals unnoticed links in the network of international relations. These revelations create opportunities for a more nuanced and less naive comprehension of the field, as opposed to the fragmented and incoherent understanding afforded by aforementioned established theories of international relations. While discussing \"empathetic cooperation\" Sylvester argues that the conversational processes involved act as \"vehicle[s] of disturbance\" that make it \"more difficult to think of According to Enloe, this starting place for theory is practice. And since theory for her means critical theory, she says that every time a woman articulates a complaint as to how her government is trying to \"control her fears, her hopes and her labours such a theory is being made.\" Thus, discarding the false dichotomy", "label": 0 }, { "main_document": "In 1999 Professor Ahmed H. Zewail was awarded the Nobel Prize in Chemistry for \"his studies of the transition states of chemical reactions using femtosecond spectroscopy.\"( This meant that structural features of the transition state (denoted Before this breakthrough only kinetic and thermodynamic information was known about the transition state. () Bratos, S. A transition state has a lifetime of less than The transition state lies at an energy maxima of a reaction (see figure 1.1). 
The bond breaking and forming occurs in the transition state, on a time scale of 10-100 fs, which before the advent of femtosecond lasers was unobservable. () Zewail, A. H. It is only the development of the laser since it was first developed from the maser in 1960 that this has been made possible. The first laser was the ruby crystal solid state laser Since their conception they have been used and revolutionised Science especially some areas of spectroscopy to the extent they are now essential. () Maiman, T. H. Spectroscopy is the branch of science devoted to discovering the chemical compositions of materials by looking at the light they emit. The origins of spectroscopy lie in quantum mechanics, and the implications it has for the energy levels in molecules. The Schr From Young's Double-Slit experiments it was shown that light possesses both wave and particle properties, this was further defined by Planck, when he theorised light was composed of photons possessing energy E; where h is Planck's constant = 6.626 x 10 Absorption spectra show only a few discrete wavelengths of light are absorbed (or emitted) by atoms and molecules. This is since the incident photons interact with the atoms or molecules; exciting the system from one energy level to another. The energy of the photon absorbed is the energy difference between the two levels: where E Hence only certain frequencies of light can be absorbed or emitted by atoms or molecules. Part of the Born-Oppenheimer approximation allows these discretely spaced energy levels to be separated into three components; electronic, vibrational and rotational, (see figure 1.2 from ( In electronic energy levels, electrons can be excited from one energy level to another, or relax from one energy level to another and emit a photon of the corresponding energy. 
However in vibrational and rotational energy levels the mode or frequency of vibration or rotation can increase or decrease by the absorption or emission of energy matching the difference between the respective energy levels. () Mackenzie, S. R. 2004. The development of the laser has allowed the probing and discovery of the fine line structure of these energy levels, especially the vibrational and rotational levels, and their respective transitions governed by selection rules. The remainder of this essay will look at the basic principles of lasers, their uses in creating high resolution Absorption Spectroscopy and the Raman Effect and how lasers have allowed us to use it for Spectroscopic purposes. The word laser is an acronym for: Lasers create intense, coherent (one wavelength of light), highly directional light sources of high intensity. In most cases however a", "label": 1 }, { "main_document": "from the oil bath and cooled in an ice bath, forming red crystals. These crystals were then filtered off on a small sintered glass filter with suction, washed with 15mL of 95% ethanol, then 15mL of diethylether, and dried on the filter. The IR spectrum of the product was obtained (figure 3), its magnetic susceptibility was determined (table 1) and its melting point was found to be 234C. X where l is the length of the sample in sample tube, C is the calibration constant (1.275) R is the reading from the evans balance, R The corrections used are: 13x10 There are two possible co-ordinations for four co-ordinate complexes: square planar and tetrahedral. The crystal field splitting diagrams for these two geometries are shown below filled with 8 d electrons as in the complexes in this experiment: As these diagrams show, the square planar geometry is often more favourable for d However, the terahedral geometry is sterically much more favourable, because the bond angle in this case is 109, compared with 90 for the square planar geometry. 
Steric factors are much more important when considering first row transition metals, because the metals are much smaller, so the repulsion between the electrons in metal-ligand bonds will be increased. This means that bulky ligands will usually force the complex to adopt a tetrahedral geometry. The energy difference between the e and t The magnitude of The magnetic moments of the complexes synthesised in this experiment (see in table 1) indicate that CoCl(PPh This means that CoCl(PPh From this it is clear that steric factors out weigh the benefits of the low energy square planar configuration in the case of the CoCl(PPh Conversely the Ni(NCS) Geometric and linkage isomerism are possible for this complex: The co-ordination of the (NCS) ligand can be determined with the infrared spectrum of this complex (figure 3). If the ligand co-ordinates to the metal atom via the sulphur atom, the stretching of the CS bond appears at 700cm If the ligand co-ordinates to the metal via the nitrogen atom, however, these stretching frequencies are different; approximately 800cm In figure 3 the absorbances at 2084 and 692cm As there are only two absorption bands for each of these bonds, I expect that the complex adbots a trans geometry. This is because on the asymmetric stretch for a For a This is consistent with steric arguments, as the This means that the Ni(NCS)", "label": 1 }, { "main_document": "and death processes. Based on metapopulation dynamics theories, however, dispersal rate has to be taken into account. Therefore, 'dispersal rate' was added into the MatLab files as one of the processes in Monte Carlo simulation. Before entering dispersal event into the files, a control run without dispersion was produced by running 'dosims05' via MatLab software. There popped up four graphs, which only the first graph of each series graphs was interested because of the targeting outcome. Twelve iterations sub-plotted in the diagram addressed by repeating function D. 
(Figure 1) For a better visualization, there designed another graph for summarising 12 repetitions into 'dosims05'. (Appendix III) It illustrated the mean of twelve means of population size over time without dispersion. (Figure 2) The model was initailised with population growth rate i.e. birth rate and death rate, initial population size (IC), number of populations (Npops), number of repetitions (Nreps) and maximum time period (MaxTime). In 'fsim05', 'DspR = pops .* Dispersal Rates' was coded following same equations for birth and death rate that indicated they were all calculated at population level. Dispersal rate was also added as part of total rate of events by inserting code 'sum(DspR)'. (Appendix II) Since once a dispersal event takes place in one subpopulation, there is always another subpopulation has the opposite influence in population size, dispersal rate was subtitled as leave rate and arrive rate. By ignoring the speed of dispersion, it was considered that leave rate and arrive rate happened in one subpopulation were always at the same time point. Therefore, leave rate and arrive rate were programmed into 'fsim05' within one case, noted as case 3. (Appendix II) The random numbers were produced to determine time to next event, which event to occur and which population to occur. Since leave event and arrive event were discussed separately in case 3, totally 4 random numbers were required. To change the code of random numbers in 'fsim05' 'randnums = rand ( 1, 3 )', modify '3', the second number in bracket into '4', while the first number meant the value of random numbers were within zero to one. (Appendix II) The first random number represented time interval to next whatever event would happen. The second random number was responsible for choosing one process to take place from birth, death and dispersion. It also determined which case it would be switched on. The third number decided which population that the selected event would occur. 
If dispersal event was picked, the third and the fourth random numbers were used to find two random populations to which leaving and arriving would occur, respectively. In case 3, when leaving event occurs, the population size decreases by one accordingly, meanwhile arriving is vice verse. To enable the computer recognize the change in population size, for example, 'pops (index ( 1 ) ) = pops ( index ( 1 ) ) - 1' was coded for losing one individual in that population. This can also be learned from the codes standed for the occurrences of birth and death. (Appendix II) The accomplished MatLab files", "label": 0 }, { "main_document": "were strengthened in the late fourth century, due to an imminent Macedonian assault, with the construction of the It is likely, however, that in this case Travlos generalises from evidence for a moat found in a single excavation, while the emphasis he puts on the military constructions of Hellenistic Athens is heavily influenced by the accounts of war given by numerous ancient writers. One would be more inclined, perhaps, to agree with Wycherley (1978: 20-21), who draws his evidence from Hellenistic inscriptions. These inform about occasional maintenance work on the Athenian walls with the The White Poros Wall, which succeeded the Although the White Poros Wall is thought to date from the late third century the down dating of associated pottery may actually bring its original date to about 175-150 BC (Conwell, 1996: 97). This may provide some evidence that even at later periods Athens was not particularly prosperous. The troubled historical events between Athens and Macedon during the third century are thought to be responsible for the lack of new public buildings. For this reason, the large rectangular Arsenal to the northeast of the Hephaisteion stands out as an oddity (Camp, 1986: 166; Pounder, 1983: 245). 
It remains unclear why this building was constructed during a time of uncertainty and tight finances, 270-260 BC (Pounder, 1983: 244, 246), and the identification as an arsenal is not without its limitations. Any understanding of this structure has been based largely on its foundation trenches and similarities between its ground plan and another arsenal in Peiraieus (Pounder, 1983: 233, 238). Building activity clearly resumed in the second century and this has been attributed to external stimuli because various foreign kings competed to adorn Athens with elaborate architecture (Camp, 2001: 170; Camp, 1986: 169; Green, 1990: 52). In the Agora (Figure 3, Plate II) the construction of long porticos (Stoa of Attalos, Middle Stoa and South Stoa II) has been interpreted as an attempt to order the haphazard layout whilst making Athens resemble the Hellenistic plan of cities in Asia Minor (Camp, 2001: 182; Camp, 1986: 180; Travlos, 1993: 86; Tomlinson, 1992: 64; Chamoux, 2003: 272). Therefore, it is implied that foreign benefactors assisted Athens in finding its rightful place in the Hellenistic world by transplanting elements of the Hellenistic However, the vast new porticos were self-contained units and did not resemble a typical Hellenistic marketplace with Ionian colonnades (Wycherley, 1978: 80). Although the Middle Stoa divided the square of the Agora into two unequal sections (Camp, 2001: 180; Camp, 1986: 175) a great deal of open space still remained (Wycherley, 1978: 82). Moreover, as Shear (1981: 360) points out, the Agora porticos respected the ancient tradition of the open square. The Middle Stoa and the Stoa of Attalos were placed at right angles with their approaches pointing to where the Panathenaic Way crossed the square. These appear to provide evidence against the complete revamping of the Agora according to external Hellenistic prototypes. 
This can be substantiated further by comparing the Hellenistic Agora to that of the second century AD (Figure 4, Plate II) when", "label": 0 }, { "main_document": "all countries could pass prescriptive extraterritorial legislation. As a fact, many actually did, perhaps relying, for this purposes, on the Lotus Case Available at As summarized by A. Qureishi ( In order to better figure out this division, let us pick an extreme example. Let us think of a statute enacted by a little fictitious country which provides for tax claim on the income of all persons, natural or juristic, everywhere in the world. Absurd as it may seem, such a law could at least be enacted validly within the territorial limits of an individual state. It might be argued that such a law would offend against international law - firstly, because states should refrain from exercising jurisdiction over people and things entirely unrelated to it Nevertheless, the authorities of this state might invoke the authority of Qureishi In that respect, ICJ former President Jimenez de Arechaga - cited by A Qureishi, ibid, Obviously, the example of our \"anti-tax haven\" is most unlikely to happen, as a worldwide revenues arising thereto would be impossible to collect efficiently and reasonably, as they would have to be provided from taxpayers whose identity or abode our imaginary country ignores. Such a statute would also be unfair, as our imaginary nation can provide nothing to persons who never set foot on its territory, let alone elected representatives in its Parliament to pass tax laws. 
It would in fact violate at least Tenets One, Four and Nine: "Tenet One - Statutory: tax legislation should be enacted by statute and subject to proper democratic scrutiny by Parliament." "Tenet Four - Easy to collect and to calculate: a person's tax liability should be easy to calculate and straightforward and cheap to collect." "Tenet Nine - Fair and reasonable: the revenue authorities have a duty to exercise their powers reasonably." Institute of Chartered Accountants for England and Wales, Published May 2000. Reference: Tax Guide 2/00, page 5, available at Op. cit., page 19. In any event, most problems would only arise when our imaginary "island in the rain" tried to enforce its overtly far-reaching tax statute abroad - for, until that moment, the content of such an extraordinary decree may be ignored by most of its targets. Leaving aside any practical considerations, why is this rule so absurd? Chiefly because it attempts to impose tax liability upon foreigners lacking any connection whatsoever with the country that enacted it. Having dwelt on the legal grounds underlying countries' right to tax persons and companies, and the reach of this right, we should analyze how individual countries have been dealing with their potentially rival claims on the assets of individuals and companies. International law has traditionally accorded to every country a right to tax people and companies within its borders and its nationals everywhere, in each case as a manifestation of the sovereignty link between a country and its nationals. It is also held by many It might be argued that our fictitious country, recognizing the importance of such a link (not only out of its will to abide by international legal standards, but because it
Profiling is a sensory technique which uses a group of panelists to systematically translate and discriminate the nature and intensity of the sensory characteristics of a food product. During the practical, before the assessors tasted the four different brands of chocolate to differentiate their attributes, the panel discussed and agreed on the criteria/terminologies to be used to describe each attribute, which were measured using an unstructured scale method. The chocolates analyzed were Waitrose Belgian milk, Cadbury's dairy milk, Asda milk and Galaxy milk. The panel consisted of fifteen assessors, and samples of the four chocolates were eaten whilst the attributes were discussed. The panelists were not experts but had received basic training in detecting different sensory attributes and in assigning an appropriate value to the intensity of the attribute being assessed. The aims of the experiment were to train the panel in discriminating the different attributes of the milk chocolates tasted, for the panel to demonstrate the ability to communicate information about the character and intensity of sensation, to use the data obtained to determine the accuracy and reliability of the assessors with the use of a statistics package for sensory analysis, and to demonstrate the ability to interpret the data/results obtained. In the sensory evaluation of the milk chocolates, there were 15 assessors to discriminate the nature and intensity of the sensory attributes. The analysis was done twice to replicate the data. Firstly the panel tasted all the chocolates blind in the sensory laboratory, with the samples labeled 1-4. With the help of the panel leader, the panel compiled a list of vocabulary thought to describe the chocolate best as it was eaten. The descriptive words were discussed in the panel so that the assessors understood them and were able to use them to describe the attributes of the chocolates.
Descriptive words were grouped into three main categories as follows: With the use of an unstructured line scale, the attributes were evaluated for each chocolate brand by computer. While tasting the chocolate samples in the sensory laboratory, the panel sat in individual booths and rinsed their mouths between samples. Table showing the means for the four brands of chocolate determined by 15 assessors in duplicate. The least significant difference was also calculated, within the Analysis of Variance (ANOVA), to test which means differed significantly. The method used for scoring was an unstructured scale, i.e. an open line scale with the maximum and minimum marked at the ends of the line. The scaling was done on the computer so that the data could be analyzed electronically. A two-way Analysis of Variance (ANOVA) was used to calculate whether or not the scores differed significantly, whether there was a difference between the assessors, and the least significant difference. The significant differences between the scores are shown in the table above. Using the data in the table and the star diagrams above, the differences between the attributes of the different brands of chocolate can be seen clearly. Galaxy generally scored the highest, and by a long way
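The least significant difference used above follows from the ANOVA error mean square: two treatment means differ significantly when they are further apart than t multiplied by the standard error of their difference. A minimal Python sketch, using made-up illustrative numbers rather than the practical's actual data:

```python
import math

def lsd(mse, n_per_mean, t_crit):
    """Least significant difference for comparing two treatment means:
    LSD = t * sqrt(2 * MSE / n), where MSE is the ANOVA error mean
    square and n is the number of observations behind each mean."""
    return t_crit * math.sqrt(2.0 * mse / n_per_mean)

# Illustrative values only: 15 assessors x 2 replicates gives n = 30
# scores per chocolate; suppose MSE = 2.5 and the tabulated two-tailed
# t-value at the 5% level for the error degrees of freedom is 2.00.
diff_needed = lsd(mse=2.5, n_per_mean=30, t_crit=2.00)
# Any two brand means further apart than diff_needed differ significantly.
```

With these assumed numbers the threshold comes out at roughly 0.82 scale units; in practice the MSE and t-value are read off the fitted two-way ANOVA table.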
The panel, because of a lack of formal instruction, may develop erroneous terms. For example, in this practical just a couple of students developed the term 'bitterness' for the after-effects; however, there might have been confusion among the other students over this term. As the nature of bitterness can vary, for some it may imply an artificial note, while for others it can imply other notes, such as a medicinal note. In this case, different people might not share the same understanding of a particular term. Lack of definition may also allow a senior panelist or stronger personality to dominate the proceedings in all or part of the panel population during the development of the vocabulary. The "free" approach to scaling can lead to inconsistency of results, partly because particular panelists evaluate a product on a given day and not on another, and partly because of the context effects of one product seen after the other, with no external scale references. The lack of regular, immediate feedback to panelists reduces the opportunity for learning and for expanding the terminology towards a greater capacity to discriminate and describe differences. On a minor point, the practice of connecting the "spokes" of the "spider diagram" can be misleading to some users, who may expect the area under a curve to have some meaning. In reality, the sensory dimensions shown in the "web" may be either unrelated to each other, or related in ways which cannot be represented in this manner.
More recent subcultures, or 'neo-tribes' (Maffesoli 1988, cited in Malbon 1998:280), are influenced by wider issues than parent culture alone (Malbon 2005:185; Maffesoli, cited in Malbon 1999:26; Kelly 2000:301). The world has become more global, more risky and faster changing (Lupton 1999:9; Beck 1992, cited in Critcher 2003:165), whilst the restrictions upon young people have become greater (Muncie 1999:6; Muncie 1997:67). In order to find coping strategies they have become less visible, creating their own 'clubs' and adopting fluid and adaptable cultures, and their consumption has become one of experience rather than style (Malbon 2005:184). The 'experience' is enhanced by activities frowned upon by the adult world - loud music, staying out late and the consumption of (usually large amounts of) alcohol and recreational drugs (Kerrigan 2001). This makes it easy to maintain the negative labelling of young people, easy for negative images of young people to be accepted as the truth, and difficult to see past these negative foundations. It is unclear how differently young people might be treated if the image of them were positive. The reasons for subcultures may have developed as theories have changed, but few new understandings have evolved. Despite adult concern, possibly more for themselves and their future than for young people, and despite extensive research, there seems little attempt to understand the desire of young people to find their own identities; rather the focus is on enforcing stricter controls and surveillance in order to bring them back into line. Furthermore, insufficient consideration of external factors such as boredom (Matza, cited in Downes and Rock 2003:153), 'poverty, addiction, unequal access to education' (Hesmondhalgh, cited in Bennett 2005:256) or employment (MacDonald and Marsh 2002:30; Downes 1972) will do little to support young people who have little accrued knowledge or experience to call on.
Adult misunderstandings and more restriction of young people's behaviour and actions will only lead to more outrageous behaviour on their part (Becker, cited in Bennett and Kahn-Harris 2004:4) and will do little to remove the negative image of young people dealing with the 'storm and stress' of adolescence. Until young people are allowed to integrate into the adult world and assigned rights as individuals, the negative labelling will remain (Smith 2003:187). It is heartening to see the UK Youth Parliament manifesto (2005:5) calling for an end to the negative portrayal of young people and the promotion of more positive images by the media, but it is difficult to say how long it will take to effect this momentous change.", "label": 1 }, { "main_document": "During the second millennium B.C. Cyprus was involved in an interregional trade network that spread from the Levant and Egypt in the east, to the Aegean and Sicily in the West. Maritime links throughout the region enabled the import and export of exotic goods such as ivory and lapis lazuli; as well as raw materials including Cypriot copper; and other commodities like spices and oil. Evidence of these trade routes can be ascertained through studying the distribution of regional pottery styles, and scientific provenancing of raw materials. Further information can be gathered from historical written sources such as the Amarna letters (Karageorghis 2002: 30) or the Mycenaean Linear B tablets from Knossos (Cline 1994: 60) and from material recovered from shipwrecks like those at Cape Gelidonya (Bass 1991) and Uluburun (Pulak 1997). Cyprus was engaged in a number of different aspects of the Late Bronze Age Mediterranean trade network. The Cypriot communities were involved in the production and consumption, and the import and export of both raw materials and finished products. 
In addition, the Islanders also played a key administrative role and acted as an intermediary unit trading goods between different countries involved in this interregional trading system. The exploration of this subject also encompasses wider ranging issues beyond economic concerns, including social and political complexity, settlement hierarchy, and contemporary ideologies. Broodbank has effectively summarised it thus: Through the investigation of the archaeological evidence and critical evaluation of current research, these ideas will be developed, with the aim of explicating the role of Cypriot communities in Eastern Mediterranean trade in the second millennium. The second millennium BC marks the transition between the prehistoric and protohistoric Bronze Ages, a definition based on the existence of contemporary written records which refer to Cyprus. The period under investigation here is the Late Bronze Age which, within the geographical confines of Cyprus, can also be referred to as the Late Cypriot (LC). Figure 1 presents the currently accepted chronological sequence for this period in Cyprus. The Late Bronze Age in Cyprus is characterised by an increase in population leading to the establishment of settlements in new areas and the development of urban centres with public and monumental architecture. Two other major developments that took place during this period, which are particularly relevant to the question under consideration here, are the intensification of copper production and the development of extensive trade and exchange networks within the eastern Mediterranean including links with Egypt, the Levant, Minoan Crete and Helladic Greece (Knapp 1994: 282). This marks the change from \"an isolated village-based culture into an international, urban-orientated complex society\" (Knapp 1994: 271) - that is, a society controlled by an emerging elite. It has been said that Cyprus played 'a pivotal role' in long-distance trade (Steel 2004: 169). 
The following discussion examines the nature of this role and begins by exploring some of the reasons why this might have been the case - notably its strategic position in the Mediterranean geographical context. Geographical factors such as the location of Cyprus and its proximity to other major participants in the", "label": 1 }, { "main_document": "tweed jackets, they dreamed new fangled dreams of real aeroplanes and one day, perhaps even riding in one themselves. George Bean's ambition to create an amusement park which would \"make adults feel like children again\" The Pleasure Beach is fuelled by fun and a sense of desire mixed with danger. Wyatt, M, I stand by a dustbin shaped like a large white rabbit and see a girl smothered in fake tan squeal as her boyfriend chases her with an ice cream. A toddler gazes suspiciously at a swaying pink balloon tied to his pushchair. He has no idea what it is but it's pink and shiny and he decides he rather likes it. Two Chinese women are having their photograph taken in front of The Big One. They point to the top, their faces twisted in mock terror, the man with the camera laughing hysterically, imagining their mother's reaction when they take the picture home. A small girl looks up at her father who is eating clams from a cone. He gives her one. She cradles the strange sea creature in her hands but temptation overwhelms her and after testing its flavour on her tongue, she gobbles it down. To sit in the food court is to experience several time periods in countries all over the world. The plastic tables and chairs are surrounded by a seafood counter designed in the shape of a medieval barn, a creperie pasted all over with Eiffel Towers, Tudor like houses selling sweets and ice creams, a red telephone box from which hot jacket potatoes are served, a bar in the shape of an erupting volcano and a curry house flanked by palm trees. It's unreal but it's happening. 
Back outside in the summer heat an elderly couple sit on a bench beside large figurines of Alice and the Mad Hatter. They smile as the train of the Big Dipper flashes past them, remembering another time when people, dressed in their best, paid a shilling to ride it for the first time. As another trainload of people reaches the summit of The Big One, a thousand pairs of eyes squint through the sunshine to watch its descent. Riders in their jeans and t-shirts throw up their arms in exalted triumph. From time to time, a lone voice amongst the crowds is heard to say, "What's all the fuss about? It's only a roller coaster ride with a big drop - it's not really magical". But everyone else ignores him. They want to believe in the magic, determined to ride happily ever after.
From creating the ruinous case of The use of trusts in these cases was not in itself harmful but, coupled with the interference of the Chancery, trusts were turned into creatures of despair and destitution. This is in direct contrast with the statement by William Holdsworth, who described trusts as a magnificent and ingenious concept: "...nobody had any idea of the magnitude of the juristic feat which English lawyers had accomplished when they invented [the trust] concept. No one had any idea of the large space which that concept has filled, and still fills, in our public and private law." The fact that the trust concept was lauded as a pinnacle of Chancery's success was sorely diminished by harsh realities in practice. Lobban, Michael, 'Preparing for Fusion: Reforming the Nineteenth-Century Court of Chancery, Part I', 5. (online) URL: (Accessed 6 April 2007). Cross, Geoffrey and Hand, G.J., Alexander, Gregory S., 'The Transformation of Trusts as a Legal Category, 1800-1914', Haskett, Timothy S., 'The Medieval English Court of Chancery', Watt, Gary, Op.cit. note.8, p.136. Op.cit. note 6, p.43. Moffat, Graham, Ibid. p.885. Holdsworth, William, Goodhart, A.L. and Hanbury, H.G., London: Oxford University Press, 1946, p.179. Focusing on the trust concept, we find that the law did not necessarily correlate with the practice. "The creation of the trust and its effective management was both personally and legally a matter of conscience." This statement applies to the role of the trustee. But on closer inspection this phrase, specifically the words "legally a matter of conscience", should surely apply to the Court of Chancery. The Lord Chancellor and Master of the Rolls were the persons who adjudged cases and produced final judgments according to equity (encompassing conscience). However, the law does not correspond to the reality of litigation in the Chancery, where it can be said that the conscience of the Lord Chancellor and Master of the Rolls was not bound.
As aforementioned, by the seventeenth century rules and principles of equity were beginning to harden. Thus the reality of practice was that judgments were being 'churned out' according to established precedents and not according to conscience. This is reflected in", "label": 1 }, { "main_document": "rest of the world would definitely benefit from if they were interested in protecting the environment. Liefferink, D. & Andersen, M. (1998) \"Strategies of the \"Green\" Member States in EU Environmental Policy-Making\", Over the past two decades the EU has spent a large amount of its budget on helping member states to improve their environment, for example after the southern enlargements in 1981 and 1986, which were \"accompanied by a substantial increase and refocusing of the Communities' financial support schemes for environmental investments.\" In 1993 the Cohesion Fund was launched and half of its budget was to be spent on environmental projects whilst more recently the LIFE programme (the Financial Instrument for the Environment) funds research into environmental issues and schemes to protect the natural habitat or the environment. According to the Union's official website LIFE has a budget of 317 million Euros for 2005-6 and even countries on the border with the EU are eligible for grants such as Romania and Turkey. The money which has been put into environmental protection in Europe will be of benefit globally not only as a role model for other organizations or countries but as a means of improving environmentally friendly technology and resources which can be sold to the rest of the world. Lenschow, \"Environmental Policy\" p.314. 
Within the Union the European Parliament is particularly green and the Environment Committee has been described as "exceptional among the major 'legislative' EP committees in continuing to produce a considerable flow of 'own-initiative' reports." This is because the Committee has been extremely pro-active instead of waiting simply to react to formal Commission proposals. The Parliament has also been particularly receptive to campaigning by Non-Governmental Organisations (NGOs) on the environment. For example, in 1971 the Parliament was petitioned by animal rights groups concerning the hunting of migratory birds and immediately asked the Commission to take up the issue, finally pressurising them into a directive in 1976. The success of green parties at European level is surely an indicator to the rest of the world that it is a popular choice to pursue environmentally friendly strategies alongside economic growth. Judge, D. (2002) "Predestined to save the Earth: The Environment Committee of the European Parliament" in Andrew Jordan (ed.) London: Earthscan Publications Ltd. p.121. Judge "Predestined to save the Earth" p.122. The success of NGO lobbying at the EU compared with other countries will also be of benefit worldwide as it will encourage NGOs to lobby elsewhere, especially if they have support from sections of the Union. The largest NGOs such as the European Environmental Bureau (EEB), Friends of the Earth and the World-Wide Fund for Nature actually receive funding from the Commission to help with their regular operations. In fact the EEB was set up in 1974 with help from the Commission and is now a federation of 132 organizations from 24 European countries. However, although there has been an expansion in environmental group activity at the European level, "there are still relatively few environmental groups that maintain a permanent representative in Brussels." Lenschow, "Environmental Policy" p.
318.", "label": 1 }, { "main_document": "The aim of this essay is to explore the roles these stratifications play within the women's movement, and would argue that it is possible for women's movement to work with these intersections, if identified, to fight women's subordination and promote women's empowerment. To do this, the essay will discuss the global women's movement, and explore women's movements in a national context. It will then discuss the differences within the women's movements, applying global arguments to Uganda. The essay will examine the key concepts of ethnicity and class, and analyse their roles within women's movements using case studies from Uganda. Furthermore, it will attempt to identify the factors that led to successful mobilization and draw lessons from relevant strategies adopted. Finally, it will recommend strategies for future engagements leading to successful mobilization towards women's empowerment. To answer the question effectively, it is important to define the key concepts, which are, class, ethnicity, women's movement and social change. Although these concepts have been widely used by scholars, their definition is always contested and problematic. This essay will however not address these issues in detail, due to constraints on number of words. On the other hand Marxist socialist see class divisions as grounded in the different relations of groups to the means of production, which provides a group's class determination (Anthias and Yuval-Davis, 1992). For Philip(1992) classifying women, proved a difficult task, as 'middle' and 'working' came into focus, because of roles allotted to women; gender roles therefore play a significant role in determining class allocation for women. Thompson's definition seems relevant for discussions on women's movement as it affirms that class defines people of the same social and economic level, this is in line with Maggie Humm's definition (1989/1995:39). 
Group membership here is natural, although there may be other ways of joining; for instance, ethnicity can be constructed from outside the group through material conditions and social representation by other groups. Fenton (1999:10), however, argues that ethnicity is a social process that involves moving boundaries and identities, which people, collectively or individually, draw around themselves in their social lives. Central to this process is the production and reproduction of culture. Both definitions are relevant to discussions in this essay as group membership can be natural or socially constructed. This definition implies that the women's movement is multifaceted. Basu (1995:1) argues that while women's movements share certain commonalities, they differ along many dimensions. There have always been arguments among feminist scholars about what really constitutes a women's movement and what criteria to use to judge whether a certain movement is successful or not. This essay will, however, not address the various debates, but will acknowledge that women's movements exist in most countries mentioned in the essay, though their forms and issues may differ depending on the context within which they operate. Hernes (2001:14,223), however, defines social change under two sets of assumptions: that change can consist of either the actors changing (cognitive or attitudinal change) or the structures changing (change of states, their relationships or distributional aspects). Both definitions are relevant to the essay as they both
Hesiod Hestia is the goddess of the hearth and stayed a virgin throughout her life. She rejected the advances of Apollo and Poseidon and she vowed to live as a virgin. She often gained precedence at banquets for being the eldest child of Cronus and Rhea. One hymn details, "For without you, there are no banquets for mortals where one does not offer honey-sweet wine as a libation to Hestia." Hestia's choice of remaining a virgin was reflected in her priestesses, who had to follow her 'role' As Hestia was the patron of the hearth and home, it could be argued that she did not want to 'pollute' or 'corrupt' the idea of a respectable Greek home Grant, M. and Hazel, J. Lastly, Artemis was the goddess of hunting and archery. "In Classical Greek literature she was characterised by a deliberately chosen and forcibly maintained virginity." She was known to punish those who violated this state, and her followers were to be virgins too, just like Hestia's. This punishment can be seen when Actaeon sees her bathing, and as she was worried he might boast at seeing her naked, she turns him into a stag. Artemis is mentioned many times in Euripides's He chooses to follow her by remaining a virgin himself, which later causes his destruction. One argument for this punishment of Hippolytus is that he remained a virgin and did not complete his 'transition process'. It has been seen that mortals had to go through 'stages' in their life, such as reaching the age of marriage, marriage itself, and also having children. At these stages, gifts were often given at temples to Artemis, who would oversee the transition to the next stage. As mentioned above, one of these transition stages would be the union of marriage with a partner and the consummation. It can be seen in the play Grant and Hazel, (2002) pg. 50 Another example of individuals failing to complete their transition is the myth of the fifty daughters of Danaus.
They were given in marriage to 50 other men and each given a dagger with which to kill her husband on their wedding night. Forty-nine of the daughters did kill their husbands, and as a result of not completing their 'marriage' and 'consummation' stages they were \"punished in the underworld by eternally having to fill water jars, through which the water leaked away.\" Both these myths show the consequences of not following the transition process which they are supposed to follow. As mentioned above, these myths would have acted as 'lessons' for everyday society. Additionally, the myth of Hippolytus demonstrates the reasons why Gods and Goddesses should not be", "label": 1 }, { "main_document": "hand, it is worth noting that new perspectives emphasizing the strategic utility of the East Asian bloc are emerging in Washington. The protagonists argue that East Asian regionalism is not necessarily inimical to U.S. national interests, although a united East Asian bloc might be a more formidable competitor to the U.S. economy (Bergsten 2001, 20). Moreover, it will prompt new liberalization of trade and, notably, lay a stepping stone in the process of market-friendly economic reform in China. They also argue that an East Asian bloc without the U.S. could not undermine the existing U.S.-Japan and U.S.-South Korea alliances, and they further propose the establishment of a collective security regime in the region (Curtis 206-208). The six-party talks on North Korean nuclear weapons provide a good model for regional security dialogue in Northeast Asia. The proponents argue that this collective security dialogue enhances the transparency of China's foreign policy. Other signals of a changing attitude are also being detected. Above all, the tremendous aversion to a collective security dialogue to ease military tension in Northeast Asia seems to be lessening. 
The Bush administration, for example, has continued the six-party talks, hosted by China and including North Korea, for over two years since August 2003 in order to seek a breakthrough in the stalled nuclear conflict in East Asia. This could be regarded as a political change in terms of the Bush Administration's East Asia initiative. It is premature to anticipate how the United States will respond in the event of the emergence of an 'Asian Great Wall' in East Asia, similar to the imagined 'Fortress Europe' of the past. There can be little doubt, however, that the impetus to hasten the institutionalization of economic cooperation is shifting from the United States to the East Asian peoples. Thirdly, the economic disparity within ASEAN+3 is still serious, and it impedes rapid progress in economic cooperation. Notably, the gap in economic volume and performance between Southeast and Northeast Asia is a decisive element hindering progress in economic cooperation. According to the functionalist perspective, the likelihood of successful economic integration increases when the economic gap among the member countries is small. The Latin American Free Trade Area, for example, could not be realized, due primarily to the presence of three different-level groups in economic volume among its potential members (Mattli 1999). As we examined above, the individual amounts of currency that would be respectively provided under the currency swap agreements also demonstrate the serious imbalances between the Southeast and Northeast Asian economies. This extreme asymmetry can provoke debate over 'who is eligible to wield the leading power to promote economic cooperation'. In addition, economic cooperation among the ASEAN members alone might give rise to only a limited trade-creation effect from the elimination of tariffs, since economies of scale are less likely to be realized than in cooperation with China. Needless to say, ASEAN's most urgent aim is to obtain preferential market access to 'huge' China. 
One of the EU case studies shows the strong correlation between the order in which countries eventually joined the unified EU and the dependency of their trade on the bigger market (Baldwin 1995, 35). In practice,", "label": 0 }, { "main_document": "was probably integrated within the wider religious sphere. Given that the centrality of the \"wanax\" beyond the ritual domain is questioned, various elites may have been important in shaping the state form with their decisions. If the existence of heterarchical structures can be ascertained by future research, then surely this will challenge the prevailing image of a strictly hierarchical organisational form. The emergent state form in mainland Greece was not dictated by systemic variables but was underpinned by issues and relations of power and authority as well as by strategies for legitimating the If various elites managed the state as a political form through decision-making, then the emergence of the early state and the form(s) it assumed were not accidental. Therefore, although it is possible to challenge the extent of centralised control and envisage diverse and interrelated power bases affecting state organisation, individual leaders may have been instrumental in defining both when the early state emerged and the kinds of form it took.", "label": 0 }, { "main_document": "produce over its milk quota as we will be paying for the production but may not get a return on the product. Pressure groups will affect what demand there is for certain products; for example, if a large-scale campaign were launched by a vegetarian group trying to promote vegetarianism, then the demand for meat may fall, affecting farmers' prices and possibly meaning they have to change what is produced. Also, the status of the business in the local and national community can be affected by pressure groups: if a farm is exposed as mistreating animals or using chemicals on crops that could endanger human health, consumers are highly unlikely to buy from this source. 
If trade barriers are established by either UK or international governments, then they may affect the export of UK produce to other countries and therefore lower the demand for the product, and this will also lead to a decline in the price of the product. An example of this is the EU ban on UK beef exports, which was established after the BSE crisis in the UK. This saw the UK beef exports fall from Imports into a country will affect the price of home-produced goods; if the import of goods is too high, then producers in the U.K. will end up with no market for their products. The purchasing power of the consumers will affect what the business can produce. For example, if consumers have low purchasing power, then they will not be able to afford higher-cost goods such as organic or specialist food. The exchange rate between countries will affect the price of the product and also the amount of importing and exporting that occurs. For example, businesses in Europe are more likely to trade with those who use the Euro, as there is no need to exchange currency. Most economic factors are shaped by government or EU policies. The beliefs of the consumers will affect what is produced and how the goods are produced: for example, Halal meat, which is consumed by Muslims and is slaughtered in a specific way. Farmers' markets and farm shops have become more popular as people become more aware of the impacts of buying food that has travelled hundreds of miles and are starting to buy more local products. There is also an increase in novelty shopping; this means that more people want something different from their shopping experience, and so visiting the farm shop or farmers' market makes the shopping experience different and more enjoyable. The development of new machines and techniques will help to make the farm more efficient, such as more precision farming that will increase farm yields. This will probably increase the profit of the business. 
The use of G.M. crops will most likely help to increase yields and cut input costs. This will help the business as it will increase the profit that it is making. A stakeholder is an individual or group that is either within or outside the", "label": 1 }, { "main_document": "The legal positivist tradition can be traced back to the work of Jeremy Bentham, who in turn drew upon the writings of Thomas Hobbes and Jean Bodin. It traditionally opposes the theory set out by natural lawyers, who hold that law and morality are inseparable. Positivism can be characterized by two major tenets: firstly, that law and morality are two distinct entities, and secondly, that the validity of any legal system is ultimately determined by reference to certain social facts. This essay will focus on the positivist approach as developed by Austin through his 'Command Theory' of law, and the weaknesses associated with this theory. I will go on to consider Hart's reaction to Austin, and the way in which Hart remedies the positivist approach in order to account for the problems he believed were inherent in his predecessors' theories. Austin developed his command theory of law off the back of his contemporary Jeremy Bentham. Where Bentham had merely outlined the theory, Austin approached the subject in a much more analytical manner, aiming to capture the essential nature of the law as a science and thereby eliminating all the common uses of the word 'law' which are not relevant to jurisprudence. Austin's starting point in determining the exact province of jurisprudence was to exclude all the meanings of law which were not translatable into a command. For Austin, a command can be broken into three components; Austin's classification of a command helps to illustrate the exact nature of the type of law he is seeking to define. In 'The Province of Jurisprudence Determined', Austin uses his conception of a command in order to distinguish between different kinds of law. 
His first step is to differentiate between what he calls 'laws properly so called' and 'laws not properly so called'. Laws not properly so called are those which are often referred to as laws but are, in terms of legal science, inappropriate. Examples of these laws are what Austin terms 'Laws by analogy', which include things like the rules of etiquette and international law, and 'Laws by metaphor', which cover things like the laws of nature. Although both of these 'laws' can be construed as commands, they lack a variety of qualities necessary in order to fall into the category of proper law; These laws 'not properly so called' do not lie within the province of legal science. Having drawn this distinction, Austin goes on to break down his classification of laws 'properly so called'. Within this category Austin places two sub-groups: laws set by humans and those set by God. Although the laws set by God are, properly speaking, laws, they do not, according to Austin, fit into the province of jurisprudence. Laws set by humans are of two types: laws 'strictly so called' and laws 'not strictly so called'. Laws 'strictly so called' \"consists of (a) laws set by men as political superiors to political inferiors; and (b) laws set by men as private individuals in pursuance of legal rights\". Laws 'not strictly so called' are those which do not derive", "label": 1 }, { "main_document": "ERP vendors, from PeopleSoft's Enterprise Performance Management Workbench to modules by Oracle and SAP (Norris et al. 2000, p.176). For example, the marketing staff will form a more feasible view of e-business after sharing web-enhancement concepts with the webmaster; in turn, the webmaster will build up a good knowledge of marketing techniques, on time and within budget, through good communication. The intercommunication and mutual comprehension facilitated by e-business will encourage a harmonious company culture. 
Moreover, the ERP system supports the management of accounting and financial data. Although it still requires a person to analyse the problems and make the final decisions, computer software with a backup database makes the statistical processes quicker and more accurate. Furthermore, in addition to paper document management, effective information management is required by the ERP system to control the volume of information and to capture competent evidence in the first place for organisational success. The personalised web site, governed by a privacy policy, requires a sign-on, which is valuable for giving individuals information and special authority to access separate information levels. Financial report forms can be updated instantly after accountants finish their input work. Within ERP, the resulting financial effect of a physical transaction will be shown to provide decision support for corporate leadership, produce strategic measures of performance, and support strategic cost management (Norris et al. 2000, p.31). Furthermore, e-business can help to audit the internal information flow - internal control - in a management information system (MIS). This is also part of knowledge management in business, which facilitates decision-making by making an effort to transfer various intangible data and information into tangible knowledge that can be shared by others. The management of information is broader than maintaining centralized control of the file registry and records management; multimedia and electronic objects (digital storage such as CDs, floppy disks and fixed disks) that combine image and audio have also evolved to reduce the requirement for physical forms. The risk of financial records being lost or disordered can also be avoided if the database is duly copied. 
A good information management system, with efficient management of both the quality and quantity of the information linking the organisation's routine processes - logistics, manufacturing, sales and marketing - will ensure interoperability independently of location and time (Deserno & Kynaston, 2005). Office workplace redesign around shared service centers (SSCs) also takes account of a range of alternative techniques, such as a standardized allocation of space - the universal footprint - which helps to configure a flexible setting to meet the alternate needs of individual workstyles. The tools (software such as CSCW (Computer Supported Collaborative Work) and teleconferencing) that support cooperation over a distance in virtual offices - which refer to non-territorial offices, shared assigned spaces, and alternative touchdown areas - afford a new way to work without the limits of location and time, provide an opportunity for home working, and flatten management hierarchies while supporting empowerment and customer interaction (Jackson & Suomi 2002, pp.14-15). E-technologies and the Internet facilitate an efficient and effective re-engineered process for business. This work focuses on outlining those cost-and-risk-reduction and", "label": 0 }, { "main_document": "does not go the whole way in upholding this criticism because he then emphasizes that \"in considering the institution of divorce glib generalizations are somewhat dangerous.\" Thus this highlights his objectivity while discussing the sensitive issue of divorce through the avoidance of formulating overly normative questions. The theme that flows through this article is that divorce is indeed reprehensible if misinterpreted and abused by certain individuals. Divorce as revealed to Prophet Muhammad was not meant to create hardship. But through human nature, customs, cultures and traditions the various methods of divorce have been somehow moulded into a different 'creature'. 
This aspect is clearly highlighted in the text by Fyzee. The discussion and analysis of the dissolution of marriage is located in both the Sunni and Shia teachings; there is no strong emphasis towards a particular school of thought. Furthermore this is meant to create a generally balanced overview in relation to the topic of Islamic dissolution of marriage. It is also apparent that the discussion of divorce is placed within the South Asia region which covers India, Pakistan and Bangladesh. Although the Hanafi doctrine is predominant in this region Fyzee has not focused solely on this particular school of thought. This also illustrates the aforementioned point regarding the creation of a balanced overview and avoidance of a one-sided analysis. There are brief expositions of the dissolution of marriage in the English common law jurisdiction which includes case law. This suggests that Fyzee is aware that the South Asian community is inherently intertwined with the English legal system. Furthermore most cases concerning the relationship between dissolution of marriages and the legal provisions of the Dissolution of Muslim Marriages Act 1939 were adjudicated by English judges. Indeed there is no escaping the colonialist remains of the past. A great deal of the text is also devoted to the discussion of apostasy and its legal effects with respect to the dissolution of marriage. These aspects are examined in detail. Ergo this highlights Fyzee's recognition that individuals in India, Pakistan and Bangladesh can choose to embrace other religions such as Christianity and Hinduism. Thus the right to religion is not inhibited. The fact that this is illustrated demonstrates Fyzee's awareness and acute perception of the realities of life. Indeed the institution of divorce cannot be analysed within an outdated and static environment. 
This is supported by Fyzee's statement in relation to apostasy, \"that since the rules were formulated in Islamic jurisprudence, social conditions have changed so completely that a blind adherence to some of the rules, torn out of their proper context, would lead neither to justice nor to a fair appraisal of the system...\" In short, this article is concise and fairly digestible by individuals who possess some knowledge of Islamic divorce. The intended audience would therefore be individuals who are keen to explore this area of Islamic law further; individuals who wish to avoid unnecessary complexities regarding comprehension. Fyzee, Asaf A.A. (1974). 4th ed. New Delhi; Oxford: Oxford University Press, p.147. Ibid. p.184. This article is based upon an analysis of three different cases", "label": 1 }, { "main_document": "The program created should help a company to keep track of its computer equipment. The company wants to track the following information about each item: When the program starts, the first thing that will happen is the display of the menu. In the case of this program, the menu has eight different options. These are: Each option should be activated when the user inputs an entry in the menu. Also, if the user enters an invalid entry, the menu should loop and offer the user the opportunity to re-enter a value. The database should have a capacity of up to 15 products, and the program should not let the user enter a 16th item. If he tries to do so, an error message should tell the user that the limit was reached. Also, when adding an item, some checking of the information entered will need to take place. First, the program has to check that the ID entered is in the range 1000-9999. Secondly, it also needs to check that the product room entered is in the range 100-399. Finally, when entering a value for an item, the program should check that the value is higher than 0. When this option is chosen, the program should ask the user to enter the ID of the item to remove. 
The program will then search the records to find and remove the appropriate item. If the program cannot find it, or the database does not contain any items, the program should tell the user that the item doesn't exist or that there are no items to remove. Again, when this option is selected, the program will first ask for the ID of the item to find. Once the user has entered it, the program will search for the item in the records and display its information on the screen. However, if it cannot be found, or if the database does not contain any items, an error message should appear. The program should simply depreciate the values of all items stored in the database to 70% of what they were before. For this option, the program should first sort all the items by their ID numbers and then display them on the screen. If there are no items to sort, the program should just display an error message and go back to the menu. For this option, the program should first sort all the items by their room numbers and then display them on the screen. If there are no items to sort, the program should just display an error message and go back to the menu. If this option is chosen, the program should display on the screen all the items that have a value smaller than 40 pounds. If there are no items in the database, the program should just tell the user that there are no items available. Finally, when the user decides to quit the program, the program should simply end. 
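The behaviour specified above could be sketched as follows. This is a minimal illustration in Python only; the Item and Inventory names, and the use of a list rather than the assignment's fixed-size array of records, are my own assumptions, not part of the specification.

```python
# Illustrative sketch of the specified inventory operations (names assumed).
MAX_ITEMS = 15          # the database holds at most 15 products
DEPRECIATION = 0.70     # depreciation option: values drop to 70% of before

class Item:
    def __init__(self, item_id, room, value):
        # Range checks the specification requires when adding an item.
        if not 1000 <= item_id <= 9999:
            raise ValueError('ID must be in the range 1000-9999')
        if not 100 <= room <= 399:
            raise ValueError('room must be in the range 100-399')
        if value <= 0:
            raise ValueError('value must be higher than 0')
        self.item_id, self.room, self.value = item_id, room, value

class Inventory:
    def __init__(self):
        self.items = []     # stands in for the fixed-size array of records

    def add(self, item):
        if len(self.items) >= MAX_ITEMS:
            raise ValueError('limit of 15 items reached')
        self.items.append(item)

    def find(self, item_id):
        # Returns None when the item cannot be found.
        return next((i for i in self.items if i.item_id == item_id), None)

    def remove(self, item_id):
        # Returns False when the item does not exist or there is nothing to remove.
        item = self.find(item_id)
        if item is None:
            return False
        self.items.remove(item)
        return True

    def depreciate(self):
        for item in self.items:
            item.value *= DEPRECIATION

    def sorted_by_id(self):
        return sorted(self.items, key=lambda i: i.item_id)

    def sorted_by_room(self):
        return sorted(self.items, key=lambda i: i.room)

    def under_value(self, limit=40):
        # List all items worth less than 40 pounds.
        return [i for i in self.items if i.value < limit]
```

A full solution would wrap these operations in the eight-option menu loop, re-prompting on invalid entries and printing the error messages the specification describes.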
First of all, in order to store the records and meet the assignment specification, a record type and an array type will need to be", "label": 0 }, { "main_document": "TLR 483, CA Bendall v McWhirter (1952) 2 QB 466 Lord Denning, The Due Process of Law The Lords in National Provincial v Hastings Denning crossed the line from judge to law maker, stating that despite the absence of legal or equitable interests wives have a \"licence coupled with equity,\" Purists and colleagues objected, the LQR Judicial freedom is necessary to achieve justice, e.g. literal interpretation is notoriously unjust, yet for law to be a certain, democratic machine judges cannot be allowed free rein. The value was acknowledged, not the justification. Indeed, formalistic theories (e.g. Dworkin) stress the importance of judicial segregation to create a cohesive, democratic system. National Provincial Bank v Hastings Car Mart Ltd (1964) Ch 665 as per Lord Denning M.R. C.A. in National Provincial Bank v Hastings Car Mart Ltd (1964) as per Lord Denning M.R. C.A. in National Provincial Bank v Hastings Car Mart Ltd (1964) Law Quarterly Review July 1952 (68 LQR 379) by R.E. Megarry Law Quarterly Review July 1952 (68 LQR 379) by R.E. Megarry Yet Denning's actions were true hallmarks of justice; it was ingenious, providing practical solutions decided \"on principle.\" It was elastic, and like his other creations it was expanded until restrained by the ever-conscious Lords. Finally, it afforded women the dignity and respect they deserved, illustrating the inadequacy and gender inequality of legislative provisions. Perhaps Denning was justified, a Royal Commission Denning himself believes that it may have taken \"40 years\" From feminist perspectives the decision has immense value, acknowledging the male paradigm and deconstructing the patriarchy that subordinates women in the home. 
In a liberal society, where public and private spheres of life are divided, existing proprietary rights excluded women, assisting their suppression. Why should women surrender their rights, acknowledging male superiority? Denning assured that the law did not fail women. The ethos continued: Lady Summerskill introduced a Bill in 1964 reinstating provisions, becoming the Matrimonial Homes Act Denning was principally correct, providing temporary justice, provoking creation of suitable legislation. Royal Commission on Marriage and Divorce (Cmnd. 9678 (1956), p.180) Paragraph 664 Lord Denning, The Due Process of Law Matrimonial Homes Act 1967 Denning made judgements concerning proprietary estoppel: where an estate owner expressly or impliedly creates informal assurances concerning existing or future land rights, and then attempts an unconscientious withdrawal from the representation on which another has relied to their detriment. Inwards v Baker Denning applies estoppel as it is the only plausible solution: the law disfavours informal creation of rights (no deed/registration), gratuitous promises are unenforceable (not contractual), voluntarily rendered services are not compensable and consensual occupation is not adverse possession. Denning imposed equity: it is \"a licence coupled with equity\" This was just, and was expressly affirmed by the Privy Council. It was a pragmatic solution to a commonplace problem: families rarely create formal agreements; illustrating Denning as grounded and understanding: not the stereotypical judge. Inwards v Baker (1965) 2 QB 21, CA Lord Denning in Inwards v Baker (1965) 2 QB 21, CA Lord Denning in Inwards v Baker (1965) 2 QB 21, CA The issues were whether", "label": 1 }, { "main_document": "Duality in reasoning is the claim that humans have two distinct reasoning styles, a concept proposed by Sloman (1996), and Stanovich and West (2000). 
Both papers present a different perspective on reasoning, which in turn leads to a difference in the interaction between the systems presented; viewing the interaction differently influences the implications this produces for everyday life. Sloman (1996) proposes that humans have two systems of reasoning because ultimately it feels as if humans do; one reasoning system is driven by rules and the other by associations. Stanovich and West (2000) claim that human development has evolved a rule-based 'evolutionary' reasoning as well as a second rational reasoning style. Stanovich and West's (2000) paper does provide a clearer and more intuitive argument, yet both Sloman's (1996) and Stanovich and West's (2000) theories raise problems which mean that a complete acceptance of duality in reasoning cannot be obtained from these theories alone. A study conducted by Osherson and colleagues showed that people use different parts of the brain for inductive and deductive reasoning: inductive reasoning is when conclusions are generalised from one premise to another, and deductive reasoning is when certain conclusions are necessarily reached because particular premises have been processed (Eysenck and Keane, 1995). Induction and deduction are linked by Sloman (1996) to the two processes active in his theory of reasoning. The Osherson study tested participants with reasoning tasks whilst administering a PET scan, revealing that areas of the right hemisphere were involved with deductive reasoning, whilst areas of the left hemisphere and frontal cortex were active during inductive reasoning tasks (Smith et al., 2003). This study supports the claim that there are two systems of reasoning which carry out different functions, as separate parts of the brain appear to work for different types of reasoning tasks. 
Both Stanovich and West (2000) and Sloman (1996) present a 'system one' style of reasoning which is similar in both cases, with both papers presenting a large body of evidence to support their stance. This evidence suggests that a rule-based system is in place; for example, Sloman (1996) claims that humans have a system of rules which can be used to solve problems, such as modus ponens, so that when a person is presented with one stimulus an exact response will follow each time - if P then Q, for example. When participants had to judge the validity of arguments their Stanovich and West's (2000) 'system one' uses the same automatic rules as Sloman's, yet as well as evidence provided for the existence of a 'system one', they produce a reason for this system to be in place. This reason is that humans have evolved with these rules as a survival mechanism. This evolutionary explanation appears to give a good grounding for the need for these rules, because human survival would appear to benefit greatly from having a quick reasoning process in place, which prevented danger and so aided the continuation of the genes. However, the main problem Stanovich and West (2000) face with their proposal of an evolutionary theory is the problem all
As every test should be immediately linked to a teaching context, the FCE reading test addresses students at the intermediate level. In particular, in Greek society, where people start learning English from the age of 9, the FCE examinations constitute the first certificate that Greek students hold. English is learned mainly for general academic knowledge, for job qualifications, for travelling and for communicating with speakers of other languages. In this possible context, in which I will be teaching in the future, English is taught 6 hours a week at a private language school with the help of a textbook, FCE past papers and a blackboard. The class consists of 10 Greek mixed-ability learners, and the main aim is to teach them how to communicate adequately in all 4 skills outside class. More specific objectives concern the teaching of language items, functions and skills according to their level. All the research done in testing highlights its importance and immediate need in language teaching. Tests are taken to assess learners' language abilities (Bachman, 1990: 2), identify their strong and weak points and evaluate their overall progress (Bachman, 1990: 3). Thus teachers can reflect on their teaching and make any changes if required. However, just as teachers need to base their lessons on a plan, so test developers need to construct their tests following certain criteria, the most important being validity, the degree to which a test measures what it claims to measure (Popham, 1991: 55) or, as the American Psychological Association (1985) supports: \"the extent to which inferences we make on the basis of test scores are Many researchers have attempted to evaluate tests by constructing various frameworks, since these depict closely what we aim to test (Weir, 1993: 20). Therefore, this paper provides evidence for the three aspects of validity covered by the FCE exam: theory-based, context and scoring validity, following Weir's (2005) complete and up-to-date framework. 
The discussion starts with the presentation of the tables of the three validity aspects and continues with a detailed analysis of them. To begin with, all tests should be designed with learners and their specific traits in mind. So the test taker table is linked to the theory-based validity one, because these characteristics affect the way candidates process the test tasks (Weir, 2005: 51). Test taker characteristics fall into three categories: physical, psychological and experiential. The physical characteristics concern short-term ailments such as toothache, or cold,", "label": 0 }, { "main_document": "tame the dark natives and jungle. The symbolic association of white, like the white people, is corrupted. Conventionally connoted to purity, ivory ironically corrupts men by enticing them to build wealth instead of civilization. Kurtz, like ivory, becomes polluted; \"'ivory' rang in the air...you would think they were praying to it. A taint.. blew through it all, like a whiff from some corpse.\" Whiteness is further manipulated by being associated with death, as Kurtz in his skeletal state is an 'animated image of death carved out of ivory.' The image of white bones evokes notions of fossils, thus implying that humanity faces moral extinction if ethical frameworks continue to be sacrificed to avarice. Conrad again reveals an admirable ability to capture mood and character, yet remains dependent on narrative to do so. Brooks' argument that the narrative operates with 'constant references to the inadequacy of the inherited orders of meaning,' (Brooks, P. 'An unreadable report: Conrad's Heart of Darkness' in See 'I met a white man.. I saw..white cuffs, a light alpaca jacket, snowy trousers, a clear necktie, and varnished boots.' (Conrad, Op.Cit., p. 25) Ibid., p.92 In 'The Game of Chess,' Eliot too successfully demonstrates that the creation of mood and complex character need not be at the expense of telling an interesting story. 
The reader becomes drawn into the fascinating world created through the characters presented. Ornate, decorative language generates an artificiality that exposes the essential emptiness of modern life. The woman is 'troubled, confused and drowned the sense in odours,' Characters in the poem often appear imprisoned in their minds, doomed to continual misunderstanding in being unable to use language effectively. The omission of inverted commas in the assumed man's responses and the absent question mark in 'Speak to me. Why do you never speak,' Accustomed to the woman's habitual questioning, he sees no point in replying. His flippant indifference reveals an unsettling lack of communication between the sexes. Much is deliberately and literally unsaid, making suggestibility as powerful here as it is in the novel. Character seems to replace the need not only for narrative but also, on occasion, for mood. Even when seemingly united, characters remain alienated. Alternate rhyming couplets provide a surface impression of harmony between the typist and clerk, but as the rhyme's mechanical predictability becomes apparent, an impression of monotonous apathy is created. 'Hardly aware of her departed lover', Eliot, T.S., The Waste Land and Other Poems. (London: Faber and Faber, 1999.) p.26, l.88-9 Ibid., p.27, l.112 Ibid., p.32, l.250 Unlike Conrad, Eliot severs the conventional unity between plot and character by depicting fragmentary snatches of individuals that demonstrate how narrative is no longer a necessary component of character creation. Characters are mainly nameless and faceless, as unsure of themselves as we are of them. Differing concepts of identity were emerging during the period, which may be why writers resorted to new and innovative modes of characterization. No central voice dominates narration; continual changes in pronoun and tense leave the reader doubting each character's gender, nationality and perspective. 
Heteroglossic polyvocality and dislocated vocalization ensure that mood, like
"label": 1 }, { "main_document": "passage, 'I feel', but is now operating on the level of 'I am thought'. The utterance towards the plural subject is the deepest form of self-expression known to human understanding, from ancient times through to our current post-Freud knowledge of the self. The effect that it has on the reader is one of vicarious insight into the way symbolic 'secondary' imagination provides thoughts and language to the 'primary' imagination by working out 'a voice adjusted to the self's intimacy' (Cassagniere, 1994, 238). It is also clear to see how Wordsworth acquired his 'Pantheistic' views on spirituality, a view that assumes that God is the power that runs through all living things in the universe. We see this because as natural images permit his imagination to run wild, he feels a 'power'. This power is not described as a specific deity, as Blake thought, but merely a presence that is felt in his transcendent state of mind. The assumption that there is a force in nature and the mind of man is reached by the association 'of ideas in a state of excitement' (Wordsworth, 1798, p21). This excitement in Wordsworth is comparable to similar notions of spiritual energy in Blake, in that nature manifests 'the same transcendental energy as informs the human mind' (Day, 1996, 45), whilst at the same time providing an 'objective, material barrier' (45) for the subject to recognise the transcendence without the risk of being completely overwhelmed by his internal processes. There is a sense of doubt in Wordsworth's belief though, as it is only a feeling that he experiences, and not a certainty. This, however, is typical of Wordsworth's style, a poet who, even in his beliefs, refuses to reduce his thoughts and ideas to fixities and densities. 
Although of contrasting backgrounds and different beliefs, the parallels running through the works of the poets discussed have been clearly observed through the use of natural images; revolutionary thoughts and ideas about rigid hierarchical structures such as the Church of England and the government; expressions of the self and of individual perceptions of the world. Through nature, poets have also expressed a belief in the unifying and redemptive power that the self-awareness of creative thought brings, and its subsequent feelings of oneness with a divine or spiritual level of existence, and not merely conventional notions of deity. Even though Blake believed spiritual transcendence needed to be achieved through concentrated thought and Wordsworth was of the opinion that transcendence was a natural reaction to the external environment, both poets have excelled far beyond simple nature writing. They, and many other poets of the period, have created works that show that by using natural images a primary and 'elementary' language can be constructed to inform us of the nature of humanity. Price (1964, 104) can provide a suitable summary to encapsulate the notion of what both Blake and Wordsworth attempted to convey to their readers: 'The strength of Experience comes from its ability to sustain or recover the faith of innocence'. In this way, the newly informed reader is prepared to meet the
"label": 1 }, { "main_document": "The Hockney Management Co. originally began with the establishment of The Hockney Suites London in the early 1950s. Not until the third serviced apartment, The Hockney Suites Edinburgh, opened was its Management Company set up, in order to maximise the satisfaction of business travelers, retain customer loyalty and offer a higher quality of client services. 
As a successful business of luxury serviced apartments, The Hockney Suites London, sitting in the most happening zone of central London, has been considered one of the most remarkable pioneers in the UK hospitality industry over the past six decades. True to its mission statement, "Enjoying Flexibility and Comfort at Your Luxurious Home Away From Home", the Hockney Management Co. provides luxuriously furnished, extended-stay accommodation and the same standard of client services as any four or five star hotel (see Appendix 1); further, customers' needs for privacy and a homely feeling away from home are ensured during their stays with the Hockney Suites. Over the past half century, The Hockney Management Co. has made its brand reputable within the industry and among customers, and has also profitably operated 7 other luxury serviced apartments in Great Britain, along with 5 units in Europe, 3 units in the United States, and 4 units in the rest of the world (see Appendix 1). Most recently, The Hockney Management Co. has noticed the gap between the rapid growth of tourism in India and the relatively slower development of its infrastructure, such as accommodation (World Tourism Organization, 2004). Therefore, the emerging market for serviced apartments in India, one of the world's biggest economies in Asia, has drawn The Hockney Management Co.'s attention and led the company to look for opportunities to compete and stand out in the hospitality industry in India. Business environment analysis will assist the global business to (Lee & Carter 2005): reduce the risk of failure, identify the key environmental variables (see Appendix 2), aid strategy planning and decision-making, make a more effective choice of market and marketing mix, as well as assess the risk of conducting business between the home country and the host country. 
Marketers hence need to recognize the similarities and differences of two different countries that greatly influence the marketing of the business (Bowie & Buttle, 2004). This section endeavors to analyze and compare the business environments of India and the UK. India, with one of the greatest purchasing powers in the world (The Report: VisitBritian's performance in 2005-6, 2006), is also well-known for its mass population and speedy development, second only to China (British High Commission, 2006). Throughout the last decade, India's economic situation and national development have prospered, making it appealing to many foreign investors (BBC News, 2006); as a matter of fact, foreign direct investors now enjoy 100% automatic entry to India (See Appendix 2), which made India the world's largest recipient of Foreign Direct Investment by receiving $3.75 billion in the financial year of 2004-2005 (UK Trade & Investment, 2006). Similarly, the UK has had strong economic growth (World Development Indicators database, 2006) and an FDI-friendly climate (refer to Appendix 2) in recent years. Generally,
"label": 0 }, { "main_document": "Buttle, 2004), the unique characteristics ("Drivers", quoted from Prager, 2006) of The Hockney Suites in India will be: First of all, individual and practical working space is included in each unit, which can flexibly be set up as a temporary meeting room; additionally, a parking space is guaranteed, since the central location of The Hockney Suites will increase customers' need for parking (Special Attributes strategy). Secondly, focusing on offering an exclusive image and a high quality of tangible and intangible elements at a slightly cheaper price, compared with other branded hotel residences (Price/Quality strategy). Thirdly, due to primarily targeting middle to senior managers/business travelers, 24-hour in-room Internet Access and a telephone help desk are provided (Customer Benefit strategy). 
Furthermore, British luxury interior decoration is one of the principles of The Hockney Suites product, aiming to evoke an upper-class or celebrity lifestyle (User strategy). "Marketing mix is a set of marketing tools that the firm uses to pursue its marketing objectives in the target market" (Kotler, 1991). Whether to adapt its marketing mix elements (such as its product, packaging and so forth) for the foreign target market, or to standardize and sell essentially identical elements in favor of significant cost savings, is a fundamental task for international marketing (Calantone Interestingly, however, both strategies (adaptation and standardization) are normally used at the same time (Vrontis & Papasolomou, 2005). Instead of choosing either one, both typologies of standardization and adaptation will be carried out by The Hockney Management Co. in order to optimize the effectiveness of its marketing strategies (White & Griffith, 1997). Illustrated by Usunier (1996), there are three layers of product attributes that assist marketers to make better decisions between standardization and adaptation: the physical attributes, the service attributes, and the symbolic attributes. The Hockney Management Co. will standardize its physical establishment of serviced apartment by representing luxurious British interior d The Hockney Management Co. names the new international organization "The Hockney Suites Greater Delhi" for being seated in Gurgaon, the most rapidly emerging area of Greater Delhi in the National Capital Region (NCR), where the first Master Plan for Delhi was notified in 1962 (New Delhi-India. Net), including the metropolitan area of Delhi and several neighboring satellite towns. Gurgaon is located in an area with an impressive emergence of IT companies and international firms. It also enjoys an excellent location and convenient transportation, being 30km from Delhi and 10km from Indira Gandhi International Airport. 
Currently, only two 5-star branded hotels (Trident Hilton Gurgaon, The Bristol Hotel) and two luxury serviced apartments (Enkay Condominiums, NK's Exclusive) are dominating the serviced accommodation market. Yet, 4 newly-completed malls (DLF Mega Mall, Gold Souk, Regent Plaza, and Galaxy Mall) and 15 new malls under construction will enrich the city life in the near future. All of this is significant evidence of the magnitude of purchasing power, the growing population and the cheaper cost of real estate development in the area (India Retail Review, 2005). Therefore, sited near New Railway Road and Khandse Road and facing the great northern meadows, The Hockney Suites Greater Delhi
"label": 0 }, { "main_document": "It is extensively used for the understanding of maternal effect genes. These are genes from the mother which are inherited by the embryos and which are essential for the correct and complete development of the embryo. In The anterior-posterior axis is set up through maternal and paternal contributions. The position of the sperm after entry into the egg specifies the posterior pole. Maternal material consists of PAR proteins which are localized at different poles through the action of centrosomes brought by the sperm. These proteins are required for maintenance of the asymmetry, along with the CDC-42 G-protein. An array of other proteins such as RIC-8, LET-99, LIN-5 and others are also involved in a-p axis formation. Once the egg is fertilized it undergoes a series of divisions that give rise to somatic blastomeres and germline blastomeres. Specification of the germline involves the maternal genes Finally, the dorsal-ventral formation relies on the somatic blastomeres, in particular ABa, ABp and EMS. ABa generates the anterior pharyngeal cells through cell-cell interaction with EMS, also involving the receptor GLP-1 and an unknown ligand. 
EMS forms the posterior pharyngeal cells autonomously and with the help of the Finally ABp forms the rectal and valve cells through interaction between its receptor GLP-1 as well, and the ligand APX-1 signalled from P All these genes have homologies with other species including humans, and although most of them form a complex interactive network, their analysis is vital for understanding human development. The success of molecular biology is due to the existence of model systems. These are defined as extremely simple organisms such as Drosophila, Arabidopsis and The latter was introduced as a model system by Dr. Sydney Brenner in 1974 (Gilbert, 2003). This free-living, non-parasitic soil nematode has all the characteristics of the ideal model organism: it is small, easy to manipulate, cheap, has a short life cycle and, most importantly, its genome (which is approximately 100 Mbp, that is to say 20 times bigger than that of Figure 1 shows 3 nematode worms viewed under the microscope. Moreover, this multicellular organism is a hermaphrodite. It is also transparent, making the tracking of cells and following of cell lineages (which is most of the time invariant) very easy using reporter genes such as GFP (green fluorescent protein) (Gilbert, 2003). Its genome also makes it easy to mutate: various mutagens, such as EMS, can be used to define mutants. Mutants can also be obtained using reverse genetics, particularly RNA interference (RNAi), which consists of transcribing RNA in vitro from a specific target gene and injecting it into the gonad of wild-type worms. It then prevents the production of the protein product of the target gene (Rose and Kemphues, 1998). The latter has been extensively used to study maternally expressed genes. Maternal effect genes are genes required for the development of the embryo. Many of these appear to have a role in the development of the worm and researchers are currently analysing their exact role. 
Understanding such an analysis requires the discussion of how the initial asymmetries are
"label": 0 }, { "main_document": "was found: the volumetric flow rate, and hence the speed of flow, where Results from the manometer readings are given in Table 3. These are compiled into a chart against the length of the tapping from the pipe end face on Graph 2. From the equation of the graph we can determine (say), Equation (15) can be re-arranged to give Therefore, and Substituting into (12) reveals Again, to find Reynolds number using (9), with These two values are plotted on the Moody chart, Chart 1. The velocity is reduced when going from laminar to turbulent flow due to additional wall shear stress, Flow speed is also reduced because of the effects of surface roughness. Whether this surface roughness is dependent on Reynolds number or not can now be deduced from the results. Earlier we found that Where Using values found in the results and equation (13), we determine that hence Therefore the Friction Factor is dependent only on the Reynolds number, not the surface roughness. Hence the point for friction factor against Reynolds number lies along the (straight) laminar flow line of the Moody chart, as has been shown. We can determine the value of the additional shear stresses by comparing the values obtained by (11) and (12) using the shear stress obtained here. Looking at (11) and (12) Therefore it can be seen that the additional shear stresses add an extra 0.0025 onto the friction factor value. Both of these values have been added to the Moody chart (Chart 1) to show how excluding the additional forces As with figure 4, a greater pressure gradient is required to produce the flow rate in turbulent flow. 
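The laminar-line result above, where the friction factor depends only on the Reynolds number, can be sketched in a few lines of Python. This is a generic illustration of the standard laminar relation f = 64/Re; the Reynolds number used is assumed for illustration and is not a value measured in the experiment.

```python
# Minimal sketch of the laminar line of the Moody chart: for fully
# developed laminar pipe flow the Darcy friction factor depends only
# on Reynolds number, f = 64/Re (surface roughness plays no part).
def laminar_friction_factor(re: float) -> float:
    """Darcy friction factor for fully developed laminar pipe flow."""
    if re >= 2300:
        raise ValueError("relation only holds for laminar flow (Re < ~2300)")
    return 64.0 / re

# Illustrative Reynolds number (assumed, not the experiment's value):
print(laminar_friction_factor(1600.0))  # 0.04
```

Plotted on logarithmic axes this relation gives the straight laminar line of the Moody chart; an additional shear-stress contribution such as the 0.0025 found above would simply shift the plotted point upward off that line.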
Hence the pressure gradient increases as the flow turns turbulent, and the speed decreases, as expected, a pattern contrary to what would have happened if the two flows had been laminar (see equation (8)), where flow speed is proportional to pressure gradient. Both flows have a velocity profile development region. The development region for the laminar flow experiment can be calculated now using Therefore Hence roughly 1.5m should have been needed for a parabolic velocity profile to develop. However, looking at Graph 1, we see that the flow has fully developed, i.e. there are no more adverse pressure gradients, from roughly 0.5m down the pipe. Therefore this result is flawed. In the case of turbulent flow, the velocity profile took a lot longer to develop, hence the effects cannot be seen as clearly. For a more accurate estimation of flow in this pipe, one might want to look at the pipe roughness value in turbulent flow by taking the Reynolds number much higher. For different (high) values of Reynolds number, these can be compared on the Moody chart to find the roughness value. These results, however, could only be obtained by changing the set-up of the equipment to give different Reynolds numbers. This would involve changing the geometry of the pipe work or, better, changing the pump speed to increase A way to get the turbulent results to correspond on the Moody
"label": 1 }, { "main_document": "of epiphany of understanding, for to laugh at the situation is to understand it, and naturally desire it to change. To hear As Spencer notes, 'the humour embedded in Similarly, the explicit violence is Compared to the 'strategic' bombing of German towns it is a negligible atrocity, compared to the cultural and emotional deprivation of most of our children its consequences are insignificant.' 
[Author's Note: p.6] Those who denounce Bond works hard to implicate his audience in the baby's killing because of his belief that 'society has enforced upon humans strictures that can only lead to violence. It then calls that violence an inherent part of human nature and uses it as an excuse to add still more strictures.' [Scharine: p.66] Nor is Particularly discomforting is Scene Four, during which ' [p.36] Nothing is more effective at conveying The baby Yet despite his obvious failure as both a husband and lover, Apparent throughout Saved is Len's willingness to help, in numerous circumstances his offers of support provoke outright rejection, yet this fails to deter him: 'he carries Mary's groceries and helps pay the rent; he offers Pam his tickets to the Crystal Palace to help her win back Fred; he accepts responsibility for Pam's baby, refusing to desert the situation even when begged to do so; he brings Fred cigarettes in jail and offers him his own room when he gets out.' [Spencer: p.32-3] Symbolically, at the end of the play Len is still obstinately trying to help out: he mends the chair. This final scene shows people 'at their worst and most hopeless,' yet despite this, Len 'does not turn away from them. I cannot imagine an optimism more tenacious, disciplined or honest than this.' [Author's Note: p.5] Len's character is practical, like the optimism ascribed to the play. He is also a good person, though his goodness is flawed and human: 'his faults are partly brought home to him by his ambivalence at the death of the baby and his morbid fascination with it afterwards.' 
[Author's Note: p.5] Whilst these shortcomings are significant, they are shared with the audience, and Len is not alone in his apathy towards the baby, nor in his inaction when he could have saved its life: the audience is also culpable for both failings: 'the audience witnesses with him - as it witnesses a thousand worse atrocities daily - and, like Len, it does nothing', and though 'the audience of the play may be disturbed by the view, but stays put nonetheless.' [Spencer: p.35] Len's simple admission that 'I saw the lot... I didn't know what to do. Well, I should a stopped yer' [p.76], coupled with his uncomfortable harassment of Fred in Scene Ten, where he repeatedly asks 'Wass it feel like when yer killed it?' [p.103] are unpleasant reflections of the voyeuristic role the audience adopt as inactive observers of the stoning and its aftermath. To watch the events of Yet Len somehow manages to transcend other characters in his refusal to accept the triumph of evil and despair. In Len,
"label": 1 }, { "main_document": "local business environment must be acquired, which implies incurring extra costs. However, this method is going to require a high amount of capital, and the risk burden is entirely on Allisgood plc. Adapted from Morrison; 2002: 35 Porter et al; 2003: 45 Adapted from Rappaport et al; 1997: 6 Finally, there is a case for a joint venture, an interesting option for Allisgood plc. Its distinguishing feature compared with the other options is that a partner business is needed to set up a joint venture. This will prove, to an extent, a difficult consideration, as the nature of the partner business must be carefully taken into account. Because Allisgood plc is a conglomerate, it does not really matter if the venture partner is from a rather different kind of business sector, which in some cases can lead to further integration of the companies. 
The opportunity cost is, not surprisingly, management problems arising from differing views and conflicting company cultures, and the short-term nature of joint ventures. Morrison agrees, Morrison; 2002: 143 Ibid In sum, it is not clear which of these methods is optimal for Allisgood plc. To further evaluate these options, considerations about the location and financing of the foreign operation must be made. Only then can we try to formulate an accurate evaluation of the prospects involved with each of these strategy options. Hence, the issue of location is discussed next. Naturally, in foreign expansion plans the central consideration is to decide the location of expansion. There is a variety of countries with their specific competitive advantages available to a business. Interestingly, it appears that we can distinguish the feasibility of these areas and rank them in an order of preference. However, the analysis below is unlikely to be exhaustive, as there are considerable differences between the countries in the given areas. Adapted from Caves; 1996: 34 To begin with, expansion inside the European Union (EU), but not Eastern Europe, was suggested. A great advantage of this area is that an identical legal framework is in force due to standardised European Community (EC) legislation. The fact that the location is geographically close means transportation costs are likely to be lower than in other suggested locations. Additionally, due to the geographical proximity the location is relatively convenient in terms of monitoring of the process; the management will hence have better information about the state of the operation. Clearly, there are downsides to this location, the main ones being expensive and highly regulated labour. Morrison; 2002: 43 Ibid Ibid: 123 Ibid The second proposed option was expansion to Eastern European countries inside the EU. 
As these countries are part of the supranational organisation, the same advantages arising from identical legislation can be derived here. The labour productivity in the given Eastern European country has to be considered, however, as it may be low. Limitations to the advantages of Eastern Europe can be identified; namely, labour is available elsewhere at lower cost. Additionally, Eastern Europe is struggling with corruption problems. In spite of these limitations the EU countries of Eastern Europe seem to
"label": 0 }, { "main_document": "was actually some degree of private property ownership in Inca civilisation. As well as the Inca himself, local lords (usually former nation leaders before the Inca takeover) and the Inca elites (kin of the emperor) were assigned lands, again worked on their behalf by ayllu populations. D'Altroy points out, for example, that the most famous of Incaic archaeological sites, including Machu Picchu, were actually estates owned by Inca elites. D'Altroy, Baudin cited in Means, Linked to the idea of communal ownership under the Incas is the notion of economic cooperation as opposed to competition, another feature encouraging the socialistic definition of the Inca state. Reciprocity in the form of mutual obligations was a key feature in society, both among those of the same kinship networks and between those of massively different statuses. In the ayllu setting, economic cooperation focused upon a communal spirit of mutual aid. If a member was called away to carry out work on behalf of the government, his share of land was cultivated by others in his absence. Thus, in theory at least, no person or family sharing the same status had an advantage over another, although those who claim that a large family was an innate advantage in terms of the time taken to carry out work dispute this notion. 
Such cooperation between producers suggests that the Inca was indeed a socialistic empire, as it is the competition between workers in the market system that most socialists deplore in other social and economic forms. Mason, J. Alden, Mutual obligation extended outside the ayllu to incorporate the work carried out by commoners on land designated to the Sun and the Inca. It would appear that such work was carried out equally on the basis of reciprocity. The theory, it seems, was that by cultivating state and religious lands, the common peasant ensured not only the stability of their empire, through feeding its elite, but also that of their own community, through the giving of tribute. For example, the ayllu carried out work on land belonging to the Inca in exchange for the right to use their own communal land unhindered. Wachtel, Nathan, This links into the idea of both the economy and society being run 'for the common good'. It is with this aspect that many historians have taken issue in relation to the idea of the Inca state being a socialist one. A second key feature of the Inca economy is important here: that of redistribution. Crops cultivated on these portions of the land were sent to storehouses from which they fed the Inca state and religious elites. However, part of the mutual obligation in repayment for the work done on state and religious land was, it has been suggested, that ayllus had access to these stores at times of want. As Baudin suggests, "The granaries represented, in a socialistic state, the capital which individuals build up by their thrift" Garcilaso and Valera have both contributed to the impression that the main use of state stores was to alleviate hunger during periods of frost or famine.
"label": 1 }, { "main_document": "A case summary for the % spent in organic food, grouped by the yearly household income, was then performed and income was banded into seven groups. 
A case summary was also performed for the age of respondents and then this variable was banded into seven groups. After creating the three new banded variables of household size, income and age, case summaries were done for each one of them. The data set consists of vegetarian and non-vegetarian consumers. Two layers were created, showing that the mean % spent in organic food by vegetarians and by non-vegetarians was different. However, in order to check whether the difference between the two sample means is large enough to reject the null hypothesis (Ho: vegetarians and non-vegetarians spend the same % in organic food), a t-test was performed in Excel. Regression analysis was run in Excel to check whether there is any linear relationship between income and age. Multivariate regression was also performed in SPSS between the % spent in organic food and household size, number of children, weekly TV watching (hours), weekly radio listening (hours), surf the web, yearly household income, age of respondent, and vegetarians versus non-vegetarians, to check for any relationship between the % spent in organic food and any of those variables. Finally, Waitrose's consumers were filtered out and case summaries were provided for the following variables: yearly household income, number of children, household size, age of respondent, weekly radio listening (hours), weekly TV watching (hours), surf the web, vegetarians versus non-vegetarians, in order to check the profile of Waitrose's organic consumers. In addition to that, principal components analysis (PCA) was conducted in order to define groups of consumers, and the selection of the principal components was based on the method of eigenvalues. At the end, cluster analysis was performed. Organic consumers purchase most of their organic food from Waitrose, with the amount of money spent accounting for 6.91%. Tesco comes second with 6.47% spent in organic food, followed by Asda (5.98%) and Safeway (4.53%). 
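The vegetarian versus non-vegetarian comparison described above can be sketched with a Welch t statistic. The original analysis was run in Excel; the spend percentages below are invented purely for illustration and are not the survey's data.

```python
# Hypothetical sketch of the two-sample t-test described in the text.
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Invented % spent on organic food (NOT the study's figures):
veg = [8.1, 7.4, 9.0, 6.5, 7.8]
nonveg = [4.2, 5.0, 3.8, 4.6, 5.3]

t = welch_t(veg, nonveg)
print(t > 2.0)  # True: a large |t| argues against the null of equal means
```

In practice the statistic would be compared against the t distribution with the Welch-approximated degrees of freedom (Excel's T.TEST does this internally) to obtain a p-value.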
Consumers do not spend any money in Kwiksave for buying organic food (Fig.1). Households bigger in size spend more money in organic food (5.82%) compared to smaller households (4.08%) (Fig.2). The least amount spent in organic food comes from families without children (4.04%), while families with one or two children spend more money in organic food (7.11% and 7.67% respectively) (Fig.3). The yearly household income ranges from 10,759 to 75,170. Those who spend money in organic food have an income of 30,783 and above, while those whose income is less than 30,783 do not spend anything in organic food. The greatest amount spent in organic food (10.33%) comes from households with a yearly income between 52,977 and 60,374 (Fig.4), and the highest income is found in households with two children (44,025) (Fig.5). The age of the consumers in the current study varies between 9 and 84 years old. However, it was found that consumers younger than the age of 29 do not spend anything in organic food. Those who purchase organic food are at the age of 29 and above. In
"label": 0 }, { "main_document": "started to grow. Not surprisingly, falling income, increasing unemployment and high interest rates resulted in a cut in consumer demand. Blanchard; 2003: 4 Blanchard; 2003: 6 No doubt the sharp decline of the IT sector was the main reason for the financial crisis, but the virtuous circle had turned into a vicious circle of falling investment figures and a fall in confidence. The US economy was stagnating in early 2001, but there were expectations of a recovery in the year 2002. Subsequently, the oil prices increased and have remained high ever since, but this issue was not evident in 2001. Hence, the lack of confidence and the terrorist attacks caused more problems in the financial markets. In conclusion, the economic year 2001 in the US was an unsuccessful one. 
Greenspan agrees with this view (Greenspan; 2000) Meyer; 2001 Greenspan; 2001 Greenspan; 2001 The US government used extensive fiscal and monetary policy in order to tackle the problems of the crisis. The prevailing theme on the fiscal side was tax cuts, which were strongly advocated in public by the Bush regime. Blanchard; 2003: 6 The aim of the fiscal policy used was straightforward: consumer spending was to be encouraged. As consumers have more disposable income, they will spend more. This is because the marginal propensity to consume is higher than the marginal propensity to save, except in some rather rare circumstances. Consumer demand has a large importance in the US. It is around 60% of the GDP. Consumer demand recovered during 2002 in line with the use of fiscal policy. Federal Reserve; 2004 Gramlich; 2002 In contrast, on the monetary side, the Federal Reserve Board decreased the interest rate known as the "Federal Funds Rate" (FFR) in order to encourage spending and investment. But this aim was arguably not easy to achieve. Throughout the latter half of the 1990s, the FFR had remained flat, but from the end of the year 2000 to the end of 2001 the FFR was dropped from 6% to 1.5%. There were, though, limitations: investment growth was still negative. In the light of this evidence, it is clear that this was a turning point. Blanchard; 2003: 6 Ibid Ibid NIESR; 2003 (Jan) The year 2003 was a year of cautious confidence, as the US economy was performing better. Even so, it was argued that the US economy " The recovery was slow, yet it was a positive sign. Nevertheless, the net trade deficit was building up during 2003. NIESR; 2003 (Oct) Ibid In 2004 the US economy was still growing. The oil prices were further increasing, and their impact on the world economy, and on the US in particular, was increasingly important. Unemployment in the US is still at sustainable levels, but there is a risk of growing unemployment figures. 
The increased exports to the European Union and Asia, especially to China are likely to reduce this deficit. But if it is considered whether the economic situation in the US has improved since the crisis of 2001, the answer is yes. at 4.2% (NIESR; 2004) NIESR; 2004 Ibid Gramlich;", "label": 0 }, { "main_document": "filter is especially useful since the random errors involved by relatively high frequency contents. The frequency response of the Butterworth filter is maximally flat (has no ripples) in the passband, and rolls off towards zero in the stopband. A standard Butterworth Filter's pass band attentuation is When viewed on a logarithmic Bode Plot, the response slopes off linearly towards negative infinity. For a first-order filter, the response rolls off at For a second-order Butterworth filter, the response decreases at -40 dB per decade, a third-order at -60 dB, and so on. So, in a word, Butterworth filters are characterized by a magnitude response that is maximally flat in the passband and monotonic overall. From the diagram result of step and impulse response , I found that the magnitude of filtered with ADC/DAC is a bit lower than the only filtered one. The filtered frequency response of signal + noise, though much better than the non-filtered ones, it still exists some quantization noise. So it is not exactly as good as the original one. ADC/DAC introduces and magnifies rounding errors for large filters. As required, the filter is 3 The rounding or quantization error, increases as the arithmetic precision of the filter decrease. If filter is unstable or contains quantization noise, there are several options: I learned lots of useful practical knowledge from this DSP lab. The 3 2-hours-session made me proficient in using Matlab dealing with DSP problem. When confronting with some difficulties in analyzing the results, I referred to a few books and understand more about digital signal processing from related information. 
The whole lab work is like a small project, from design to realization. I can see how industry works step by step through it. Finally, I have proven that the design met the 3 Many thanks to my tutor Dr.", "label": 0 }, { "main_document": "system which is not considered. Air friction - there is air resistance because of the air present around the experiment area. Values of mass - the values of the masses considered in the calculations could be slightly different to the actual value. For the graphs to be sketched the Then two graphs (a maximum timings graph and a minimum timings graph) of mass The graphs of small disc; The graphs of medium disc; The graphs of Large disc; Then the gradients ( Then by substituting these values in the main equation the inertia of the discs are found; For the small disc (min); For the small disc (max); For the medium disc (min); For the medium disc (max); For the large disc (min); For the large disc (max); (All the calculations done and the answers obtained carry certain degree of uncertainty due to the rounding off of numerical data and due to the errors mentioned in the '4.0 Results and possible errors' section.) The maximum values and minimum values of the inertia for each disc matches with each other to a certain extent showing that the experiment was successful, even though some values obtained are slightly deviated. All the graphs plotted have a similar linear shape with most points fitting close to the best fit lines with a few deviated points.", "label": 0 }, { "main_document": "indicate data that are skewed right of its middle point, mean. In this case the skewness measure is positive, so the data is skewed right of the mean, meaning that the median is left from the mean. The frequency distribution for military expenditure as a percentage of GDP in 2002 and the histogram produced: It can be seen that for most of the countries the percentage share of military expenditure is between 1.6 and 3.2 per cent. 
In this variable there is quite a large number with no data, 10 in total. This will affect the following calculations. In this case the mean is 2.419 (3DP), meaning that the average military expenditure of the 50 countries is 2.419% (3 Decimal Places). The middle value, median, is 1.9. Range is 11.2; the largest percentage of GDP in military expenditure is 11.2%, shown by Kuwait in this case. The standard deviation, The skewness of military expenditure is also positive, so the data is skewed to the right. In this case however, it has to emphasised, that there is some error in these calculations, due to the fact, that 10 countries did not have available data for military expenditure. When plotting one variable against the other, the following result is: The trend in this diagram seems to be, that the larger the GDP per capita, the smaller the military expenditure, as a percentage of GDP. It seems reasonable to conclude, that for different countries, at least some military expenditure is required, and as wealth increases, the opportunity cost of maintaining an army gets smaller, i.e. the percentage share gets smaller. However, much care has to be taken, this is a large simplification, and does not explain everything, that could affect the need or willingness to spend money on military. All the above sources seem to be reliable sources of information. The Washington Times is a known newspaper, and should contain relevant information, at least from the American point of view. BBC web page includes information from European perspective, but not from the Euro zone itself. Voice of America and Bloomberg sites are both American, pages for people interested in current issues in financial sector. All the pages, apart from BBC, were last updated on December 1 The BBC website was last updated on 10 The US trade deficit exceeded 50bn USD for fourth month in a row in September. 
A strong euro has raised economic fears in Europe, where exporters want the dollar to stay strong to boost their international competitiveness. As a result of this the US customers will not buy as many European goods, and the US money paid abroad will get smaller, i.e. the US trade deficit gets smaller. Also weaker Dollar will make US goods cheaper for foreign markets, and as a result, US firms will collect more revenues. Raising the cost to us of foreign goods and lowering the cost of American goods to foreigners, should reduce imports and increase exports Ibid. However, it may be that the weakening Dollar in comparison to Euro is not enough to reduce the", "label": 0 }, { "main_document": "the circle behind another would have more of its colour showing through. This transparency caused the colours to mix also simulating the effects on inks on a paper. A preliminary experiment was carried out to gain experience in performing the testing procedure. With the knowledge received from the preliminary experiment it would be possible to improve the procedure and provide a more fair and suitable final experiment. To display an image the program had to be loaded from a command line with the image file name as an argument. This made it unfeasible to change the image being displayed while the program was running, especially with different images requiring different window sizes. Therefore to change an image being displayed the current program had to be closed and a new command had to be typed into the command line with a new argument. This could take a lot of time, especially when displaying a string of images - a test participant may be unwilling to wait for such periods in between testing. Therefore a method was needed for quickly changing the image being displayed. The workstation for performing the tests was a Windows platform pc. Therefore an MS-DOS batch file was created which lined up the commands that needed to be called. 
All the necessary program files were compressed to a jar file which made organisation and execution of the program easier. As soon as one image was no longer needed, the program was closed and the next image would automatically be displayed. This process greatly reduced the changeover time between images. A similar batch process could also be created using UNIX shell script to allow the same test to be run on a workstation running UNIX. The test procedure asked participants to compare an image with different shape pixels, at varying distances, and decide which displaying of the image they preferred. The images were displayed without any sub-sampling. To ensure that the height and width of the images were still correct the dimensions of the hexagon were altered. A regular hexagon is wider than the regular square and so displaying every sample with this type of hexagon would cause a distortion in the image. Therefore the width of the hexagon was adjusted to only 3.0 units width instead of 3.464 units. This allowed images consisting of hexagons on a hexagonal sampling grid to be displayed in the same window as images consisting of squares on a square sampling grid. The four images chosen for testing were images of natural scenes. The images displayed were a face, a portion of sky, a house and a field. The dimensions of the images were also relatively small in size so that the images would load up more quickly. The images were also displayed at twice their original size to allow the graphics card to display more accurate hexagons. The participants were asked to view one of the images. The pixels would be swapped between hexagons on a hexagonal sampling grid and squares on a square sampling grid. The participants were not told which pixel shape", "label": 1 }, { "main_document": "an image are varied in a particular manner such that a useful image is formed. A useful image can be defined as an image from which some information can be taken. 
If an image is too blurred due to noise, or too low a resolution then it is not possible to make out the picture, hence there is no information in the picture and so the image is useless. As resolution is gradually reduced information is lost from the image. This may be achieved by subsampling an image. A fixed number of samples are taken from the pixel data - the fewer samples taken the lower the resolution. It is at these low resolutions where pixel shape can make a difference. At high resolutions the human eye can not discern individual pixels. So whether the image was displayed using square, diamond, trapezium or indeed any shape pixel would not make much difference. The image would still look the same; the same amount of information could be obtained. However because the resolution is so high there is much more pixel data to be stored and hence the storage size of the image increases. Image compression can be applied but this has led to artefacts appearing in images and a lowering of the quality of the image. At low resolutions using a different pixel shape could result in more information being gained from an image. Figure 1A represents a diagonal line at high resolution. We have a very smooth line. Figure 1B represents a diagonal line at very low resolution. Here we can see the individual pixels. Figure 1C represents a diagonal line at low resolution. Here were are just about able to see the individual pixels In figure 1A we see a diagonal line at a high resolution. Here we cannot make out the individual pixels of the line. Therefore the line looks like a smooth diagonal line. However in figure 1B the resolution is a lot lower. This image is not pleasing to the human eye because we can see the individual pixels. The diagonal line can be seen as a series of squares in a stepping fashion. If the intention was to produce a diagonal line this image would be unsatisfactory. In figure 1C it is only just possible to see the pixels. 
The image is bordering between acceptable and unacceptable. Using a different shaped pixel in this situation could result in a smoother diagonal line, meaning a more useful image would be obtained at the same low resolution. This also applies to edges of objects. The boundary between two objects can be simply viewed as a line. An object may have edges at a variety of angles. Using the example of World Wide Web based images it would be useful to produce an image that had a low resolution, allowing it to load more quickly, but still contain useful information. Printers use circular droplets of ink or circular light spots to print images, or indeed anything else. At low resolutions printers sometimes print incorrectly leaving large areas with unwanted ink.", "label": 1 }, { "main_document": "perspective, resulting in polycentrism, geocentrism and ethnocentrism (Roper , 1997). Harris, Brewster & Sparrow (2003), further introduce the regiocentric approach. Important to note that no company depicts any centricity profile in entirety, but rather reflects its tendencies. Ethnocentrism suggests parent country staff and management policies are superior to those of the host country (management mindset) (Roper This reflects in significant expatriates in host-country management, moving across the organisation maintaining home-country managerial practices (Edwards, 2004) and host-country people management decisions made at company headquarters-power centralization (Harris 2003). Though ethnocentricity maximizes the use of limited resources, lowers costs and increases control, it best suits home-country targets, overlooks host-country cultural differences and consequently fails to consider host-country staff interests (Harris 1997). The need for UK nationals in Saint Fusion Korea is unnecessary as labour market reforms and high levels of education have significantly increased the availability of highly skilled staff-appendix 2-(Edwards, 2004). 
Despite reform, many employment law differences remain (appendix 4) which added to sustained cultural differences pose significant challenges to an ethnocentric approach. As examples, a preference for home-country nationals clashes with Korea's nationalism and homogeneity; low hierarchical levels in the UK oppose significant PD in Korea (Hofstede, 2001), and would confuse host-country nationals as to their relationship with superiors; young UK managers in a context where father-figure superiors are sought after, representing status and power, would not be appreciated by staff; UK nationals would have difficulty understanding the collectivist nature of the workforce (Hofstede & Hofstede, 2005). As a mindset, polycentrism holds that host-country nationals are the best to show loyalty and understanding towards their country's institutions and culture (Roper Polycentrism is most common in adaptive organizations that give subsidiaries decision-making independence (Harris 1997) and tailor people management policies (Doherty, 1998). Though convenient in countries where legal, political and significant cultural differences persist it does challenge a company's ability to offer a consistent product, especially if it is a multinational group (Usinier & Lee, 2005). Regiocentrism, links with a regional business strategy and organizational structure, is broader than polycentrism and from an HRM perspective, allows host country nationals to move around the area, gain experience, be promoted and make strategic decisions affecting their region (Harris 1997). It applies to companies with regional target markets and country cluster strategies, allowing for similar practices to be brought together (Edwards, 2004). Geocentrism is an ideal (Doherty, 1998) HR orientation, with global companies identifying advantages in diverse cultural practices and adjusting them to different contexts. 
It prioritizes skills over nationalities and executives support both equal parent/host country systems and the application of cultural/institutional/organizational resemblances at a global level (Harris et al, 1997; Roper et al, 1997). Knowing the human resource strategy should be integrated in the business orientation (Storey, 1992; Riley & Jones, 1992), and Saint Fusion Korea is adopting, from a marketing perspective, mainly regiocentric tendencies (appendix 1), its people management strategy should also have a regiocentric orientation (appendix 6). Korean nationals are recommended at operative level due to different environments, employment law, bureaucratic work permits which do not validate importing all employees (appendix 6), but", "label": 0 }, { "main_document": "oscillating system with the low pass filter is shown in the appendix. It is possible to calculate the resonant frequency of the system using this graph. If we count the number of peaks to the first second we can see that there are about 21. This gives us an estimate for the resonant frequency of 21 Hz. A more accurate method would be to use the exact time coordinates of two peaks or troughs and then to apply the expression: Selected troughs have time coordinates of 198 ms and 246 ms. Putting these values into the above expression will give us a more accurate value for the resonant frequency: We can also use the graph to work out how long it takes for the system to stabilise after being disturbed. To do this we can use both the graph of the oscillating system with the low pass filter in place and the graph showing the same scenario without the low pass filter (also shown in the appendix). The graph without the low pass filter shows up the noise more than the one with the filter but they are both useful for determining the length of time it takes for the system to settle. In both cases the line appears to stabilise just short of 2 seconds. 
The graph showing output voltage against time when no load is applied and no filter is in place (shown in the appendix) highlights the level of noise the system has if we don't have the low pass filter. We will now consider whether the ambient temperature will affect the measured results. The error or change in length of a dimension can be expressed using the following equation: where The thermal expansion coefficient for steel is 11 This means that temperature would have had little effect on our measured results since The Wheatstone Deflection Bridge system was tested using two methods. In both cases a non-inverting amplifier was constructed in order to amplify the output signal front the system. The amplifier increased the signal form the order of millivolts to volts. The first test involving the application of force by placing washers on the end of the beam gave us the information we needed to calculate the sensitivity of the system. The sensitivity was found to be 10.954 VN Of the parameters that affect the output of the system, the thickness of the cantilever beam was found to be the most influential. The maximum load that the bridge could support was calculated to be 3 N. This was not possible to get close to however because the equipment was not capable of accommodating the required number of washers. The resulting graph of output voltage against force was therefore completely linear. The second test involving the computer software allowed us to calculate the resonant frequency of the system if it is disturbed and left to settle. It was estimated to be 20.83 Hz. The system took just under 2 seconds to settle after being disturbed. It was also found that the output voltage settled nearer to zero much", "label": 1 }, { "main_document": "to west, which comprises a whole set of rules governing peoples' life and worldview. 
Weber argues, \"the preservation of this magic garden, however, was one of the tendencies intimate to Confucian ethics.\" (1964: 227) \"Men remain close to the gods through magic, ritual and the observance of taboos\". (Schluchter, 1979: 22) The supreme and eternal behavior and life order is existent in this world, represented by wise people with perfect morality in the ancient times. Every one could become a \"gentleman\", the supreme personality in Confucianism, through learning the rituals, reading the classical texts, and following up the behaviors of the ancient wise men, which will be automatically accompanied by the good fortune in this world and a good name after death. There is no equivalent of the conception of \"salvation\" in Confucianism. \"Confucianism rationalism leads away from the problem of salvation and aims at the 'maintenance of the magical garden'\". (Schluchter, 1979: 25) The tension between nature and deity is absent in Confucian ethic. (Weber, 1964) Ancient Greek polytheism could be seen as a unique interlude between an unfragmented cosmos in the primitive time and Judeo-Christianity of monotheism. Every Greek God is in charge of certain spheres of human life and accordingly has a specific function. \"Just as Hellenic man sacrificed on this occasion to Aphrodite and on another to Apollo, and above all as everybody sacrificed to the gods of his city.\"(Weber, 1989: 23) \"With the systematization of the idea of divinity and transformation of the subordinate magical realm of spirits in the direction of anthropomorphism, specialization, and hierarchy\" (Schluchter, 1979: 24), human distanced himself by attributing the functions upon the Gods and then became confronted with the conceptions outside them. In the case of Ancient Greek, science is not necessarily opposed to religion. On the contrary, religion is the first attempt of rationalization of the cosmos, upon which science is developed. 
The monotheism in the Judeo-Christian tradition further rationalizes the world and expands the gap between \"a postulated divine perfection and the imperfection of the 'world'... It tears apart the magic unity of event and meaning, produces a tension-ridden dualism, and thus also destroys the 'primeval immediacy of man's relation to the world.'\" (Schluchter, 1979: 27-8) But the tension is compromised by the institutions of churches in the daily life of ordinary people. Some \"religious most highly qualified individuals\", who represent the whole mankind, leave the world and retreat into special religious communities. Those non-religious people therefore can still carry on their worldly business. The rationalization of religion culminates finally in Puritanism, especially in Calvinism. Not only does Puritanism \"condemn all magical means of salvation as superstition and blasphemy\" (Schluchter, 1979: 40), it also destroys all intermediary agencies between God and man. Everyone is standing directly to the God alone. This world is everywhere imperfect, worthless and inferior. In Weber's description, However, Calvinism does not lead to world rejection, but on the contrary to world mastery. Weber explains that Calvinists believe the chance of salvation is related to their success in this world. In order to prove the particularism", "label": 0 }, { "main_document": "dividends, which Easterbrook (1984) regards as an incentive mechanism to provide incentive effects of debt without risk shifting or overhang. Further, substantial payouts to shareholders reduce managers powers, and making them to incur the monitoring of the capital market when firms has to acquire new capital, where monitoring of managers is available at lower cost. The agency costs theory uses dividends policy to better align the interests of shareholders and corporate managers. 
It is an alternative program by which a company buys back its own shares from the market place, reducing the number of outstanding shares while allowing shareholders to receive cash payments as a capital gain. Stock repurchase are generally made in the following ways: Open market: it usually involves gradual programs over a period of time. Tender offer: corporations specify a number, a tender price and a period of time during the offer is in effect. Block of shares in on a negotiated basis. Stock repurchase was permitted in the UK only since 1981 and in Japan since 1995. But since 2000, over three-quarters of the companies in S&P 500 have bought back stocks. As I mentioned earlier in dividend policy, in absence of taxes and transaction cost, shareholders are indifferent between dividends and capital gains. Now we assume firms decided to implement the policy of translating its retained earnings in the form of share repurchases, in which case all gains are taxed at the capital gains rate. Farrar and Selwyn (1967) model represents the after-tax income of an investor as: Where Tgi = the capital gains tax rate The implication of the formula is explicit: As long as the tax rate on capital gains is less than dividends, shareholders should prefer share repurchases to dividends payments due to the tax benefits, and the magnitude of tax advantage depends on the costs bases marginal tax rates for the individuals. Moreover, a non-tendering shareholder can defer the tax liability until the shares are actually sold. In this case, the deferred capital gain equals to the time value of money. Despite the theoretical analysis, surveys of executive suggest taxes are the second-order concern in setting the payout policy, and the tax-shield hypothesis has little empirical supports. 
The primary hypothesis is that dividends represent the ongoing long term commitment of delivering its permanent cash flows, while repurchases are more pro-cyclical thus preserves financial flexibility relative to dividends. The market punishes a company if it fails to achieve historical patterns of dividends annual percentage increases. Given the market expectations, stock repurchases is considered as a sensible option for firms who have a high likelihood of not being sustainable. Grullon and Michaely (2002) report that young firms have a greater propensity in the recent years to choose financial repurchases over dividends. It is consistent with Guay and Harford (2000) results, who demonstrate the aggregate share repurchases are more volatile and vary considerably with business cycle compared to dividends payments. From shareholders point of view, with a dividend, the discretion lies with the firm. Investors will receive the dividends and must deal with the tax consequences.", "label": 0 }, { "main_document": "process is running the monitor's procedure, no other process can run any procedure of this monitor until the first one finishes and leaves the monitor. It makes the use of shared variables easier for programmers - they do not need to assure the exclusive use of shared resources, it is enough to place them inside the monitor which will automatically take care for that. In case when a process wants to run monitor's procedure which is currently used by another process, its request is put on queue. Monitors also have condition variables, on which a process can wait if conditions are not right for it to continue executing in the monitor. Some other process can then get in the monitor and perhaps change the state of the monitor. If conditions are now right, that process can signal a waiting process, moving the latter to the ready queue to get back into the monitor when it becomes free (Hartley 1997). 
We can perform the following operations on condition variables: Even though wait(c) and signal(c) look similar to P(s) and V(s) there are crucial differences between them (Andrews 1991): So far I have described solutions which can be used only in a shared memory environment. Nowadays, when network architectures are becoming more and more popular, a new solution needed to be introduced. This is where the message passing evolved. Sometimes it might be also convenient that processes executing on the shared memory architecture use the message passing as communication and synchronisation mechanism instead of using the shared variables. It happens e.g. when processes are executing on behalf of different users (Andrews 1991:339). The primitives operations for message passing mechanism are: In case of message passing mechanism processes share channels. To initiate communication one process sends a message to a channel; another process acquires the message by receiving it from the channel (Andrews 1991:339). There are three message passing models: In Execution of each As the channels are supposed to be unbounded the sender is not delayed. To receive the message the process executes If the channel is empty the receiver is delayed. Using the asynchronous message passing model has three major disadvantages: In In this model both The communication is performed synchronously that means that the sender blocks until the receiver does a In In case the channel is full the sender is delayed. If the channel is empty the receiver is delayed. While designing new software, the main goal for the software engineer is to ensure that his product is reliable and predictable. Therefore he must make sure that all design flaws are eliminated. Unfortunately the more complex the system is, the more difficult it is to ensure its correct performance. 
There are three main approaches in handling the design flaws: Several researches have shown that the fault avoidance approach is the best one to use for high integrity systems. Using the computers for monitoring and controlling safety critical system is a convenient solution on one hand, but very dangerous on the other. Making a computer system responsible for controlling planes, medical devices", "label": 0 }, { "main_document": "or striatum of the blueberry supplementation rats, but not the controls. This finding suggests that polyphenolic compounds are able to cross the blood brain barrier and localize in various brain regions important for learning and memory and may deliver their antioxidant and signaling modifying capabilities centrally (Andres-Lacueva et al, 2005). The effects of the food matrix on the bioavailability of anthocyanins have not been examined in much detail. There may be some important differences exist between animals and humans and so clinical studies will be of great help in investigating the health effect of anthocyanins. Much more research can be carried out on structure-activity relationships of anthocyanins. Anthocyanins were previously characterized by paper chromatography (PC). With the improvement of analytical technology, high-performance liquid chromatography (HPLC) coupled with UV-visible (UV-vis) detection has been the standard method for the analysis of anthocyanins (Zhang et al., 2004). Recently, HPLC coupled with mass spectrometry (MS) has become an efficient tool for providing useful structural information regarding molecular weight and fragmentation (Wang et al., 2003). However, the most widespread technique for separation and qualification of anthocyanins is HPLC with UV detection at wavelengths around 520 nm as anthocyanins have a very specific absorption of UV and visible light in the red area (Nyman et al., 2001; Nielsen et al., 2003). Actually, either UV-vis or mass spectra alone is not sufficient enough. 
Both information should be used to distinguish anthocyanins with similar structures. The method of extraction influenced the composition of fruit extracts. The highest anthocyanin was found in extracts obtained using a solvent of acidified aqueous methanol (Kahkonen et al., 2001). In many berries studies, frozen samples are freeze-dried and ground into powder. The samples are extracted with solvent (85% methanol/15 % water/ 0.5% acetic acid). After vortex and centrifuge, run the supernatant into the HPLC (Nielsen et al., 2003). Each of the berry fruit extracts contain a different composition of anthocyanins that produce a distinctive HPLC chromatogram based on the retention times of the individual anthocyanins and the relative amounts (Ichiyanagi et al., 2004). Anthocyanin identifications are made by using retention time data and UV-vis spectra and comparing them with standard compounds (Zhang et al., 2004). However, it is very difficult to obtain a complete profile as petunidin standard is not always available. Even though a large amount of published data is available, different investigators have used different experimental conditions, which can make comparison of anthocyanins in different foods more difficult. Due to the attached sugars and acids, the number of anthocyanins is 15-20 times greater than the number of aglycon forms. This results in a large number of peaks on the chromatogram and difficulties in identifying individual anthocyanins (Nyman et al., 2001). Acid hydrolysis greatly simplifies the anthocyanin profile by converting anthocyanins to six major anthocyanidins (Zhang et al., 2004). Most studies focus on the qualitative anthocyanin characterization of fruits and berries. Only a few reported the quantitation of aglycons, especially those in plant-based material is still lacking. More research can be done to quantify aglycons in fruits as to give a clear view of", "label": 0 }, { "main_document": "and burdens across generations. 
More immediately, such rules should help This will Conversely, running a budget deficit in the present leads to an increase in government debt, which will have to be serviced in the future. If the interest rate on the government debt exceeds the growth rate of the economy, a debt dynamic is set in motion that leads to an ever-increasing government debt relative to GDP. When this becomes unsustainable, painful corrective action is required. Where G is the level of government spending, T is tax revenue, r is the interest rate on the government debt B, and M is the level of high-powered money. The left-hand side of the equation is the government budget deficit, consisting of the primary budget deficit and the interest payment on the government debt. The right-hand side of the equation is the financing side. The budget deficit can be financed by issuing debt or by issuing high-powered money. Monetary financing can be disregarded since it is surrendered to the confederation's central bank. In the following, changes per unit of time are represented by putting a dot above a variable. Where Y is GDP, so that b is the debt-GDP ratio Where This equation defines the dynamics of debt. The necessary condition for solvency: This says that in a world where the nominal interest rate, r, exceeds the nominal growth rate of the economy, x, the government must make sure that the primary budget (g-t) has a surplus or that money creation is sufficiently high to stabilise the debt-GDP ratio. If not, the debt-GDP ratio will increase without limit and surely lead to a default on the government debt. This places pressure on the central bank to print money for that government, which in turn has the dire consequence of This is one of the reasons that European Union members at Stage 2 of EMU are prohibited from borrowing from the European Central Bank (ECB).
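The equations this passage refers to did not survive conversion. The following is a hedged reconstruction of the standard debt arithmetic the text describes, using only the symbols the text itself defines (G, T, r, B, M, Y; lower-case letters for ratios to GDP, x for the nominal growth rate); m, the money-financing ratio, is introduced here for the sketch and may differ from the original's notation:

```latex
% Budget constraint: deficit = financing.
% (G - T) is the primary deficit, rB the interest bill,
% financed by new debt \dot{B} or new money \dot{M}:
(G - T) + rB = \dot{B} + \dot{M}

% Dividing by GDP Y, with b = B/Y, g = G/Y, t = T/Y,
% x the nominal growth rate of Y, and m = \dot{M}/Y:
\dot{b} = (g - t) + (r - x)\,b - m

% Solvency: when r > x, the debt ratio is stabilised
% (\dot{b} \le 0) only if the primary surplus plus money
% creation covers the growth-adjusted interest burden:
t - g + m \ge (r - x)\,b
```

With monetary financing surrendered to the central bank (m = 0), stabilisation falls entirely on the primary surplus, which is the constraint the passage goes on to discuss.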
Although Ricardian Equivalence argues that, given an intertemporal setting in which consumers have perfect foresight, debt might be irrelevant, this line of reasoning rests on the assumption that generations care about the future. Assuming that most elected governments have no incentive to consider future fiscal sustainability, policy rules are the solution to reduce or remove the influence of short-run political expediency that leads to a deficit bias. Countries such as Italy, the Netherlands and Belgium, which accumulated sizeable deficits in the early part of the 1980s, had to run correspondingly large primary budget surpluses in order to prevent the debt-GDP ratio from increasing automatically. The experience of these countries shows that large government budget deficits quickly lead to unsustainable debt dynamics from which countries find it very difficult to extricate themselves. This illustrates the limits to the use of fiscal policies to offset negative economic shocks. It may be argued that financial markets would act as a discerning disciplining mechanism by raising borrowing costs and downgrading credit ratings in the face of an excessively lax aggregate fiscal stance. However, an upgrading or downgrading by international credit-rating", "label": 1 }, { "main_document": "The purpose of this paper is to discuss whether the alignment of business with information technology (IT) could be a possible way to gain a decisive competitive advantage in organizational performance. The paper examines the possibilities and necessities of alignment by evaluating sources, ideas and research of different authors and provides an insight into the diversity of prior research on alignment. Such a deduction provides firms with a platform on several contingency factors of alignment where they position themselves within their respective industries. The fervent debate concerning strategic alignment of business and IT has been under discussion for many years.
Businesses have invested decades of time and billions of dollars struggling to achieve competitive advantage by using information systems. However, as Luftman and Oldach (1996) state, organizations find it hard to position themselves and identify their long-term benefits from harnessing the capabilities of IT. This 'competitive advantage' paradox has been both generally supported (Venkatraman 1989, Niederman et al 1991, Earl, 1993, Boynton et al 1996, Davidson 1996) and condemned (Carr 2003) in the literature. As a consequence, there is no precise, commonly agreed notion of alignment, nor of the contribution of information technology to the success of organizations. Business-IT alignment is defined as a process of 'applying IT in an appropriate and timely way and in harmony with business strategies, goals, and needs' (Brier and Luftman 1999). In this paper, taking business-IT alignment as a means to strive for competitive advantage in a dynamic marketplace as the underlying theory (Adcock et al 1983, Cardinali 1992, Faltermayer 1994), possible antecedents of alignment are analyzed to give insights into the extent to which IT contributes to business success. The importance of business-IT alignment is widespread in the IS research literature. Papp (1995) indicates that this concept has been documented since the late 1970s (McLean and Soden 1977, IBM 1981, Parket and Benson 1988, Mills 1986, Brancheau and Wetherbe 1987). Alignment addresses a coherent goal across different departments with IT perspectives. Such a cohesive organizational goal and IT strategy enable better leveraging of the business-IT partnership. This harmony can be extended and applied to help the organization in identifying new opportunities (Papp 1999). Brier and Luftman (1999) point out that the traditional methods for planning business strategies have not taken full advantage of IT.
That is one of the main reasons why organizations fail to realize the underlying potential of their IT investments (Barclay et al 1997). Brier and Luftman (1999) adopt and modify Henderson and Venkatraman's (1989) strategic alignment model (Fig. 1), in concert with their enablers (enabling activities) / inhibitors (inhibiting activities) research carried out from 1992 to 1997. Brier and Luftman (1999) interviewed executives and obtained data from consultants' engagements, and eventually identified the six most important enablers and inhibitors for business-IT alignment. They argue that by maximizing alignment enablers and minimizing inhibitors through a six-step approach, strategic alignment of business with IT can be achieved. Brier and Luftman's (1999) study of enablers and inhibitors is based on the strategic alignment model (Henderson and Venkatraman 1989), which helps", "label": 0 }, { "main_document": "that I am solely responsible for the decision-making procedure; by consultative I mean that I discuss the problem with others, but ultimately still make the decision myself; and by group-based I mean that I share the problem with others and we make the decision together. The first autocratic decision procedure is based on me taking known information and making the decision alone. It is very similar to the second autocratic decision procedure, which differs only in the fact that I take the information on which I base my decision from my employees. I would use the first autocratic procedure in situations where I am confident that my knowledge and use of it is better than that of my employees, for example when selling corn to a grain trader. The second autocratic procedure is better for use in situations where I feel my employees may have a useful perspective on the situation, but due to my position as farm manager, I must ultimately make the decision. This procedure may be called for when deciding on the cattle feed rations, or in other such situations.
The first consultative decision procedure is based on me sharing problems with my employees individually and then deciding alone. The second consultative decision procedure is different in that I share problems with my employees as a group. I use the first consultative procedure in situations where the information that my employees give me may include personal opinions, such as the negotiation of wages. I use the second consultative procedure to solve problems that I feel require group input, such as the allocation of jobs, as I feel that it is important to hear from my employees on which jobs they feel best suited to, or whether they have particular objections to certain tasks. The group decision procedure is based on me not having sufficient knowledge, so I share the problem with my employees and encourage them to decide with me. I personally have never used this method as I feel that as the farm manager it should be my responsibility to make the final decision. Alderfer's ERG Theory is an expansion of Maslow's hierarchy of needs, where ERG stands for Existence, Relatedness and Growth needs. In order to keep my employees happy and motivated in their work it is important for me to ensure that all three needs are met. My employees' existence needs are satisfied by acknowledging their presence and importance in the workplace. Sometimes this involves my complimenting their work and reassuring them that they are doing a good job. My employees' relatedness needs are satisfied by their having friendship and company, and this has been easily accomplished by giving them fellow employees. To the best of my knowledge they enjoy each other's company. My employees' growth needs can be satisfied by aiding their personal development. All of my employees have had to attend various courses and complete tests, such as the Sprayer Operative test, at different stages in their working lives, and this will have enriched their progress as individuals.
McClelland's Acquired Needs Theory", "label": 1 }, { "main_document": "it would be difficult for both the researcher and the participants not to know which group they are in, as they are not receiving a placebo. Therefore the researcher will know who is in the intervention group, yet the participants will not know which arm they are in. The intervention group will obviously know they are getting more training; however, they will not know that the control group is not receiving additional training. After the participants are randomised, both groups will be asked to demonstrate their hand washing and drying in a laboratory and swabs will be taken. The National Institute for Clinical Excellence Infection Control guidelines (see appendix 1) state that the areas needing particular attention are the fingertips, the areas between the fingers, and the thumbs; therefore swabs shall be taken from these areas. Then the intervention arm will be given additional training on hand washing techniques, using the guidelines on the Royal College of Nursing (RCN) website (RCN, 2005), as well as guidance on appropriate drying; both techniques will be shown three times and each participant will have one-to-one tuition whilst practicing. This session will last for seventy-five minutes, as this is the time originally given to infection control and hand washing in the corporate induction before time constraints. A week after the additional training session both groups will then be taken into a laboratory setting and asked to demonstrate their hand washing and drying technique, after which swabs will be taken. From the swabs the number of cultures present will be counted, verified by another researcher to ensure reliability, and compared to the first swabs. Validity and reliability are crucial in ensuring unbiased and trustworthy research.
Some issues that may arise within this study regarding validity include testing, whereby participants may have learnt skills in the pre-test (first swabs) that affect the results. This is similar to participant learning, which is an issue in reliability, when participants perform better due to previous learning. Additionally, the Hawthorne effect needs to be considered when ensuring validity; this refers to participants performing better due to receiving attention from the researcher, rather than because of the independent variable (intervention). For example, participants could be washing their hands using a better technique because this is being tested. To overcome these problems, participants will be asked to provide evidence of their training on hand washing to ensure that all participants have only received standard training. Blinding will also be ensured, so that participants do not feel they need to perform better because they are in the intervention group and the researcher does not influence their technique by giving the intervention group more attention. Once the data is collected the statistics will be analysed using the Mann-Whitney test, also known as the rank sum test. This is a nonparametric test that compares two unpaired groups. Probability tables will also be used when evaluating the statistics and will determine how significant the results are. This value will be expressed as P<0.005. Confidence intervals will also be established at 95% CI. Other statistical information", "label": 1 }, { "main_document": "as implicational, unidirectional and asymmetrical: "A grammar is a mechanism that maps a huge set of semantic distinctions onto a small set of syntactic distinctions". Comparison of the approaches with respect to the nature of innate knowledge again presents the evaluator with a series of fine discriminations.
Essentially, both hypotheses adhere to the principles of the wider framework of Generative Grammar, which postulates the innate ability of humans to utilise unlearned, universal, semantic-formal constraints to generate grammatical rules. The only discernible difference is in the emphasis placed on this approach: the Semantic Bootstrapping Hypothesis has had a greater opportunity to develop a theory of those aspects of language acquisition that postulate the mechanisms and procedures by which innate rule prototypes are developed into fully-fledged grammatical rules. There is no evidence in the Syntactic Bootstrapping Hypothesis literature that the proponents of this approach challenge the theoretical validity of these proposals. This essay has demonstrated that the so-called 'semantic' and 'syntactic' bootstrapping hypotheses are not equivalent in their status as theories: the former, as a true 'bootstrapping' hypothesis, provides a mechanism for accounting for how the child initially acquires syntactic rules as part of the wider theoretical approach described by the Generative Grammar framework, and the latter provides additional evidence to support the importance of the child's syntactic analysis of sentences once those syntactic rules have been derived, also within that framework. Despite this more obvious difference, and the fact that the Syntactic Bootstrapping Hypothesis was initially developed to challenge the Semantic Bootstrapping Hypothesis (but later voluntarily abandoned the focus on bootstrapping), their approaches barely conflict. Finally, given that both approaches have essentially been developed within the wider Generative Grammar framework, it is likely that the minor differences of perspective and emphasis noted in this essay will, in time, be reconciled along the lines suggested by Grimshaw (1994).", "label": 1 }, { "main_document": "as evidence for a proposition being true.
If the evidence obtains, then it entails the proposition in question as true. This brings us to D's reliance on Tarski's theory for the analysis of individual sentences, for the extraction of TC, and for accounting for the compositionality of sentences. As sentence meaning is a function of its semantic structures, Tarski's theory of truth provides the necessary framework to deal with the composition of natural language in order to identify the truth conditions of individual sentences. In this sense, D's theory of meaning directly relies on a theory of truth for an account of truth conditions. This section gives a brief explanation of why the TC approach to meaning fails to make sense of Q&C. The theory relies on a general feature of language, and of declarative sentences in particular. All declarative sentences make a statement which asserts a proposition describing a certain state of affairs. We can analyze the sentence to determine what has to obtain for the proposition to be true, in other words its truth conditions. However, this approach seems to fail when applied to interrogative and imperative sentences, as they don't identify the evidence which would make them true. The former request information and the latter make a demand. Since questions ask for information about a certain state of affairs, we can't exactly ask what would have to obtain in order for that request to be true. Commands also pose a problem for the theory, as they demand that a certain state of affairs obtains, which again fails to describe how the state of affairs is. It seems counter-intuitive to ask for the truth value of Q&C assertions, and hence Tarski's theory of truth doesn't help in analyzing their truth conditions. The rest of the paper discusses possible solutions to the problem of the theory. One way of thinking about this problem is trying to interpret Q&C as declarative sentences which would be descriptive and fact-stating.
Davidson had dealt with ambiguous sentences by providing an interpretation for every possible meaning of the sentence and then making a theorem for each of these options (Evnine 1991). By giving the meaning of each possibility, he provided an account of meaning for ambiguous sentences based on truth conditions. If we interpret Q&C in a sensible way, we should be able to use the TC approach to determine their meaning, where the meaning of the Q&C in question would be a function of the interpretation. For example, "Is the sky blue?" can be interpreted as "The speaker of the sentence wants to know whether the sky is blue." This would lead to the following theorem: "Is the sky blue?" is true iff the speaker of the sentence wants to know whether the sky is blue. "The speaker of the sentence wants to know whether the sky is blue" is a fact-stating declarative sentence which can be understood in terms of the TC approach. It also spells out the meaning of the question. An example including a command ("Go home!" is true iff the speaker ordered me to go home.) also seems to work, as the", "label": 0 }, { "main_document": "and infrastructure (NTB, 2006) Following the stages of the product life cycle, the tourist area life cycle (TALC) is another useful tool to anticipate demand for a destination (Bowie and Buttle, 2004). However, it is a model that has been primarily developed for resorts, and is less applicable in a situation where tourism is a support to the economy (Gale and Botterill, 2005), as is the case in Nepal. Even so, the model also suggests potential growth, since, given its increasing number of tourists over the years (see Appendix 3), the destination must still be in the exploration stage, slowly leading towards development. Despite the political and economic instability, the demand analysis for tourism in Nepal shows potential growth with Plog's psychographic analysis and with the TALC.
Also, the international leisure market is clearly the most important market (90%) and within this market, the leisure purpose of travel is the most significant (59.2%). There seems to be an important British demand for tourism, since in 2002 the UK was the third-largest country of tourist arrivals in Nepal, after India and 'others' (NTB, 2006). According to Porter (1985), the bargaining power of suppliers and of buyers, the threat of substitute products and of new entrants, and the rivalry among competitors are the five forces that drive competition in an industry. Porter provides a framework enabling managers to determine the strength of each of these forces, and to evaluate how each of them interacts with the others (Porter, 1985). The understanding of competition is at the core of strategy formulation (Porter, 1979) since it provides a basis for differentiation. The weight of the threat of new entrants largely depends on the entry barriers of an industry (Porter, 1979). Among the six major sources of barriers to entry mentioned by Porter (1979), government policy and capital requirements are the factors that increase the threat of new entrants in Nepal. The government of Nepal lowered the barriers to entry by making some exceptions in its foreign direct investment policy to encourage investment in the tourism sector (US and FCS, 2006). Also, capital requirement in the hospitality industry is often low compared to other industries, and it is even lower in Nepal since the abolition of the minimum required foreign investment in 1996 (U.S. and FCS, 2006). Hence, the barriers to entry in Nepal are low, which creates an important threat of new entrants. Porter (1979) defines substitutes as products or services that can perform the same function as the one offered by the industry. The main substitute to Monarka would be tourists staying with friends and relatives in Nepal, or deciding to choose another category of hotel (three or four star).
Substitutes could also be another destination that fulfils the same functions as a stay in Nepal. Substitutes for the hotel facilities would be a ready-to-eat meal bought in the supermarket instead of a meal at the hotel restaurant (Bowie and Buttle, 2004). However, the threat of substitutes does not strongly affect the competitive environment in this case since the brand aims at a specific", "label": 0 }, { "main_document": "variables can be viewed such as pressure and temperature. The following streamline flow pattern (Fig 1.4) is for temperature, where water enters the inlet pipe at a high temperature (red) and leaves at a colder temperature (blue): A Slice Plane can also be created to view the flow pattern using different variables (temperature, pressure, etc.) at a cross-section through the mixer, which is done by creating a new plane and defining its position using co-ordinates (the Point and Normal method is used). The position of the plane can be re-adjusted using picking mode. Once parameters are defined the flow is made visible (Fig 1.6). Contours can also be viewed, which allow the user to more easily visualise points of equal values (or boundaries between different values/colours). The plot can then be animated so that the flow sweeps the geometry of the mixer (inlet to outlet). This is done using the animation editor, which allows the user to define the number of frames (a minimum of two is required) including details such as plane starting position, temperature range and number of intermediate frames (where more frames increase the duration of the animation clip). This file is then saved in MPEG format and is therefore viewable using Windows Media Player. This tutorial modifies the geometry and mesh that were previously created. A refined mesh is created by reducing the maximum spacing, and an inflated boundary is produced to provide a better resolution of the velocity near the wall, therefore improving accuracy for the pipe geometry.
It can be seen in Figure 2.1 that the new mesh is finer than the one initially created. The next stage of the tutorial modifies the original geometry. This includes changing the pipe radius (to 0.4m) using the dimensioning feature and extending the outlet pipe. The extension requires the application of Since these operations take place at the end of the outlet pipe, a new plane and a corresponding sketch are created to provide the required reference for the new extrusions. Figure 2.2 shows the modified geometry. The latter part of the tutorial updates the geometry in CFX-Mesh so that the mesh corresponds to the modified geometry. The new mesh is imported into CFX-Pre under Since the physics for this simulation is similar to that used in Tutorial 1, the same settings are used by importing the relevant CCL file. The solver is then set for the new mesh with slight modifications to the parameters to provide greater solution accuracy. To achieve this, an increased number of iterations is used (solver time is increased). A slice plane is created (Fig 2.3) and compared to the slice created for the mesh in Tutorial 1. The main differences are: 1) the inflated layer has produced prismatic elements that appear as rectangles around the edges; 2) there are more lines because the slice intersects more mesh elements; 3) the curve is smoother as the new mesh represents the geometry better. The inflated elements can be viewed in 3D by creating a volume. The volume is defined (point and radius) and the elements contained in this volume can be made visible (Fig 2.4). Separate volumes", "label": 0 }, { "main_document": "is taken into account. The number of speakers, in fact, is more equally distributed among the different groups. A more detailed description is presented in Table II below. From a closer analysis of the data, an important bond between the gender of the speaker and social class can be outlined.
Almost all the women who use It means that the use of This is not the case with male speakers, who are distributed among the four social class groups. A further observation needs to be made about the spreading use of The reciprocal usage of the vocative is intended as 'social cement' and plays an important role in the creation of closer in-group nets. This phenomenon is surprising because it extends to high social class students, in possible contrast with what was found by Ervin-Tripp: "In upper-class boarding schools, boys and some girls address each other by last name instead of first name" (1972: 224). The formal use of the last name as a form of address amongst upper-class students pointed out by Ervin-Tripp may have been replaced by the more informal use of the term of friendship According to the literature, The BNC distinguishes two groups of spoken texts: 1- a 2- a The trend of informality claimed by the literature is supported by the kind of texts which make up the Query Results. In a total of 77 texts, 52 of them are demographic texts (about 70%), while only 25 are context-governed texts (about 30%). Furthermore, the majority of context-governed texts in which The spreading use of the vocative Let's now analyse the occurrences in more detail to point out the main functions and patterns of use of the vocative. From a structural point of view the term tends to appear in final position, both in long and short sentences. The concordances show that the vocative very often collocates with expressions of agreement such as A few examples are found with expressions of disagreement such as Moreover, Instances in which these expressions cumulate are very frequent, and in many cases the form of address can be surrounded by pauses and hesitators.
In addition, Another interesting co-occurrence is that of the vocative These uses of swear words may be traced back to the high informality of the settings in which conversations take place, but above all they reveal the existence of very close relationships between the participants (on the other hand, the usage of expletives as terms of address is common among very intimate friends - Braun (1988), Biber [et al.] (1999), Gramley and P In all the examples discussed above, the use of the vocative Only a few utterances show an unfriendly connotation of Determining whether a linguistic item has a positive or a negative connotation is not always an easy matter, especially when vocatives are concerned. Words of this kind have an inherent pragmatic force, and in most cases it is the way they are used which determines the precise meaning. Partington (1998) tackles the problem of connotation by analysing the In other words, the favourable or unfavourable connotation of a term", "label": 0 }, { "main_document": "and result in coefficients being insignificant in t-tests. However, I may have omitted some relevant variables, as this causes bias. Furthermore, I could have included other variables, such as sex, as prescribed by Siegfried and Strand, which had an insignificant effect on exam mark. However, having tried these variables, I found that they either were insignificant or made other variables insignificant. I also tried using the year intercept dummy; however, as hypothesis 4, appendix 6 shows, this only significantly improved the fit at the 10% level. Moreover, I believe that the sample needs other variables not included in the survey, such as stress and whether problem sets have been completed, which I believe would be a good approximation of the 'motivation' which Romer Unless the coefficient is 0, which is not true in this case, or the covariance between the included variables and omitted variables is 0, which again cannot be true as there are no 0 covariances between variables.
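The footnote's point about omitted-variable bias can be illustrated numerically. The sketch below uses simulated data, not the essay's actual survey; the names 'attendance' and 'motivation' simply echo the discussion, and the coefficients are arbitrary choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: 'motivation' is unobserved but correlated
# with 'attendance', and both affect the exam mark.
motivation = rng.normal(size=n)
attendance = 0.8 * motivation + rng.normal(size=n)
exam_mark = 2.0 * attendance + 3.0 * motivation + rng.normal(size=n)

def ols_slope(x, y):
    """OLS slope of y on x in a simple regression with intercept."""
    x_c = x - x.mean()
    return (x_c @ (y - y.mean())) / (x_c @ x_c)

# Full model (both regressors): the attendance coefficient is
# recovered close to its true value of 2.
X = np.column_stack([np.ones(n), attendance, motivation])
beta_full = np.linalg.lstsq(X, exam_mark, rcond=None)[0]

# Short model omitting motivation: the coefficient absorbs
# beta_motivation * cov(attendance, motivation) / var(attendance),
# exactly the case the footnote describes (nonzero coefficient
# and nonzero covariance).
beta_short = ols_slope(attendance, exam_mark)

print(beta_full[1], beta_short)
```

Here cov(attendance, motivation) = 0.8 and var(attendance) = 1.64, so the short regression's slope is pushed up by roughly 3.0 × 0.8 / 1.64 ≈ 1.46 above the true value, matching the bias formula the footnote invokes.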
Romer (1993), "Do students go to class? Should they?", Journal of Economic Perspectives, 171-2 In conclusion, I have calculated a model of exam performance based on a number of variables, as given above. I think these have a significant impact upon exam performance, based on theory from economists such as Romer and also on my own experience. While nearly all the variables are significant individually at the 1% level and together at the 0.01% level, I do think there may be a better model as the However it is difficult to offset increasing R", "label": 1 }, { "main_document": "price of steel. The unemployment rate increased due to cost cutting. Steel demand dropped from 20 million tonnes in the 70s to 13.9 million tonnes in 2000. Increasing imports and exports. Merger of three of Corus's European rivals, Usinor (France), Arbed (Luxembourg) and Aceralia (Spain): they would create the world's biggest steelmaker, which competed with Corus. Arbed and Usinor organised global alliances and focused on developing markets such as Brazil. The major opportunities and threats of Corus in 2001: Opportunities Growth through product and market development. Product repositioning. Trading with more developing countries. Threats Competition with other organisations. Demand for steel decreased. Strategic capability is the ability to perform at the level required for success. It is about whether an organisation's strategies continue to fit the environment in which the organisation is operating and the opportunities and threats that exist. Many issues of strategic development are concerned with changing strategic capability to better fit a changing environment. The critical issues about British Steel/Corus's strategic capability addressed in 1990-2001: Successfully became a profitable privatised company through a combination of increased investment, good management structures and a revolution in working practices.
Gained competitive advantages through product development and better management of the logistics of the supply and distribution chains. Globalisation to increase global competences. Had connections with other countries (Eastern Europe and the USA) through joint ventures, overseas transplants and trading, so as to develop globally. Carried out a three-pronged attack: productivity gains, exploitation of IT and reduction in costs of supplies to cope with falling prices of steel and an unfavourable exchange rate. Exported excess production to other countries to prevent stock building and increase revenue. Merger to increase company size and reputation. Joint chief executives to show that British Steel and Hoogovens were completely integrated at the business unit level. Cost efficiency through economies of scale and reduction in costs of supplies. Had aluminium production as an alternative source of income. Resources available to increase capability--- An organisation needs to identify its strengths and weaknesses which are relevant and capable of dealing with changes taking place in the business environment. Analysing the strengths and weaknesses helps an organisation to increase opportunities and reduce threats. Strengths Long history and well-known. Developed technology and experience in steel production. Focus on the development and growth of the downstream carbon steel business; took part in aluminium and stainless steel production to increase product range and explore new markets. Existing resources to increase competitive advantages. Weaknesses Small size and scale compared with main competitors. Its operations outside Europe are mainly concentrated on small steel ventures in the US. Lack of market in developing countries. Cultural problems as the corporate cultures in the UK and the Netherlands are different. Short-termism.
An organisation should concern about what people expect it to achieve and what influence people can have over the organisation's purpose when planning its strategies. Stakeholders are individual, groups or organisations with an interest in what the organisation does. They depend on the organisation to fulfill their own goals and in turn the organisation depends on them. The major stakeholders for Corus", "label": 0 }, { "main_document": "for a cash transaction to prevent sending inaccurate signals to the market (Hitt et al, 2001). A former accounting convention whereby the acquiring firm does not recognize the goodwill paid through the premiums. Most M&A`s at the end of the 1990 In a fixed-share deal the shareholders would receive the same numbers of shares but of less value if the stock price fell, whilst in a fixed-value deal, the shareholders would receive the same value, but the extra stock issued to pay would lower their potential stake. The premiums paid appear higher when there are conflicts between CEO`s, when a CEO has been positively appraised and is also the chairman of the Board. Managerial hubris is evident in the Hilton proposal to takeover ITT, where the respective CEO`s refused to negotiate and instead opted to use the press to trade insults, consequently leading to the payment of irrational premiums. The premiums offered by competitive bidders augment in a similar fashion to auctions. Likewise, in hostile takeovers the premiums paid are greater than that of friendly takeovers, because of the upward pressure the reluctance to takeover has on the bid. The tools managers use in defence of hostile takeovers may apparently protect shareholders against unfavourable takeovers whilst genuinely used to ensure their jobs In contrast, Japanese firms try to avoid takeover by become more firm specific investment. (Hitt et al, 2001). 
Acquiring managers might choose to form conglomerates (which the 1980 Most M&A action in continental Europe involves a transaction from a large shareholder to another large shareholder. These owners are perceived as controlling because they can influence decision making, without bearing the full cost of ownership. A large shareholder may free ride on insider information such as growth opportunities by exploiting them through his/her share and control in another firm. Likewise, the controlling owner might internalise corporate decisions by recommending managers to reduce overcapacity, so that supply falls and prices increase in the industry where he/she possesses stocks. More obvious benefits of control are perks and stock options. An owner with a majority stake in two companies, A which is gaining profits and B which is under performing, may oblige A to buy B Consequently the large shareholder pays lower taxes for A, unfortunately this imposes a sub-optimal position for A`s minority shareholders. (Zingales, 1998) For a controlling shareholder whose firm is a potential target and who may lose say as a result of the acquisition, the value of control will justify the high premiums paid. Likewise occurs in UK/US in targets with high levels of management ownership. The market price is a fair representation of the value of a share; therefore any amount paid for the target above the bidder's share price represents an estimate of the private benefits of control the buyer is willing to pay for (Zingales, 1998). Beyond this universal ownership reason, the following differences between countries account for the M&A failures: shareholder rights and degree of protection, accounting standards, enforcement of corporate law and governance. 
In the UK/US the difference between non-voting and voting shares is minimal,", "label": 0 }, { "main_document": "demonstrated that 'for purposes related to' was limited only to the prevention and detection of crime, the investigation of an offence and the conduct of a prosecution. The population is strongly opposed to the state's retention of data about individuals. All England Law Reports pg 207 section (a)- paragraph [26]. The majority also considered the other way round that is, taking into account the possibility that there might be some interference with Art 8(1). However, they were of the view that such interference was objectively justified as per Art 8(2). As per the submissions It is also worth mentioning that this is being reflected in Parliament's decision to enact s82 of the Criminal Justice and Police Act 2001. Moreover, there was no disproportionality as to the fact that the retained samples conferred the idea that the presumably innocent is suspicious. Such data will help in investigating future offences, not past offences. Besides the advantages of an extended database outweighs the contention in this issue. That retention was not in accordance to law and the power of retention was disproportionate- Concerning the second issue, it was held that there was no breach of Art 14 since the facts of the case was concerned as far as Art 8(1) applied. As Lord Stern puts it: \" Nevertheless, he also considered the possibility of a breach of Art 8(1) and directed his speech towards the elimination of any possibility for a breach of Art 14, in other words, why retention was objectively justified against discrimination. All England Law Reports pg 211 section (g)- paragraph [44]. First, he explained that the word 'ground' in Art 14 was limited because if it were the contrary, it would be unnecessary to mention 'and other status'. 
The difference in treatment Nor could such treatment be considered as a 'status' within the meaning of Art 14- that is personal characteristics. The appellants are \" Moreover the appellants cannot compare themselves to others who have not given their fingerprints or samples during investigation. They are clearly not in the same situation since the former has already provided the fingerprints and samples, whereas the latter was not required to do so. The relevant group of persons against which they should have compared themselves was those who had their fingerprints lawfully taken. They cannot also claimed that they are being treated as being in the same group as the convicted because it must be borne in mind that the database also contains the profiles of pure volunteers. Between those who have had (the appellants) and those who did not have their fingerprints and samples taken during a criminal investigation. All England Law Reports pg 213 section (a)- paragraph [51] Finally concerning the last issue, since s64(1A) did not engage Art8(1) and, there is an objective justification under both Art 8(2) and Art 14 if it did engage Art8(1), it can be concluded that s64(1A) is compatible with the Convention. The reasoning behind the dissenting opinion Also, misuse of data, which would amount to an interference with an individual's private life, is strictly prevented", "label": 0 }, { "main_document": "as we see it. If we are under influence of a drug that makes blue things appear red, we cannot take the fact that we see a red object a reason in favour of believing that we actually see a red object. Therefore epistemic reasons seem be contingent on the circumstances of a particular case. Holism of reason holds that \"what is a reason in one case may be no reason at all in another, or even a reason on the other side.\" Cited from The second premise establishes the connection between holism of non-moral reason and holism of moral reasons. 
Dancy's main reason for that is that \"nobody can say with any confidence just which reasons are moral ones and which are not.\" Since there is no obvious difference between non-moral and moral reasons, it seems more plausible to believe that moral reasons function just like non-moral reasons. \"If moral reasons, like others, function holistically, it cannot be the case that the possibility of such reasons rests on the existence of principles that specify morally relevant features as functioning atomistically.\" Therefore, since Ross' deontology fails to specify why moral reasons should not function just like non-moral reasons i.e. holistically, particularism seems more plausible. 4 Jonathan Dancy, \"The Particularist's Progress\", in Hooker and Little (eds.), 6 The third argument against Ross' deontology is based on the idea that in order for some reasons to be reasons at all, they require something / a condition to be in place. Some features act as enabling conditions, \"they enable the features that are reasons to be reasons they are in this case, without themselves being among the reasons why the action is right.\" A good example of this is the idea that ought implies can. This in practice means that when an actor is not capable of actually doing an action he cannot have the moral reason for it. Thus for us to even start considering the morality of an action or the polarity of a pro tanto reason, we first need to have an opportunity to do it. Consider an example where a disabled man sees a drowning person but has no way of actually saving him. If the disabled man would try to save the person, it is very likely that he would drown as well. In addition, a promise under duress would not be considered binding, as the agent was forced into it. A morally binding promise would require the enabling condition that it was not made under duress. 
Thus we can maintain that there are no moral obligations in these situations and particularist are right to make the \"distinction of 'counting in favour' and 'enabling something else to count in favour.'\" Ross' deontology seems to be mistaken to rely on principles, as they cannot incorporate the enabling condition. Ibid., pg. 21 10 The next part of the essay evaluates the argument presented above. The argument from examples seem to show that the polarity if reasons changes in accordance with the circumstances. However, this does not mean that this is a rule", "label": 0 }, { "main_document": "Constitutional theory provides for the supremacy of Parliament as the primary legislative body. Yet, contrary to The Declaratory Theory of the Law [1], the nature of language; the complexity of content; the unpredictability of future cases; the need for statutes to provide for potentially conflicting interests; the limitations of a crowded parliamentary timetable; and the effect of contextual factors, confers upon the courts an active, discretionary role of statutory interpretation (Zander, M., 'The Law-Making Process' (4 As their reasoning forms a precedent for future cases, when faced with a new point of controversy within a statute, theoretically the courts should seek an interpretation corresponding to the parliamentary intention behind that statute. In practice, however, statutory interpretation is not so straightforward. The case of The case was concerned with the interpretation of s.118 of the County Courts Act 1984, relating to the powers of the county court to protect its witnesses by punishment of those guilty of contempt of court. 
This essay shall consider the court's reasoning behind its decision on the two material facts of the case: Although s.118(a) of the CCA 1984 provides for power to commit for a contempt which occurs 'in going to or returning from the court', the trial judge felt bound by the conflicting limitation of s.1(2), which stipulates that county courts, as inferior courts of record, only have the power to commit for contempt 'in the face of the court.' As s.118 provides for the offender to be taken into custody, the judge felt his authority was restricted to committals for contempts made in the face of the court. In his interpretation of the statute, the trial judge was guided by the general presumptions against deprivation of liberty and alteration of the common law [2]. Adopting a contextual approach, in terms of the statute as a whole, it was assumed that the phrase 'in the face of the court' was to be used to establish the boundaries of s.118. On appeal, however, the Court of Appeal found that the mischief, that the county court's jurisdiction cannot extend to contempts outside of the courtroom, was already provided for by the common law. Although the cases cited by the trial judge were not directly applicable , the Court of Appeal based its decision on This reasoning of Sir Thomas Bingham MR, that adopted by Lord Woolf MR in Yet, in essence, it is simply a literal interpretation of the section in question. A literal reading of s.118 was not taken to conflict with the intention of providing the county court with the jurisdiction to protect its witnesses and hence ensure justice within its proceedings. In this case the grave effects of the contempt on the witness and his family [4] demonstrate that the intention of parliament should be applied to the situation in question and its equivalents. 
Contrary to the trial judgement, Woolf emphasises that 'the county court's power to commit for criminal contempts are those to be found in section 118, no more and, importantly in this case, no less.' There is no doubt that returning from", "label": 1 }, { "main_document": "independently with the Indian west coast. Furthermore, through the Calcutta port the English controlled the whole trading network from India to South East Asia and through South China Sea into the Chinese east coast. Apart from this monopolisation of oceanic trade, European colonies in the 18 As Karl Marx put it, 'the homeland of cotton was inundated with cotton' as European manufacturers, now free from Asian competition, flooded markets with produce from Lancashire mills. Political control entailed that the Europeans could uproot domestic entrepreneurs at will. Thus, the development of connections between Europe and its overseas colonies were strikingly different than those Europe had with the former independent Asian nations. This contributed significantly, perhaps most importantly, towards the increasing material and developmental gap between the East and the West from the late 18 K.N. Chaudhuri, 3 'The Portuguese seaborne empire in the Indian Ocean' As Prasannan Parthasarathi has argued, global connections resulted in the development and nurturing of very similar types of capitalism in both Europe and Asia. However, what led to the development has primarily to do with the alteration of the nature of these connections, whereby peaceful trade was replaced with violent mercantilism and ultimately imperialism, that paved the way for the material and technological disparity on the eve of the Industrial Revolution, from where these inequalities became further conspicuous. 
Parthasarathi, Prasannan: 'Review Article: The Great Divergence', Perdue, Peter: 'China in the Early Modern World: Short Cuts, Myths and Realities' Education about Asia, 4:1 (1999), Introduction", "label": 0 }, { "main_document": "reasons, do not take any employees from say the travel sector when training employees on travel insurance. When installing the software use a simultaneous implementation pattern if the IT department has enough resources to do this. When reorganising the sectors, use a progressive approach so each team is moved one at a time (e.g. vertically). This minimises disruption. I recommend giving the employee groups and trade unions the highest level of empowerment of all project stakeholders due to the reasons below. Adapt a participative approach rather than the current paternalistic leadership style that management appear to use with employees. The project relies on employee commitment and they are likely to resist the change The employees will experience the most change of all the project stakeholders Many employee and both trade unions have a high influence and interest in the project The pace of change is already relatively slow; high empowerment reinforces this Giving employees and unions discretion in decision-making makes use of their expertise The \"cultural\" involvement of employees should make reaching a compromise between senior staff and employees relatively straightforward. A high level of employee support is necessary for the success of the project. The long-term cost and performance benefits will outweigh the short-term cost, time and lost capacity implications. I recommend giving employees three types of employee support: 1. 2. 3. After the appraisal phase both unions should be fully informed and then invited to participate on certain key decisions such as pay and training as mentioned on page 2. Plan time for this activity and follow the procedure agreements in place. 
Inform the unions shortly before informing employees, because the unions have more power and they may potentially 'sell' the project to their members, reducing resistance to change and improving our industrial relations. Assuming that there are no better alternative uses of the HR and IT departments, (who are available and capable), conduct the training, reorganisation and Internet communications in-house to maintain control, reduce costs and take advantage of core competencies. If these assumptions are not the case, then the project specific risks 1 and 2 will be high, so I would recommend outsourcing using a turnkey provider if the cost implications are manageable. The Operations Director has overall control of the project team. The project team consists of the Project Manager for coordination of all work packages, IT Manager, Training Manager, Team Leaders & a trade union representative (if they wish). The WBS & responsibilities are: I recommend a steering group to exercise 'stop-hold-select-go' strategic control before and after the following phases: Phase 1 - Develop software. Phase 2 - Test, install & train for software use. Phase 3 - Implement the restructure. See Gantt chart. A steering group is needed because the majority of the business will be affected by a large amount of change involving multiple stakeholders. The group should consist of the Operations Director, the Operations Manager and the Project Manger to allocate resources and monitor progress and costs. The Project Manager and the project team should exercise all operational planning & control. This involves", "label": 1 }, { "main_document": "driver who should find a parking space without senseless driving around the floors. Car will not be let in if there is no space to park. My system is also capable of calculating amount of money to be paid when car is leaving the parking. Payment is dependent on the time spend in the facility. 
While designing the real-time applications we must ensure that it meets not only functional but also timelines requirements. The Multi-Storey Parking System is a soft real-time system. \"A soft real-time system is one in which performance is degraded but not destroyed by failure to meet response-time constraints.\" (Laplante P., 2004). For my system it simply means that if driver will have to wait e.g. 5 seconds before he will obtain detailed directions to find his parking space, no serious damage is made. Multi-Storey Parking System contains following processes: Multi-storey parking system processes' concurrency is dependent on the number of cars which are currently inside the facility. When car park is empty the only processes that work are the Display panels on all three gates, which wait for a car to come (they work all the time). When first car arrives one of the gates exchanges the data with Update process and concurrently with ticket machine. That means that Central Panel starts working parallel. Than car parks, that activates the Sensor which updates Central Panel. When car leaves Sensor is deactivated, that updates Central Panel. Ticket is putted into Pay centre process, which again exchange data with Central Panel. Finally car goes toward Exit. As it is easy to notice when there is only one car in car park it does not happen to often that other processes than Gate processes run at the same time. Situation changes dramatically when parking is full. When there are more cars in the car park where, some of them are parked, some want to park and some are leaving the building, all of the processes work concurrently. From all this processes that take place when parking is busy I believe that process Park, Leave and Display Panels on gates are the processes which require the most attention when concurrency is taken into account. I decided to use semaphores as inter-process communication and synchronisation mechanisms. 
Process Park uses semaphores to lock the place where the car is parked. Process Leave unlocks semaphores when car leaves the parking place. Display Panels on Gates are also based on semaphores as only one car can go threw the gate at one time. The program below simulates working of Multi-storey Parking System. With processes: Though the program above is wrote for 200 parking places this output is from the program in which the number of places was decreased to 5 what illustrates better working of the system.", "label": 0 }, { "main_document": "The role of process research and development department at AstraZeneca is to discover and develop robust, economic manufacturing processes for new chemical entities. The team also provides supplies of drug substances to fund development programmes and to provide chemical manufacturing controls documentation to satisfy external regulatory authorities. The equipment used in the scale up lab/plant is not that dissimilar to the type of vessel used in a discovery lab. However to describe process chemistry as bucket chemistry would be incorrect and to assume that it just involves scaling up existing process would also be an incorrect description. The department is divided into five parts; process chemistry; process analysis; development manufacture (pilot plant and large scale lab); process engineering; and projects management. Each of these plays a key role during the chemical development of a new chemical entity and each of these areas employs chemists. The process chemistry area devises the final route of manufacture and optimises the chosen route into a viable manufacturing process. Process analysts develop appropriate analytical methodology to assess the purity of the active pharmaceutical ingredient (API) and the intermediates, specifications are set along the process chemists of intermediates and APIs, a methodology is developed to monitor reactions to help gain the maximum amount of information. 
Development manufacture are responsible for running the process in the pilot plant or kilo-lab, they work closely with process chemistry to ensure that the process being developed is compatible with the intended site of manufacture. Process engineering help with plant suitability and compatibility, help decide between isolation options for example crystallisation, resolution of specific scale up issues for example agitation and heat/mass transfer, and make sure engineering problems are not solved using chemistry. Project management is the link between the global project teams and process research and development; they coordinate pan-process research and development activities at a high level. The interface between process research and development and other departments for a particular project is overseen by project management along with various outsourcing activities. Process research and development is a diverse area, there are many interactions with other departments in order to aid the overall production of a successful drug. These departments include discovery, regulatory, formulation, drug metabolism, manufacturing, safety assessment, patent and marketing. The process of producing a new drug is a lengthy and difficult process, from the point of lead identification if can take up to 3 years for the drug discovery team to develop a candidate for selection. The next stage is drug development, which is where process research and development fits in this can take up to 5 years. There are three phases within this, phase I is initial safety and clinical trials, phase II is clinical trials in patients and phase III is wide ranging trials for comparative efficiency and safety prior to marketing. When all this is completed the drug is marketed and launched. 
The whole process can take in the region of 15 years to complete Each phase of drug development requires a differing quantity of the selected drug candidate and it is the job of process", "label": 1 }, { "main_document": "As early as the 5 Resting on fundamentally opposed assumptions about the nature of the international system, Realist, Liberalist and Marxist world views continue to divide political thinkers today. As a student of world politics, I evaluate the efficacy of these ideologies based on the validity of their assumptions and on their historic ability to predict world events. This approach reveals that Realism, Liberalism and Marxism may not be mutually exclusive, but rather complementary. Furthermore, the fluctuating popularity of the three world views in recent study of international relations may suggest that no single theory is superior to the others; rather, all are historically contingent and, in the words of Machiavelli, provide only \"situation-bound knowledge\" It remains to be seen which stream of thought retains greatest relevance in the new world order of the twenty-first century. Dunne, Tim, and Schmidt, Brian C. \"Realism\" in Baylis, John and Smith, Steve. Dunne, Tim and Schmidt, Brian C. \"Realism\". p.163 One can gain a basic understanding of Realism, Liberalism and Marxism by contrasting the 'essential elements' of these ideologies. 
Thus, the Realist world view rests on the assumption that states are the most important actors in an international system devoid of central authority The additional beliefs that human nature is fundamentally war-like and that all states are potentially dangerous means that states will strive to gain power relative to each-other; this creates an unpredictable international environment that Kenneth Waltz has called a 'self-help system' Whereas Realism therefore \"paints a rather grim picture of world politics\" The numerous strands of Liberalism hold that human nature is basically good, and that the interest of the state is bound up with that of its people; furthermore, states are capable of cooperation in a system where international institutions are essential actors. Unlike Realism which considers only relative gains in world politics, Liberalism stresses the importance of interdependence as a means of achieving gains in absolute terms Finally, the Marxist representation of the international system is one dominated not by states but by classes, where capitalist modes of production and class struggle shape political outcomes. Economics plays a crucial part in this system, as the tensions between capitalists and workers Realism, Liberalism and Marxism therefore promote highly contrasted views of international relations. Dunne, Tim and Schmidt, Brian C. \"Realism\". P.172 Dunne, Tim and Schmidt, Brian C. \"Realism\". p.175 Mearsheimer, John J. \"The False Promise of International Institutions\". Internet. Accessed on 5th November 2005. p.9 Dunne, Tim. \"Liberalism\" in Baylis, John and Smith, Steve. In Marxist terminology, respectively relations and means of production Hobden, Stephen and Jones, Richard Wyn. \"Marxist Theories of International Relations\". 
p.229 As a student of world politics seeking a straightforward understanding of the international system, I can begin to evaluate these rival ideologies in a purely pragmatic manner by comparing the relative parsimony of each world view. Realism offers an attractively simple conception of international relations that may explain its ongoing popularity in political analysis: it maintains that states are unitary, rational, and self-interested actors which will behave in the interests of national survival By contrast, Liberalism", "label": 0 }, { "main_document": "2001, p.21). They suggest that fortification and the production of \"functional\" foods is being used as a clever marketing and thus profit-inducing measure, as opposed to one genuinely concerned with improving the health of the nation and resolving nutritional problems. One must remember that the food industry, like all others, relies on profits and thus there is a vested interest in maintaining the \"eat more\" message. By creating products with added micro-nutrients that can be marketed as health-giving, food companies have simply assimilated this need with the modern public interest in healthy-living. Part of the problem that this has created is that it has eroded the idea of the importance of dietary patterns overall, instead suggesting that individuals foods can be seen as \"good\" or \"bad\". \"Functional\" and many fortified foods reduce the value of foodstuffs to their single functional ingredient(s), working off the idea that there must be something within \"good\" foods that is responsible for better health. As Nestle is keen to point out, this ignores the complexity of food composition and also how its components interact. Simply 'Dumping nutrients into... foods will not neutralise their detrimental effects or make them more healthful' (Nestle, 2002, p.314). 
In real terms, foods with added vitamins and minerals are likely to be beneficial only when incorporated into an already healthy and balanced diet. It seems unlikely, at least at present, that technologically developed or adapted foods can make up for the detrimental effect that missing certain foodstuffs and/or food groups (such as fruit and vegetables) will have on health. The other factor unmentioned thus far is that these added nutrient foods can only offer a benefit to those who actually consume them. In many developing countries this problem is a literal one: foodstuffs are being distributed that contain added micro-nutrients to help ease health problems, but not everybody has access to them, particularly those in remote, rural areas, for example (Young, 2001, p.256). In the developed world this problem is related more to cost. Aside from those general foodstuffs that have been subject to fortification for many years, foods with added nutrients tend to be more expensive than those without. In this scenario then it will be those who can already afford a better diet who will be the prime purchasers of fortified foods and thus who will gain the nutritional benefits they bring. This leads onto the key factor that adding micro-nutrients to foodstuffs as a health solution ignores, that nutritional problems rest on a multi-causal base and thus require a multi-faceted solution. Poor health is often related not to inherent nutritional deficiencies in the diet available to the individual, but instead their lack of access to the choices that would appease deficiencies. This inability may be a physical lack of access, as in the developing world, but also one of a lack of knowledge or the right circumstances to make the healthiest choices. Many nutritionists, for example, see health education, rather than modified food, as the way forward in solving nutritional problems, at least in the developed world. 
Learnt early in", "label": 1 }, { "main_document": "his mark would be 51.3%. Again, the same caveat applies. To test whether exam performance is related to the separate attendance measures, separate tests are done on the respective slope coefficients. We use the one-tailed t-test since we are interested in knowing whether attendance has any positive impact on performance. Given a particular significance level, we reject the null hypothesis of a test if its t-statistic exceeds the critical values given in statistical tables. The critical values at 1% significance level is Comparing the t-statistics from the equation Hence at these significance levels, lecture attendance affects exam performance. Similarly the same conclusion can be made about class attendance (E3e). On the other hand, at both significance levels, we accept the null hypothesis that there is no relationship between exam performance and revision lecture attendance. Next, using the same three attendance measures as above, a single multivariate regression is performed. Exam performance is modeled as follows: Stata yielded the following regression estimate: Equation Similarly for a 1 percentage point increase in revision lecture attendance, Should a student not attend any of the lectures or classes, his expected The same caution should be applied here as that made earlier in question 2a. Again, testing separately on each slope coefficient, we conclude that at 1% significance level, we reject the null hypothesis that class attendance is not related to exam performance. However, we both accept the null hypotheses that lecture and revision lecture attendance has a non-positive impact on exam performance at both 1 and 5 percent significance levels. To test the joint significance of the explanatory variables, an F test is performed. Using the calculated R The critical value of this test at 1% significance level is given as Hence the null hypothesis that the model has no explanatory power is rejected. 
We thus conclude that at least one of slope coefficients is not equals to zero (which is consistent with the previous t-test). The relationship between exam performance and lecture attendance and the number of As at A-level is given in equation Extending this model, to test whether students from different years did differently in their QT tests, the dummy variables This would mean that 2002 would be the 'base year'. Estimates of the regressions: Looking at model Should that student have obtained an extra A grade at A-level, his qtmark is expected to be 0.03 69 points higher. If a student in 2002 never went for any lectures and had no A grades at A-level, he'd be expected to get a mark of 57 percentage points. A student in 1999, who attended a given amount of lectures and number of As, is expected to get 1.191 marks less than if he were in 2002. Similarly a student in 2000, who attended a given amount of lectures and number of As, is expected to get 5.185 marks less than if he were in 2002. A student in 2001, who attended a given amount of lectures and number of As, is expected to get 6.854 marks less than if he were in 2002. To", "label": 1 }, { "main_document": "This question can be tackled in two parts: How do we characterise simplicity? In what ways does simplicity guide a theory to truth? There are several different theories about what truth actually is, however, truth in this essay is being taken as agreement with reality. Simplicity is hard to determine. To show this I will repeat the documented example of curve fitting to a set of data, from Carl G. Hempel's First say that We have four instances of Namely for There is no previously known functional connection for these two characteristics. In such case we can conjure up three hypotheses to exactly fit the data: Hemple says that with no background information known or assumed then we would undoubtedly choose H Solely because this hypothesis is simpler, it counts as more acceptable in our eyes. 
But why is it simpler? Simplicity has to be objective, and there must be clear criteria of what constitutes simple as we intuitively know when an equation, or even a theory, is simple. Is H What then when we include trigonometric functions, exponents and logs? The functions Just as A straight line in Cartesian coordinates would appear curved in Polar coordinates, and vice versa. Similarly, complicated functions can be made to appear simpler through coordinate transforms. This transformation must be taken into account when attempting to characterise simplicity. If we focus on theories instead of functions then it has been suggested that the number of independent basic assumptions is an indicator of how simple a theory is. However, these assumptions can be broken down or merged, which confuses matters. Also, separate assumptions would have to be weighted rather than just counted. Despite this difficulty in characterising simplicity it seems that we follow Hempel's principle of simplicity whereby in the case of two theories with the same number and diversity of confirmations, the simpler theory should be chosen; as it is more credible. This is easy to state but hard to justify. One attempt is by assuming that the laws governing nature itself are inherently simple. Although many scientists believe this to be true, belief alone cannot justify a principle. Going on the past successes of natural laws being simple is no ground to take the natural laws as being simple on the whole. It is feasible that true reality - Truth - is too complicated to be mentally conceived and the best we can do is break it down into simple segments that we understand. From this it is obvious that all natural laws found are simple, as we cannot conceive the complicated ones. A weaker argument is that the natural laws we have found so far are only simple because we have concentrated on the simple ones, and so they are not indicative of natural laws on the whole. 
Also, it is a circular argument to state that the simplicity of the simplicity principle provides justification. An attempt of which has been tried. Another justification states that the simplest theories should be preferred as they provide the most concise description of the given data. This", "label": 1 }, { "main_document": "Please see Appendix A. Set the original point, P0.Then move to P1 and put down the pen onto the paper. Draw the frame by moving the pen from P1 to P2 to P3 to P4 and back to P1. Lift up the pen and move it to the top right corner of the letter \"Y\" on the right hand side. Put down the pen to draw the letter \"Y\" that the finish point is the same as the starting point of drawing. Lift the pen up and move it to the top right corner of the letter \"Y\" on the left hand side. Put down the pen and again, draw another \"Y\" on the left. Finally, lift the pen up. Please see Appendix B. During the execution of the program, everything went ok. But before the execution, some correction of the program is needed. On the line of \"10 APPROS P0:TRANS(-30,45,100)\", it should be \"10 APPROS P0:TRANS(-30,45,), 100\". Also, I declare \"Y=0\" which the technician said the program is not allowed as it is one of the programming language/ character in the program; it has its own meaning. Therefore, I changed it from Y to YG. The program would use the end point of the last program as P0 for the second time. It is because P0 in the program is not defined. Therefore, we need to declare where the starting point, P0, is by using the teach pendant. If we didn't do so before the second execution, it will use the end point of the last execution as the starting point of the next execution. Hence, the robot will draw out of the paper. Linear interpolation and joint interpolation are used to draw the frame. For join interpolation, a precise straight line can be drawn depends on the type of joints that the robot has. 
It also depends on the positions of the point the robot move from and move to. If only one joint move, the robot will produce a simple arc, centered on the axis of rotation of the joint with a radius equal to the distance from the axis to the point. To predict the precise shape of the frame, we need to know where P1, P2, P3 and P4 are related to the robot and the distance between. The path for POINT1 to POINT2 and POINT 3 to POINT4 will be a curved lines while the path for POINT 1 to POINT4 and POINT2 to POINT3 are straight line. i. The sides were straight and all the corners were perfectly square. The robot need to stop at the exact points of the frame that the speed can not be too fact when cutting the frame. Therefore, I will use \"CPOFF\" command as this set the robot to point-to- point mode ii. The sides were straight and all the corners were \"radiused\". The speed must be fast in order to draw \"radiused\" corner. An extra command is not needed as the robot will move smoothly and not stopping at any corners without setting it to point- to- point", "label": 0 }, { "main_document": "be detected again if the authorities had found a duplicate match. Moreover the legislation in place restricted the use of the information so that it only dealt with the policy aims of reducing crime and increasing detection rates. 8) 59) 38) The appellants also however appealed on the basis that retention of such information was incompatible with art. 14: There was a unanimous verdict that the retention was compatible with art 14. Lord Steyn explained that by following the Euisdem Generis Rule there is no basis for discrimination of this type under this article. This is because the class of personal characteristics referred to in the article, such as sex or race, did not reflect the position of the appellants nor anyone else who has had their fingerprints, DNA samples and profiles retained after a criminal investigation. On the contrary it was 'simply reflecting historical fact'. 
Thus the lawful retention of the applicants' information did not produce a status that gave ground for discrimination under art. 14. 50) Thus it was deemed that the retention of fingerprints, DNA samples and profiles was compatible with both articles 8 and 14. However there is some substance to the reasoning behind Baroness Hale's dissent depending on how much importance you place on the availability of personal genetic information and the amount of faith you have on future judicial decisions to be able to deal with the misuse of such information. However at present it seems reasonable to deem the retention of such information as fair and proportional both in consideration of the protection of the public as well as the human rights available to every individual.", "label": 1 }, { "main_document": "All primates, no matter which family they belong to, share certain traits and anatomical features that set them apart from other mammals. Anthropologist Robert Martin has defined primates according to nine traits by which they can be identified, including features of dentition, gestation and olfaction. Three important traits he identifies are the presence of an opposable pollex and hallux (with the exception of humans who only have an opposable pollex), the dominance of the hindlimb during locomotion and the location of the eyes in the head. It is these three features which have provided anthropologists with much material in the study of primate origins. It is widely agreed that primate traits arose in response to the domination of an arboreal niche, but more specific information is needed about exactly what features of an arboreal habitat led to the last common ancestor of all primates to develop these features. Anthropologists have focussed on different primate features in the search for answers and here I will try to draw the research together and assess which theory provides the fullest explanation. 
Most living primates are arboreal and it is generally agreed that it is this fact which led them to develop many of their physical traits. Their stereoscopic vision, enabled by the positioning of the eyes in the head, gives primates great depth perception which allows them to carry out complex tasks in the tricky environment of fine terminal branches. Their grasping hands and feet are also well suited to this purpose and it is for this reason that most anthropologists agree that the domination of an arboreal niche led to the development of many primate traits. Schmitt (2003) supports this with findings of a study in which he examined limb mechanics of primates, particularly small species. Because of the dominance of the hindlimb during locomotion and the consequent location of the centre of gravity, the forelimb's weight bearing role is minimal. However it does show increased mobility and Schmitt studied its mechanics in order to find out if the forelimb is suited to a fine terminal substrate more than any other branch. Findings of the study showed that elbow flexion of forelimbs was helpful in lowering the centre of mass which in turn aided balance. The study also demonstrated that the degree of protraction made possible by the high mobility of forelimbs was important for reaching and also meant that stride length could be increased. This decreases the number of strides necessary and stops dangerous oscillation of fine branches when the primates travel along them. These results seem to indicate that fine terminal branches are the reason behind primate traits but when studying differing substrate sizes, forelimb protraction was not affected. Rather than indicating that substrate size is insignificant in primate evolution, this perhaps suggests the degree to which travel in terminal branches has shaped primate anatomy. 
Also indicative of this is the evidence from a study by Pontzer and Wrangham (2004) which sought to explain why chimpanzees, who are mainly terrestrial travellers, incur such an energy cost when walking by retaining characteristics adapted", "label": 1 }, { "main_document": "The United States of America (US) is no doubt the leading market economy in the world and therefore it is clear that fluctuations in it attract interest, especially signs of slowdown. Throughout the 1990s the US was performing strong in terms of economy, but recently there has been a downturn. Especially the year 2001 was difficult, because the financial markets were suffering of lack of confidence and terrorist attacks further deepened the problem. Yet, there have not been any significant signs of recovery, even though it can be argued that there are prospects for a good economic year 2005 in the US. Nevertheless, oil price is a factor that can endanger the positive development in the economy and in can pose a problem in the trade deficit that the country struggles with. In addition, weak dollar might further deepen this difficulty. The uncertainty of the US economy is reflected on the other economies of the world, particularly on the Asian ones, but Euro Area has suffered from the fluctuations as well. Here, the main economic issues are dealt and the past, present and the future of the US economy is evaluated. The 1990s was a period of growth and economic boom in the US. This result is partly related to the optimism that the end of Cold War created in the world economy. The US was growing strongly due to emergence of new technology, which became known amongst the economists as \"the new economy\" boom. There was a sustained increase in the growth of productivity, which was largely explained by new inventions. These inventions induced massive investment and the confidence in the financial markets was high. 
It can be argued that between the years 1992-2000 there was a virtuous circle of economy in the US. The US gross domestic product (GDP) was growing fast and this facilitated a growth in investment, which in turn caused GDP to grow. As interest rates were low, high investment was made possible. The multiplier-accelerator action in the US was strong and the information technology (IT) industry was booming until the year 2000. Greenspan; 2000 Goldman; 2000 Ibid Moreover, the extensive growth in the US economy caused capital inflows and this resulted into a deficit in the current account in the US as its net foreign investment was negative. Mann; 2000 Ibid Ibid A factor relating to imports must be mentioned here, namely oil. The energy prices increased during the years 1999 and 2000 Oil price reached in real terms the levels of the 1970s Oil Crisis. This factor increased the costs of production and further widened the trade deficit. The crash of IT sector, current account deficit and increasing energy prices were inevitably leading in to a financial crisis in the US stock markets. Meyer; 2001 Investors were liquidating their portfolios and buying bonds. Bond prices were increasing as a result of uncertain moods in the financial markets, leading in to increasing interest rates. As interest rate, the price of money, became higher there was insufficient investment to sustain the production. Output figures fell and unemployment", "label": 0 }, { "main_document": "outcome with the highest utility. Consequently, they will choose not to shirk if and only if This is called the No-Shirking Condition (NSC), where the workers are indifferent between shirking and working and can be rewritten using (2) and (3): (4) We could alternatively express the NSC by Consequently, unless there is a penalty related to being unemployed through the risk of not instantly acquiring a new job, everyone will shirk. 
From the NSC we can see that the critical wage Moreover, it tells us that the higher the expected utility with being unemployed or the lower the penalty from being caught shirking, the higher the critical wage. Similarly, the lower the probability of being detected shirking, the higher the rate of interest (that is, the short run gains from shirking is relatively more important), and the higher the exogenous quit rate, the higher the critical wage needs to be to avoid shirking. The identical firms in the model generate an aggregate production function of The firm's labour demand is as usual found by equating the marginal product of labour to the cost of hiring an additional employee. The firms are offering as low unemployment benefits ( If we now turn to market equilibrium, we proceed with the same method to solve for the equilibrium value for We substitute the value for Where (5) We can easily see from the aggregate NSC that the higher the job acquisition rate or the higher the unemployment benefits, the higher the critical wage. If an individual gets high unemployment benefits, the punishment of being unemployed is not that severe. Similarly, if he can obtain a new job quickly after dismissal, this reduces the gravity of the penalty. The flow into the unemployment pool is Where (6) We substitute this expression into the aggregate NSC (5) and generate: Where b/fo is the rate of unemployment. (7) This constraint is illustrated in Graph 1 below, and from this constraint we can easily solve for equilibrium wage and employment level. Equilibrium occurs where the aggregate demand for labour intersects the aggregate NSC. For unemployment benefits equal to zero we obtain: (8) In equilibrium the firms have no incentive of increasing the wages, because they pay just enough for the workers to exert effort and they can get all the labour they need. There exist no motivation to pay lower wages either since this would just encourage shirking. 
However, from the workers standpoint unemployment is involuntary; many workers are willing to work at the prevailing wage or even lower, but fail to make a credible pledge not to shirk at such wages. The equilibrium is depicted in Graph 2 below. The relationship between real wages and unemployment in the Shapiro-Stiglitz model is best illustrated by drawing the NSC in real wage-unemployment rate space, as done in Graph 3. The graph perfectly demonstrates the negative relation between the two, and this is also intuitively uncomplicated. As described earlier, when the rate of unemployment increases, the cost of losing the job is amplified and firms can pay a lower wage and", "label": 0 }, { "main_document": "stock is bought in from markets etc, Manydown has managed an organised system to provide a constant flow of produce through the shop throughout the year. In relation to beef, three cattle are slaughtered a week and at the other end of the scale, calving is spread out throughout spring, summer and autumn, in order to keep a steady flow of beef stock maturing and finishing throughout the year. Steers and heifers are killed at 600kg and 550kg respectively. The cattle system is geared to 18 months including both indoor housing and outdoor grazing. Winter housing occurs, as on most farms, due to unsuitable field conditions outdoors in the wet and so that the cattle do not loose condition and the ability to gain weight by using a greater degree of feed energy for temperature regulation. Summer grazing is based on 160 acres (64 ha) of permanent pasture. Feed is a mixture of forage and concentrate with silage and cracker feed as well as milled rapeseed making up the majority of the diet. Following the increase in demand for Aberdeen Angus meat and the strive at Manydown for new ideas, a recent stem has been the creation of a pedigree beef herd was established in 2000. 
Originally comprising of 17 cattle from successful Canadian bloodlines, the Knightingdale Angus cattle, Manydown has bred these animals with the idea of creating a centre point to their own commercial herd in future generations. The aim of this specialist smaller herd is for Manydown to breed its own bulls with this successful bloodline and become sufficient as unit for future meat production. The Manydown website states this aim and relates to the importance of knowing the whole production process when selling through the farm's own farm shop and obviously by controlling all aspects of the rearing process this is more so the case: In conclusion the beef unit is an important part of the estate. It was because of the beef that the farm shop was originally set up and since has seen great expansion. It appears in regard to livestock production as a whole that Manydown has an efficient organisation in that the company produces and markets home grown produce. This has proved very successful and built up a widescale cliental base, including through mail ordering. Much of the success from beef production and the farm shop as whole is related not only to quality, but also the confidence that a reputable farm shop gives the consumer - that is the produce has been home grown and hence that person does not mind paying a little bit extra for this. There is also the public perception that Manydown is a well managed estate and deliverers sustainable farming practices which promotes the companies success.", "label": 1 }, { "main_document": "of Unlawful Acts Against the Safety of Maritime Navigation, 1988), the term 'terrorism' has never been used or defined in any of them. Indeed, it was difficult to find a unanimously accepted definition of terrorism because of the opposition of the members of the Group of 77 which did not want to confuse terrorism and national liberation movements in a period of decolonization. 
This debate highlights the ambiguity and ambivalence of the notion of terrorism whose definition is necessarily biased. It is striking that neither the European Convention on the Suppression of Terrorism of 1977, nor the International Convention for the Suppression of Terrorist Bombings of 1997 mentioned any definition of terrorism. The International Convention for the suppression of the Financing of Terrorism of 1999 (the most achieved text on the issue) mentions the term 'terrorism' ten times without defining it. In the Resolution 1368 of 12 September 2001, the Security Council did not specify what the term 'terrorism' meant Guillaume, G., Op. Cit. p. 540 It seems that this absence of definition is rather a way for states to use and legitimate ways to struggle against terrorism than a deficiency. Indeed, terrorism is precisely an asymmetric threat where the target is easily identifiable (the state, its population and civil infrastructures) whereas the perpetrators and their weapons can take every possible form. A definition is a way to name something but also a way to enclose, delimitate a phenomenon to its definition and exclude exterior aspects. If a definition of terrorism had been accepted unanimously, the danger would be that, renewing itself constantly, it would escape from its definition and leave the states without any legal means of riposte. There is no limit to the scope of action against terrorism as long as terrorism is not defined. It is in fact the international community which tries to cope with the renewal and constant reinvention of terrorism. To which legal category does the terrorist correspond? The category of 'terrorists' can be defined negatively according to two sets of conditions that they do not meet. It is important to note that each category applied to the terrorist act falls automatically into the imprecision. It is a common opinion that terrorists aim at attacking civilians and civil targets. 
Al Qaeda justifies it by holding the population of a state responsible for the acts of its government. This is a first breach in one of the cardinal principles of IHL: the distinction of civilians and combatants and the protection of the first. Indeed, the First Additional Protocol of the Geneva Convention of 12 August 1949, referring to the protection of victims of international armed conflicts, 08 June 1977, in its article 51-2, prohibits the acts of terrorism against civilians: If the function of this categorization is to accuse terrorists of war crimes, then the unlawful attack of military persons and objects should be qualified of act of terrorism. Indeed, both civilians and combatants are protected by IHL. This criterion seems then not to help at circumscribing the concept of terrorism. Second, the terrorist does not fit", "label": 0 }, { "main_document": "and education, however everyone should get an equal go at life at least at the subsistence level in their initial years so that they can translate it into long term productivity gains. The provision of credit for microenterprises is an important poverty alleviation strategy where the credit can contribute to improvement in the nutrition of the poor. Other policy options are providing cash transfers to poor families, family clinic visits, other nutritional and health benefit in kind and so on. Another very important aspect is the dissemination of information and creating awareness among the population especially in rural areas of developing countries which are plagued with myriad social problems. However the picture is not all bleak, greater proportion of the government budgets are being devoted towards human capital, there is a trend towards international convergence in measures of health and education, with unprecedented advances having taken place in the last half of the century. 
Gross school enrolment rates, teacher pupil ratios, life expectancy have all shown increases which are statistically significant. Improvements have been faster in developing countries, though the gap with developed countries still remains large.", "label": 0 }, { "main_document": "putting into effect various innovative strategies. It also managed to keep up with the different social trends (i.e. green issues) and in some way, easyJet promoted in Europe the trend of booking flight tickets on-line. Secondly, one may wonder how easyJet managed to become one of the market leaders in such a competitive environment. According to Proctor (2000, p. 2), \" Finally, the future development of easyJet will be probably based on the same 'tried and tested' principles. However, the increasingly competitive aviation market necessitates constant vigilance and careful marketing strategy, always taking into account consumers' needs and always seeking the most profitable way to satisfy them.", "label": 0 }, { "main_document": "were: Vocalisations made in context with agonistic and alarm behaviour (described as noisy) are very similar between species. References are made to the generality of these calls in a cross-species context. Vocalisations made in association with contact or contact-seeking, defensive or alarm behaviour (described as tonal and harmonic) were very distinct species specific. The authors, researchers at Tier According to Zimmermann The results confirmed the hypothesis. Beside the theoretical value to my study, this study also provided valuable suggestions to methodology and statistics to be used in analyses of vocalisations. The author, researcher for Conservation International, aims to assess the primate diversity in the East Arc Mountains and Coastal Forsests of Tanzania and Kenya Biodiversity Hotspots. 
The final draft manuscript at hand concerns one of three questions related to this topic: \"Confirmation of ' Vocal recordings made at Diani, Kenya, 40 km from Mazeras, were indicating on No vocal recordings ahd been made in the Mazeras, so in order to establish if the Diani galago was the same species, the author, accompanied by another researcher, set out on a short survey in the Mazeras region to record its vocalisation. Positive identification was made confirming the Mazeras galago to be the same species as the Diani galago, i.e. With this fact in hand, the author further suggests that comparative studies on The authors, researchers at Oxford Brookes University, aspire to show that the nocturnal Loriformes in Asia and Africa are a diverse group of primates with much more going for them than previously considered. The paper contains a very thorough review of what is known about taxonomy, distribution, phylogeny, ecology and behaviour, fully reaching up to the author's aspirations. It will be an important source of information for my study in all aspects mentioned above. If the information sought for is not detailed enough, the authors provide a very extensive reference list for more in depth research. The author, a researcher at Oxford Brookes University, was assessed as a consultant to collect information about three species of galago ( The findings confirmed a stable population of each species in the area and the author provided recommendations for future conservation efforts. The study confirmed to the validity of This was made from direct morphological comparisons with But data on habitat preference, and especially vocal repertoire, played a major part in the confirmation. Even though the study was relatively short, lot of valuable information has emerged from it. Beside important data in forms of vocal recordings, biogeography and morphology, the author also managed to collect tissue samples for genetic analysis. 
The author, researcher at Oxford Brookes University, reviews the current knowledge of nocturnal primates, with special emphasis on Galagidae, taxonomy, social behaviour and social systems. Each topic is elaborated on in great detail summarising research efforts and put them in an evolutionary perspective. This paper really reaches its goal of \"...leading to new perspectives on their [nocturnal primates] speciation, social behaviour, and conservation status\". The authors, researchers at Oxford Brookes University, review the current knowledge of the behavioural ecology of African Loriformes. Compilation and summaries", "label": 0 }, { "main_document": "individual and their personality. They determine the choice of the purchased product however from VOSM and HPA point of view these features apply to a group of students rather than individuals. Group of students might be described as adventurous, risk taker, well-balanced, open minded, relaxed and happy in general, and managers should design and market their products considering these characteristics. Attitudes are very subjective matter. Middleton and Clarke (2001) stress that they are 'attributes of individuals' nevertheless they still have to be taken into account, be understood and monitored in a way by marketing managers. Improving modern techniques will enable measurement of these attitudes in the future. Communication filters are being discussed next: perceptions, learning and experience. Perception can be described as an insight, the way students see and understand things and according to Middleton and Clarke (2001) it is formed through learning and experience and is influenced by reference groups and age. The information exposed by marketing managers might be understood by consumers in a different way from the initial intention. 
That is why Visitor and Outreach Services Manager at SP has to understand motivation of students and all elements of process number four from the consumer behaviour model described earlier in order to achieve positive perception. Learning is of high importance in case of students and SP is indeed offering products at different curriculum levels (SP, 2005) which is a huge advantage. Purchase outputs link to all the processes discussed earlier. Process number five is the action, the purchase itself-visiting the Scottish Parliament. Process number six links to experience that is the feelings and memories, the outcomes from the visit. The purchase is highly influenced by motivation and thus links directly back to consumer characteristics. If presentation of the SP and its design is appealing to students they will have a high motivation for visiting it. Furthermore if they enjoy their learning experience and will have positive feelings they will return to SP and will recommend it to their friends and families by 'word of mouth'. Linking this idea back to the services offered there is an issue of the cr Students after having a positive experience at the SP might encourage and motivate their parents including smaller members of their family to revisit on a weekend and one could assume that they would appreciate if the cr Challenge for both HPA and VOSM in the Scottish Parliament is to gain loyalty of existing visitors and attract new potential segments. Next the popularity of the destination itself is being examined and the advantages of the services offered at the SP. Scotland is a beautiful part of United Kingdom and Edinburgh is commonly called \"The Athens of the North\" (Visit Scotland, 2005), a beautiful historic city with a lot to offer. Festivals and events attract visitors from around the world all year round. 
The ancient castle overlooks the city and is the most popular attraction (visited by 82% of tourists) according to the Edinburgh Visitor Survey carried out by Lynn Jones Research Ltd. (May 2004 - April 2005), an independent market research agency based in", "label": 0 }, { "main_document": "adaptation of crash testing, simulating a total of 700,000 finite elements including the vehicle, passengers and airbag. Birmingham has been using these models to simulate pedestrian collisions, and so increase the safety of the car with regard to humans outside of the vehicle. The importance of this is not yet fully reflected in NCAP tests, but digital simulations can allow manufacturers to design and supply safer vehicles for everyone. Computers are also improving results in physical crash testing. Rapid-firing digital cameras have improved in resolution and sensitivity sufficiently to replace previous film cameras. This provides instant results and cuts out the expense of film processing, although significant capital is required for set-up costs. Film sequences can be further slowed by the computer's ability to interpolate frames between those taken by the camera, allowing more detailed analysis of crashes. Developments in true 3-dimensional displays are set to make a big impact, as virtual simulations can then be scrutinised from every angle and designers can better visualise the product under development. The use of the latest technology such as FEA is an order qualifier, but the resultant class-leading crash test results are order winners in themselves. Jaguar already incorporates FEA into its design process, so training or the expense of setting up is not required. The NCAP crash test ratings achieved in cases such as the Skoda Fabia highlight the benefits to safety offered by FEA. The increased sales resulting from such publicity cover the costs of setting up the facility, with massive savings in prototyping and testing too. 
As prototyping is kept to a minimum, raw materials are not used, and much less power is required to run the computers used for digital modelling. As no product is actually made, there is no associated recycling, although recycled paper may be used to make hard copies of some results.", "label": 1 }, { "main_document": "The efficacy of human rights law as a tool for women's empowerment has been vigorously debated over the past several decades. This article explores this debate, focusing on the practice of female genital mutilation (FGM). The extensive literature that has been devoted to this practice, which grew throughout the 1990s, and the wide-ranging impact that the practice has across a wide range of planes, from the legal to the political and the cultural, make it an ideal case study for an examination of both the advantages and drawbacks of using international human rights law in the struggle for women's human rights. The cultural and religious significance of FGM, its strong links to sexual and reproductive health and rights, and the critiques raised by postcolonial feminists in particular on the dangers of gender essentialism and an exclusive focus on violence against women have made FGM a contentious and vigorously debated topic. In the first part of this article, I briefly examine how the practice of FGM gained prominence in the international women's human rights movement. I summarize some of the contentious issues that FGM has raised, particularly the criticism that the FGM campaign essentializes both gender and culture. In the second part of the article, I discuss the role of UN treaty bodies and their articulation of FGM as a human rights violation through concluding observations on state reports, and explore whether such exchanges between state officials and committee members provide meaningful dialogue that has an impact on the ground. 
Finally, I argue that a current gap in the literature on FGM, particularly that written from the international human rights law perspective, is a failure to take sufficient account of the wider social constraints that restrict women's ability to enjoy the rights pronounced under human rights instruments. When the slogan 'women's rights are human rights' became a reality at the 1993 Human Rights Conference in Vienna, the recognition of gender-based violence as a major human rights issue also took off; it was the culmination of many decades of activism and lobbying. Although activists and women's rights advocates had written about the practice much earlier Parisi (2002), 581. Hellum and Knudsen (2006), 339. See, among others, Assaad (1980). The World Health Organization (WHO) defines FGM as comprising \"all procedures involving partial or total removal of the external female genitalia or other injury to the female genital organs whether for cultural, religious or other non-therapeutic reasons.\" According to the WHO, FGM has both immediate and long-term health consequences in addition to physical, psychosexual and psychological effects. \"What is Female Genital Mutilation?\" < Ibid. Questions such as whether FGM constitutes torture, whether a cultural and religious practice can and should be vehemently condemned as a human rights violation, whether the focus on FGM is a Western fetish, and what measures, if any, can be taken to deconstruct a deeply embedded practice are just a few examples of the contentious issues raised in the struggle against FGM. The Convention on the Elimination of Discrimination against Women (CEDAW) makes clear that culture, tradition and religion cannot be used as an excuse to deprive women of", "label": 0 }, { "main_document": "investigation, as it was recognised that it would be a particular problem once the small magnitude of the amplitude peaks was realised. 
A third possibility is that the driver and detector were not in the optimal positions along the length of wire under consideration in order to cause and detect forced oscillations in the wire. The driver and detector were not moved from their starting positions during the course of this experiment, and it was only once the investigation was concluded that this was realised. In order to produce a pure standing wave pattern, the driver must be positioned in an antinodal position, and the detector must be in a similar position in order to pick up any changes in the forced oscillations. The wire length used, 0.25m, would have resulted in movements of the driver and detector of only a few centimetres each time, but it might have been an important factor in the quality of results obtained. This might also explain the shallow curve to the data plotted on figure 10, as the antinodal positions would have moved away from the driver and detector before moving towards them again as the value of n increased and the number of half-wavelengths within the length of wire under investigation increased. Once the difficulties involved in the location of the harmonic frequencies using the first method outlined in section Although the Lissajous figures would have been very unstable due to the highly sensitive nature of the equipment during this investigation, and hence the frequencies would still have been hard to find, this would have provided a second set of data and allowed any possible anomalous results to be identified. It might also have proved to be slightly easier to find frequencies at which the Lissajous figures became stationary than to find the frequencies at which the oscilloscope trace underwent a very small amplitude increase. The experimental setup was generally sufficient for the investigations being undertaken, but the sensitivity of the frequency meter and oscilloscope sometimes became very problematic. 
In order to try to compensate for this sensitivity, the experiment could be carried out in isolation. Performing the experiment in a separate room would help to minimise the effects of background noise on the frequency meter reading and oscilloscope trace. The other problem encountered whilst carrying out these investigations was the small magnitude of the amplitude peaks in the oscilloscope traces observed at the harmonic frequencies, which often made the harmonic frequencies quite difficult to pinpoint. As mentioned in section However in general it is difficult to see how this problem could be addressed without substantial changes to the experimental method, although the use of a more precise audio oscillator might help. Improving the resolution of the oscilloscope might also go some way towards addressing the problem. By setting the length of a thin wire to a constant 0.60 This compared favourably to the value of The initial evaluation of the error in the frequency readings was judged to be correct, but the original evaluation of the error in L was judged to have", "label": 1 }, { "main_document": "The aim of the assignment is to design the most cost-effective waste heat recovery system. In order to satisfy the requirements, a suitable overall size of heat exchanger has to be selected first, and then a fin geometry is chosen for that overall size. After determination of these two criteria, the value of oil saved and the payback time of the system can be calculated respectively. In this assignment, two designs that give nearly identical payback times should be selected, and the final choice of design will go with the maximum return over the system's lifetime. In this design, the suitable overall size of heat exchanger is 400x400x500mm, and the two designs are surface 9.03 and surface 11.1. However, comparing the return over the system's lifetime between these two designs, surface 11.1 is the best design for the waste heat recovery system. 
The object of the assignment is to design a waste heat recovery system. In order to design the most cost-effective waste heat recovery system, payback time has to be minimized. In this report, two designs that give nearly identical payback times are considered, and the one with the maximum return over its lifetime is selected. [The basic design data (allowable pressure drop in the heat exchanger, flow rate, mean outside temperature, oil cost, heating system efficiency, duct pressure drop, conductivity of aluminium, overheads) and most intermediate values in the worked calculation have been lost in extraction; only the surviving figures are kept below.] The overall size is 400x400x500mm. Mass velocity in the core = Mass velocity in duct * (Face area/Free flow area); by geometry, a value of 0.453 is obtained. From Fig. 10.26 (Reference 4), with Pr = 0.707, St = 0.0113, giving 46.75. Both sides are identical and we can neglect the thermal resistance of the plate. Using a capacity ratio of 1, Energy/year = 11909 MJ, from which the value of oil saved follows. Pressure drops for the core, entrance and exit are taken from Fig. 5.4 (Reference 2); the duct contributes 8.16 Pa, so the total pressure drop = 56.5 + 2.86 + 8.16 = 67.52 Pa. From the manufacturer's data sheet (Reference 1) we see that the 76FW fan at 1425 rpm will meet the demand, with its capital cost and fan power determining the annual electricity cost. From the area and mass of aluminium in the heat exchanger, the cost of aluminium and the cost plus overheads follow. Payback time = Capital cost/(Oil cost savings - Extra electricity cost). The Appendix shows a spreadsheet of the heat exchanger design. Six different overall sizes of heat exchanger are considered. In order to find out which dimension of heat exchanger is the best one for the waste heat recovery system, surface 9.03 (Reference 3) is tested with these different sizes of heat exchanger. Finally, the best one is 400x400x500mm, which gives the shortest payback time among the six optional overall sizes. After determination of the heat exchanger's dimensions, the best fin geometry has to be chosen for the system. 
In the spreadsheet, four fin geometries are considered, which are surface 5.3, surface 6.2, surface", "label": 0 }, { "main_document": "all projects that offer a return greater than the cost of capital. Cash flows that occur during the year will be rounded to the closest time period and then discounted. Conversely, the cash flows rounded to the beginning of the year will be overestimated when discounting, and vice versa for those rounded upwards. 
The return available from an equal risk portfolio of securities traded in the financial markets. Accurate and reliable prediction of cash flows is crucial for a serviceable NPV result. Not only does the inherent risk in predicting future cash flows pose a problem, but so do the potential conflicts of interest between managers' performance measurements and cash flow estimates. Given that managerial and company performance is evaluated on a short-term basis, managers are likely to opt for projects that maximise their rewards in the short run. Similarly, the fear of deviation from forecasted results may lead to the rejection of acceptable projects. The choice of a discount rate is another area subject to judgement, especially if the firm is not quoted. The weighted average cost of capital (WACC) is the correct discount rate to be used for projects of average risk. The cost of debt is simply an average of the interest payments paid to debtholders; however, the cost of equity is more difficult to calculate. If the Capital Asset Pricing Model (CAPM) is used the firm's beta These latter variables are difficult to calculate, although the risk-free rate can be approximated using t-bills. To derive the beta for a quoted enterprise, the return on the firm's shares and the return on the market portfolio have to be known. In practice, industry betas are used as estimates for a firm's own beta, as the potential error in the estimation of a single beta is much greater than that of a portfolio of securities in the same industry. the responsiveness of a security to movements in the market portfolio Problems are further accentuated if the firm is not quoted (not the case for CIP), as its equity beta has to be inferred from the average asset beta It is often assumed that a firm only holds riskless debt. In reality, firms hold risky as well as riskless debt, and assuming a debt beta of 0 would not provide an accurate estimation of the firm's asset beta. 
Another way to obtain the cost of equity is by computing the PV of the firm's expected dividend payments, but then again an estimate of the firm's dividend yield and growth would be required. Weighted average of the equity beta and debt beta. Once a firm's cost of capital has been attained, it has to be adjusted upwards for projects of greater risk than the average-risk project and vice versa. Subjectivity again arises in the determination of how risky a project is and the resulting arbitrary upward/downward adjustment. In addition, perfect capital markets are assumed in the way cash flows are reinvested: for NPV at the cost of capital, and for IRR at the IRR. The IRR assumption seems more realistic than the NPV", "label": 0 }, { "main_document": "Global mean surface temperature has increased by approximately 0.7 This increase is thought to have been mostly caused by human activity, principally the release of carbon dioxide and other greenhouse gases. There are, however, many other factors, both anthropogenic and natural, which could have an effect on climate. Attributing the observed climate change to each of these factors is a great challenge facing climate scientists. In this experiment a simple, zero-dimensional, two-layer spreadsheet model was used to investigate the effects of different radiative forcings on global mean temperature. The model allows the climate feedback parameter Y to be varied, and estimates of the ten most important radiative forcings can be included, plus one user-defined forcing. Each forcing can be applied with a scaling factor to modify its effect. The model calculates the combined forcing for each year between 1850 and 2006, and from this the temperature anomaly relative to the 1961 - 1990 average and also relative to the 1850 - 1879 average (in this exercise the 1961 - 1990 average will be used for all comparisons). The equilibrium temperature change for the relevant forcing at each year is also calculated. 
The model produces plots of the combined radiative forcings, the global mean temperature anomaly relative to the two stated time periods, with each plotted alongside the observed temperature anomaly, and also the model vs equilibrium temperature change for each year. The effects of different radiative forcings were investigated by scaling each to 1 (switched on) or 0 (switched off), in different combinations. For all these different combinations, the climate feedback parameter Y was left constant at a mid-range value of 1.3 Wm With only the well-mixed greenhouse gas forcing switched on, the temperature anomaly rises slowly at first, then more rapidly from the 1960s onwards. The temperature change is too great over the whole period, with the anomaly being slightly too high from 1970 onwards, and substantially too low for most of the preceding time. The increase is steady, with none of the small-scale fluctuations that are seen in the observed data. The rate of increase after about 1960 is similar to that in the observed data. Adding all other anthropogenic forcings except the indirect aerosol effect has only a small effect. However, the fit is slightly improved, particularly from 1950 onwards. If the indirect aerosol effect is also included, the fit worsens again, with the temperature anomaly becoming too high until 1940, and too low in the years after 1995. The well-mixed GHGs have the most effect of all anthropogenic factors; removing them causes the temperature anomaly to fall steadily throughout the whole time period. See Figure 1. The natural forcings, from solar fluctuations and volcanic eruptions, provide more small-timescale effects, with no overall trend through the time period. The smaller fluctuations do not seem to fit very closely with the fluctuations in the observed data. The negative anomalies due to volcanic activity are too large in several cases. 
Including all the forcings, the general pattern of increasing temperature anomaly is reproduced, although the model anomaly is above the", "label": 1 }, { "main_document": "Arguably, one of the reasons Hua Guofeng failed to hold on to power was that he never had it in the first place. His initial position was inherently unstable, taking on the legacy of Mao in an atmosphere such as this. Maoism had been steadily criticised, with the people wanting something else completely - a return to 'normal' politics. Apart from this, Hua quickly lost the support of his few allies (especially the Gang of Four, now arrested). The entire economic equation that he applied to the nation, too, was ultimately wrong, with mini 'leaps' and other developmental plans doing little to raise the living standards of the people, while Deng Xiao Ping promised more. With Deng's star rising, it seemed evident that the latter was simply more adept at the political game, and more able to manipulate power in order to gain authority. A relative newcomer to the Beijing political scene, Hua Guofeng had survived the Cultural Revolution to become party first secretary of Mao's native Hunan province in 1970. He gained national attention in 1971, when Mao nominated him to serve on a panel assigned to investigate the conspiracy and death of Lin Biao. With Mao's strong backing, he was elevated to the politburo in 1973. Two years later, he was appointed public security minister and sixth deputy premier. A competent administrator who lacked strong factional ties to either Deng or the Leftists, who enjoyed Mao Zedong's personal confidence and who lacked a powerful organizational base of his own, Hua made an ideal compromise candidate. His unexpected designation as 'acting premier' broke the stalemate. 
In April 1976, Hua was appointed first vice chairman of the Central Committee and premier of the State Council; the Tiananmen disturbance was labelled a \"counterrevolutionary incident\" and Deng's dismissal from posts inside and outside the party was ordered. 'As Mao's health continued to deteriorate, Hua visibly began to grow in political stature and self-confidence and this proved to be an obstacle for Hua's four radical rivals' (Baum, 1994). Events turned in his favour: when the earthquake struck in 1976, Hua took personal charge of the relief effort and managed to generate a great deal of favourable publicity for himself. However, position and power did not necessarily mean the same thing. Of all China's leaders, Hua Guofeng faced the biggest challenge in responding to the legacy of Mao. He had, after all, been a beneficiary of the Cultural Revolution - he owed his very position to radicalism. He had also been a close ally of the Gang of Four during this time. And his sole claim to leadership was that Mao had personally blessed his succession. How could Hua criticise Mao or the Cultural Revolution without undermining his own position? Even criticising the much-despised Gang of Four was dangerous for Hua, given his own Cultural Revolution experiences. Hua's position, then, was an inherently unstable one. Unwilling and unable to move away from the Maoist past, Hua instead turned to old tried (but not necessarily tested) means of cementing his position. Hua argued Mao had", "label": 1 }, { "main_document": "on pages 2-3 of the sport section, it was coded \"2\". A code of \"1\" was established if the photograph appeared anywhere else in the newspaper (this represented the least prominent location). To reduce confusion, when referring to \"a page, or a number of pages\", the researcher implies that both sides of a single leaf/page of the newspaper are analysed. 
Hence when a photograph portrayed a male/female athlete in an \"action shot in the sport setting,\" a code of \"3\" was given, when the athlete was portrayed in a \"still pose, but within the sports setting\", it was coded \"2\" and when an athlete was portrayed in \"a still pose shot in a non-sport setting\", it was coded \"1\". In both categories regarding the \"type of coverage\", a high coding score represents more prominent coverage of the athlete as well as \"positive\" portrayal of the athlete (e.g., in their natural setting). As these were objective methods of analysis, I simply counted and tallied the number of times each theme occurred against the coding scheme. Any ironic/doubtful content, which could not be located within these predetermined categories (e.g., content which could not be accounted for) was coded \"nine\" and excluded at the data analysis phase. This was an appropriate measure as there is a tendency for the researcher to code such ambiguous content subjectively (Berg, 1998). The total amount of time spent coding was 5 hours. A content analysis of the 60 photographs, randomly drawn from a sample population of 296 sports photographs, reveals that through their \"biased\" coverage, the media show sport to be the generic preserve of males. For example, this study found that male athletes received more photographic sport coverage (75%) when compared to female athletes (25%) across 56 newspapers. This difference is illustrated in Table 1. Consistent with the latter view, it is also evident that more male sports athletes received prominent coverage e.g., front page coverage both above (31%) and below the crease (22%) of the main newspapers, in comparison to female athletes who received no front page coverage in newspapers. A similar, yet slightly less prominent difference was also revealed in the sports section of each newspaper. 
For example, compared to 7% of female athletes' photographs (1/15), almost half of the male athletes' photographs (47%, or 21/45) received front-page coverage in the sports section of the newspaper. This finding is in keeping with previous literature, which suggests that the sport media works to reinforce and perpetuate hegemonic masculinity in sporting contexts (Kane and Disch, 1993). These trends are illustrated more clearly in figure 1. There was also a clearly biased portrayal of male and female athletes. For example, whilst 71% of male athletes' photographs showed them \"in action\" or \"on the pitch\", no such photograph portrayed female athletes in this light. Instead, the large majority of female athletes' photographs showed them \"posing\" either in (60%) or off (40%) the sports setting. Only 4% of male athletes were shown \"posing\" off the sports setting. This overwhelmingly stereotypical portrayal of females (Creedon, 1994) In conclusion, it is plausible to", "label": 0 }, { "main_document": "represent the growth of the Japanese economy since the end of the Second World War (Insullal. 1968: 753-777). Moreover, the pace of dietary change, which has accelerated to varying degrees in different regions of the world, is explained by the work of Popkin (Popkin 1993: 138-157). Harris (1986: 47) indicates that diet tends to be related to religious factors. My religion is Buddhism, which is one of the major religions in Japan. Buddhism is often confused with veganism, although only a small number of extremely devout Buddhist priests voluntarily deprive themselves of all animal food. Thus, Buddhists can eat animal flesh as long as they are not responsible for the termination of the animal's life (Harris 1986: 23-24). Moreover, Farb and Armelagos (1980: 35) suggest that many people eat butter and drink milk in some countries that are largely Buddhist; therefore, it is not only the Japanese who consume meat among Buddhist countries. 
According to the food diary, my meat consumption is clear, including kebab, sausages and curry as well as sushi. Food shopping, living alone and living outside Japan are all first experiences for me; therefore, buying take-away food to eat at home has been one of my food habits since university started, because there was no necessity to buy food myself in Japan. The shopping place I use most frequently tends to be the town centre, especially Sainsbury's, one of Britain's largest supermarket chains, which has a wide variety of goods available to consumers. Beardsworth and Keil (1997: 32-46) suggest that the supermarket itself may be considered one of the most successful outcomes of the development of modern systems of food production and distribution, indicating the extent of control over quality and the reliability of supplies compared to those of the past. The development of the modern food system in the West is considered to be due to the long-distance transportation of food products, which has been improved by the emerging international agricultural specialisation between a number of nations with close economic ties to Britain. Such trade has shaped international relations (Beardsworth and Keil 1997: 32-46). However, the reason I rarely use other large shops in Reading is that they tend to be difficult to access compared to Sainsbury's, which is located in the centre of town. Furthermore, seasonal foods tend to be low in price while relatively high in quality and quantity; therefore, my food consumption in Britain is identified with the \"seasonal eater\" rather than with long shelf-life foods, such as ready-to-eat tinned or frozen foods, for example tinned spaghetti and pasta sauce, or frozen pizza and pasties. However, the processes of preservation, including the canning and freezing of foodstuffs, were the most significant step in the development of an industrial cuisine. 
Moreover, the use of machines in the production of industrial food is also identified as an important development in food preservation (Goody 1997: 338-357). Moreover, there are two reasons why I rarely use long shelf-life and ready-to-eat foods; one reason is consideration of future pathology, which might relate to individual food consumption. Fairweather-Tait (2003: 1709-1727) suggests that", "label": 0 }, { "main_document": "Molecular modelling is an aspect of chemistry made possible through the use of IT. IT opens the field of computational chemistry, which very quickly has become the mainstay of many chemists across the world. Certain computer software not only allows for the design of complex chemicals but also for their manipulation in three dimensions and for several energy optimisation routines to be applied. For this aspect of the course, the software package Cerius The SGI Indy line of computers were introduced as 'cheap' Unix workstations capable of 2D and some 3D applications. They have video cards which update the screen very quickly, and so the machines seem to run fast and smoothly. Unix has many security features which make it the operating system of choice for many professional and academic applications. This makes them ideal for computational chemistry uses. Computational chemistry applications utilise many empirical and semi-empirical routines designed for the 'realisation' of entered chemical structures. The routines allow for the calculation of steric energies, optimisation of structures, measurement of bond angles, lengths and degrees of torsion, calculation of charge distribution, intermolecular interactions (hydrogen bonding, van der Waals interactions, electrostatic energies) and many other processes which relate to structure. The 3D rendering aspects of the visual environment allow for the investigation of all conceivable conformations of molecules, provided alongside calculations of the energies of each conformation. 
This allows you to find heats of formation and steric energies for chemical models, which can give the user a feel for the stability of their designed model. For this course, I created, manipulated, measured and optimised several organic molecules including caffeine, E/Z-dichloroethene, 1,2-dichloroethane, isoflupredon (a corticosteroid) and syn/anti-dimethyldecalin. Through the application I also designed a complexing agent for removing Ni Molecular modelling is based on statistical semi-empirical methods. This means that the information was partly derived from experimental evidence and partly from theory. Bond angles and lengths are obtained experimentally, and related to electronic configurations. Data concerning the elements is collected and filed in a database which the chemical software can access in real time when performing the calculations. The calculation of minimum energy is achieved through a combination of optimising the bond angles and bond lengths by adjusting the coordinates of the atoms step-wise to obtain a conformation known to be stable. After each spatial adjustment, many calculations are performed to obtain the energy of the state, and the energy is compared to that of the previous step to see if an improvement was made. Calculations performed involve the interaction between atoms and electrons, bond energies, torsions and strains, van der Waals and coulombic potentials. The variables provided to the maths engine are taken from the database and are specific to the identity of the atoms in question. A potential drawback of this method is that the results of the maths engine can only be as accurate as the variables provided to it (taken experimentally), and the calculations are limited to the parameters provided. The calculation of the steric energy involves several energetic contributions, The constants involved in the calculation, k Cerius Upon start-up the user is", "label": 1 }, { "main_document": "and organise the different workshops. 
Influenced by historical materialism, Hourmouziadis (1979: 89-90) envisioned an organised mass production system whereby Halstead (1993: 606-608) has challenged an earlier interpretation of craft specialisation of Spondylus ornaments and has suggested that the uneven concentration of finished Spondylus artefacts was the result of ritual deposition involving deliberate breakage and burning by socially differentiated households. By comparing these finds with additional sites exhibiting Spondylus ornaments, one may conclude that Hourmouziadis' and Halstead's scenarios of social differentiation are rather speculative. At LN Sitagroi the Spondylus and metal artefacts showed a non-restrictive distribution, whereas the presence of damaged and burnt shell ornaments can be seen as a wider practice occurring at many more North Aegean sites (Nikolaidou, 2003a: 359, 340). It therefore appears reasonable to agree with Souvatzi (2000: 119-200), who emphasises that the area X8 at Dimini yielded tools appropriate for shell ornament manufacture, houses with manufacturing debris were abandoned, and the burning at Dimini pertained to whole structures rather than just to shell artefacts. While the uneven excavation at Dimini may have biased our existing understanding of artefact scatter, it would appear that Dimini played an important role in the Spondylus production and exchange network whereby craft specialisation occurred at community level (Andreou Although there must have been separate production areas for metalworking, there does not appear to be clear archaeological evidence for such workshops. At Sitagroi, there have been sherds with copper deposits indicating the presence of crucible melting for casting, but there have been neither moulds, ingots nor slag that could hint at large-scale metal production, nor conclusive evidence for a furnace installation able to withstand high temperatures (Renfrew and Slater, 2003: 303, 305-307, 315). 
As far as the output of rare goods is concerned, the fact that Sitagroi revealed the largest quantity of Neolithic/Chalcolithic shell bracelets and amulets in the Aegean could be the result of the wet and dry sieving employed at excavation (Nikolaidou, 2003a: 338) rather than convincing evidence of an export industry. The contemporaneous and closely located site of Paradeisos in Thrace yielded only three worked Spondylus remains and no Spondylus beads (Reese, 1987: 127-128), indicating that there was not a pan-Aegean imperative to organise shell production for export. The rarity of metal finds clearly suggests a very low production output (Demoule and Perl Regarding circulation, shell and metal artefacts would appear to have circulated in only small quantities over long distances (Perl The fact that Spondylus artefacts produced in the Aegean reached central Europe (Renfrew, 1973: 186) must not lead to generalisations about other kinds of rare goods. Rare goods could not have easily changed hands, since their rarity made them extremely valuable; as a consequence one is less likely to justify 'circulation' than 'exchange'. However, it remains difficult to specify the exact exchange mechanisms in operation. Although the similarity of the gold strip found at Zas cave on Naxos to the Varna culture seems to indicate connections between the Cyclades and the Balkans (Demoule and Perl There are many uncertainties associated with the notion", "label": 0 }, { "main_document": "Suspicious Activity Reports Regime: The SARS Review, 2006, pp.13 Webb, L., A Survey of Money Laundering Reporting Officers and their Attitudes towards Money Laundering Regulations, 2004, Journal of Money Laundering Control, London, Vol.7, Issue 4, pp. 367 McCluskey, D., (op.cit.), pp.203 Practitioners are continuously seeking practical ways to reduce consent times. 
This can often be achieved by careful focus on the facts provided to the SOCA; \" This places an additional burden on practitioners, as time and resources are spent researching the details of trustees and sifting through complex transactions to provide useful and decipherable information for the authorities. ibid The all-encompassing nature Fears have been expressed that where professional advisers are obliged to report trivial cases of money laundering this places \" In practical terms, the \" see: supra, no.37 Institute of Chartered Accountants, Proceeds of Crime: Consultation on Draft Legislation, 31 May 2001, FJB/T: 1267-5, pp.12 Webb, L., (op.cit.), pp. 367 In the long term, there may be a decline in apprehension of the legislation and fewer SARs will be submitted. This would result in reduced disclosure by overworked professionals who refuse to file SARs which appear to be of little value to most businesses. This would in turn undermine the effectiveness of the legislation, which relies on practitioners reporting their suspicions. In 2005 nearly 95,000 SARs were submitted; almost double the number of reports from 2003 Therefore, it appears more likely that reporting will reflect the current trend and continue to increase. Increased reporting will require greater resources for the SOCA to deal with requests. Unless government funding increases to match the costs, there are likely to be further delays pending consent. It is extremely probable that practitioners and financial institutions will incur increased expenditure, as yet more time and resources will have to be devoted to compliance. Sir Stephen Lander, Review of Suspicious Activity Reports Regime: The SARS Review, 2006, pp.13 The POCA has created some confusion with regard to legal professional privilege (LPP) and disclosure. 
In Bowman v Fels It was also held that it could not have been the intention of Parliament that steps taken by lawyers to determine or secure legal rights and remedies for their client would involve them committing an offence of \" Bowman v Fels [2005] EWCA Civ 226 P v P [2003] 3 W.L.R. 1350, see: Dame Butler-Sloss P, at 14 Bowman v Fels, (op.cit.), at 244 Proceeds of Crime Act 2002, section 328(1) This ruling is helpful to lawyers as its \" Following the Bowman ruling the number of suspicious activity reports filed by solicitors dropped from 19,000 in 2004 to 9,600 in 2005 McCluskey, D., (op.cit.), pp.201 Sir Stephen Lander, Review of Suspicious Activity Reports Regime: The SARS Review, 2006, pp.13 However, the Bowman ruling that the common law LPP is no longer overridden by the disclosure requirement, is arguably The Court had already decided the case on the basis that litigation did not constitute an 'arrangement' under section 328. This creates a legal complexity for lawyers, as they cannot be certain of the exact law relating to LPP. Some", "label": 1 }, { "main_document": "costs. As it has been reiterated before, the capital punishment system can be justified on its own merits via the combination of three approaches: retributivist, deterrence and reformist. Furthermore, it has been justified that there is no other alternative punishment that can match up to the appropriateness, efficiency and effectiveness of a capital punishment system in relation to grave capital crimes. What is needed is the realization that different principles or justifications may be relevant at different points in imposing the death penalty in different cases. Beyond the belief that capital punishment has merit in itself, the capital punishment system must also be justified by other practical variables within the state it is implemented in. Simply put, the death penalty could be more justified in some cases and in some countries more so than others. 
It is a circumstantial concept which we should assess not merely on \"its internal workings but on the wider socio-political context in which it is set\". Instead there should be a higher tolerance and understanding of countries that have retained the death penalty driven by practical necessity and out of pure belief in the value of the system. On this note, the death penalty is further justified in practice when it is imposed by a legitimate state which applies the penalty justly and selectively against the worst crimes, fully secured in a just judicial system with the right to appeal. Duff, To conclude, the capital punishment system is justified theoretically to a great extent on its own merit. While we do not always assume that the actualities will be close to that ideal, the capital punishment system is justified practically like any other fallible penalty system that is capable of reform to near perfection.", "label": 1 }, { "main_document": "presents a problem when the information given out is not accurate. The media tends to focus on 'scare-stories' that will sell the highest number of newspapers or magazines. As a result, there are many examples where inaccurate statements or advice on eating have been given as a result of an improper understanding of medical debate. For example, one newspaper ran the headline \"A healthy lifestyle might be the death of you\" out of the academic debate about the link between cholesterol and cardio-vascular diseases. Thus the media is responsible for people 'eating badly' to the extent that it has the power to create increased confusion among the population as to what this, and its consequences, actually constitute. A more formalised education than just food labels and dietary guidelines is required if the individual is expected to take responsibility for eating healthily. 
This can operate through a number of methods including incorporating lessons within the school curriculum and the distribution of health advice through leaflets or advertising, to give a few examples. Responsibility for this type of formalised education tends to fall to government and thus it is up to them to make the necessary budgetary alterations to be able to promote the healthy eating message more widely. They need to learn to cooperate with other sources of information such as voluntary organisations and academics in order that the information received by the public does not confuse or conflict, as this has been shown to be a major cause of distrust in health advice (Keane, 1997, p.181). However, education alone is unlikely to bring about changes in the population's eating habits, unless coupled with policy change. Robertson It is therefore unfair to expect individuals to alter their eating habits just by being provided with information about what is healthy or unhealthy. This is the point at which, once again, government intervention is crucial in imposing policy changes that operate in line with health education. Although it is important that national governments cooperate with global agencies in trying to tackle these problems of nutrition, implementing policy does in general have to be ultimately the responsibility of government. Organisations like WHO cannot compel governments to develop nutritional plans of action but they do have an important role in developing and assessing strategy, for example by encouraging multi-sectoral involvement, something proven very successful in Finland. Policy agreement across all sectors is far more likely to be successful in dealing with 'eating badly' habits because it avoids the consumer confusion that comes with contradictory claims. 
These need to include health ministries, education authorities, those responsible for policy on food supply and pricing, marketers and advertisers, and voluntary organisations; in short, all of the consumer's sources of information. Structural change might be required within government so that this type of multi-sectoral involvement is viable, allowing responsibility to be less fragmented and confusing to the public. While it is ultimately the individual who must choose what to eat on a daily basis, this decision is not uninhibited, instead being one that takes into consideration a number of different factors.", "label": 1 }, { "main_document": "to give authority to the other sector (such as through privatization and deregulation) in the 1980s to seek or fight for competitiveness. Cerny, 1990, pp.237 Cerny, 1997, pp.273 In addition, the concepts of the Developmental State and the Competition State are mutually exclusive. The former emphasizes the importance of state intervention in economic development, while the latter claims that less state intervention can make the national economy more competitive. These two opposite ideas should not exist in the same period of time. In fact the Developmental State can be seen in the early development of East Asia. But the concept of the developmental state is meant to explain the success of economic development in East Asia. When the states faced the economic recession caused by globalization, the developmental state concept became outdated. Indeed, the states were transformed into another form. The following sections show the reasons for and the manner of the transformation of the developmental state in East Asia, and conclude that the competition state has formed in the current situation. In the early stage of development, most of the East Asian states' leaders had a commitment to developing their economies. One of the best examples is the founding father of modern Singapore, Lee Kuan Yew. 
In the 1960s, he committed to developing his state's economy by imposing strict laws to prevent corruption. Although these strict laws kept business from shaping state policy, Singapore actively worked with businesses, especially foreign investors. It provided superb infrastructure such as telecommunications and gave subsidies such as tax holidays to multinational enterprises in order to attract foreign capital. Other East Asian states also shared this approach. The Japanese state (including the Liberal Democratic Party and the bureaucracy) closely collaborated with the large industrial conglomerates (keiretsu) to drive economic growth; this relationship is known as the 'iron triangle'. Because of the legacy of Japanese colonial development, the Korean and Taiwanese states also had close relationships with the large business groups (chaebol) and with small and medium-sized companies respectively. Korea provided state-directed credit to increase private investment and create big industrial conglomerates, while Taiwan gave tax credits to the private sector. Furthermore, Malaysia, Indonesia and China also followed the Japanese model because of that country's economic success. But we should not ignore that some countries, such as Thailand and Hong Kong, are exceptional cases. These two states did not have close relationships with business groups, because the former has a culture of low social cohesion and the latter's colonial government held a 'positive non-intervention' ideology. But their economies were still successful, especially Hong Kong's. Therefore, the concept of the developmental state is not applicable to all East Asian states. W.G. Huff, 'The Developmental State, Government and Singapore's Economic Development since 1960', So the problem with the concept of the developmental state is its failure to explain the diversity of state intervention in East Asia. 
In fact, this concept is described as institutional fetishism which overemphasizes the domestic state capacity but ignores the other factor in the developmental policy. Jayasuriya", "label": 0 }, { "main_document": "increases on a regular basis. Given the impact of discount booking from travel agencies and internet selling, consumers are more price-sensitive and expect these to fall (Deloitte, 2007). On the other, the development of hi-tech, internet and telecommunication, enable consumers to access full information on products and services, read reviews, and compare prices (Deloitte, 2007). The combination of these presents a high bargaining power of buyers. Nonetheless, this is counterbalanced by the strong growth of RevPAR, room occupancy and total industry revenue in the region (Appendix 2). Moreover, the new passport requirements for American and Canadian travellers that went into effect in January 2007 (Travel State Gov, 2007) may discourage outbound trips to the Caribbean and Mexico, and encourage domestic getaways, thus additionally increasing room demand (Ernst & Young, 2007). An added indication of strong hotel demand is the region's significant share in the international tourism receipts and business travel (AH&LA, 2007) which further decrease buyer's bargaining power. Given the recent high and favourable conditions of the hotel sector, there is a substantial threat of potential substitute products and services (Ernst & Young, 2007; Hotel Executive Insider, 2006). The strong possibility of rising construction costs in Canada (HVS Intl, 2006), given the trend in the US, suggests and strengthens the probability of alternative sources of accommodation entering the sector and therefore posing significant threats. North American tourists, who account for the most lucrative segment (Reuters, 2007a), are increasingly seeking online information about other forms of accommodation, e.g. 
self-service camping, fully- and partly- owned holiday accommodation, timeshare arrangement, and holiday rentals (Hitwise, 2007). Already there is evidence in the industry of emerging alternatives with booming condominium hotels in North America and major players emerging with their own brands of mixed concepts (Lodging Hospitality, 2006). With recent developments and popularity of the 'green travel' concept, the usage of advanced video conference service appears to be another potential substitute for the sector, saving companies' time and costs as well as preserving the environment (Hotel News Resources, 2007), therefore intensifying the bargaining power of substitute accommodation and services (Reuters, 2007a). Suppliers have a medium bargaining power in the North American hotel sector. Given the industry's successful performance in 2006, its current high (Baird, 2007; Ernst &Young, 2007; Deloitte, 2007) and the overwhelming amount of private investment over the past decade, in addition to its cash flow generating ability and significant real estate value, the hotel sector will continue to attract relatively substantial project funding, which in turn suggests a low supplier bargaining power. However, this is counterbalanced by the double-digit increase in recent operational and energy costs due to rising environmental concern, property tax increase (Appendix 1) and escalating insurance costs (Ernst & Young, 2007). High unionization (Rushmore, 2006), especially in North America, is a source of increased supplier bargaining power, where in 2004, UNITE Consequently, this will greatly contribute to wage negotiation, compensation, employee welfare and have an impact on hotels' operating margin. 
In addition, an aging population, long working hours with lower pay than other industries and the lack of emphasis on staff training", "label": 0 }, { "main_document": "Polymers based on the silicon-oxygen linkage are the only major commercial polymers with an inorganic backbone. These polymers are produced by hydrolysis of an organo-dichlorosilane, followed by spontaneous condensation of the product silanediol with elimination of water. The final result is a long-chain polymer known as polysiloxane (silicone). If the reaction takes place at low concentration then cyclic compounds, cyclopolysiloxanes, are formed; this is due to the kinetic favourability for the two ends of the chain to react with each other rather than with another chain (at low concentrations). In the experiment, from the starting compound, dichlorodiphenylsilane, one can isolate the intermediate silanediol, as diaryl compounds are found to be more stable than their dialkyl counterparts. Condensation of diphenylsilanediol in dilute solution gives two different cyclopolysiloxanes; one of them will be isolated depending on whether an acid or base catalyst is used. See COSHH sheet To begin, a mixture of toluene (10ml), t-amyl alcohol (25ml) and water (100ml) was cooled in a conical flask (500ml) to ~10 To the vigorously stirred mixture, a solution of diphenyldichlorosilane (20ml) in toluene (10ml) was added dropwise over a 15-minute period. The temperature was held between 10 - 15 The reaction mixture was stirred for a further 10 minutes, and following this the crystalline solid was filtered off by suction. The solid was washed with water (100ml), then washed twice with 5% sodium hydrogen carbonate solution (50ml X 2), and further washed with water (100ml X 2). The crystals were then stirred gently whilst the vacuum was in operation. After an hour the crystals were easily moved and free flowing. Once dried, the yield and melting point were measured and an IR spectrum was obtained. 
To a \"Quickfit\" conical flask (100mL), diphenylsilanediol (5.0189 g) and 95% ethanol (50ml) were added; in addition, a solution of sodium hydroxide (0.4521g) in water (1ml) was added with stirring. The mixture was then transferred into a round-bottomed flask where it was heated under reflux in an oil bath, with magnetic stirring. The solution was heated under reflux for ~30 minutes and then cooled in an ice bath. The crystals were then filtered off and air-dried until free flowing. The product was then dissolved in a minimum volume of heated toluene (~70 The solution was then re-heated to take the newly formed crystals back into solution. The solution was then cooled again in an ice bath. The crystals were then filtered off, the yield and melting point were measured and an IR spectrum was obtained. The literature melting point for diphenylsilanediol is 201-202 All the peaks are in the range 7-7.5 ppm, indicating that the hydrogens are all in aromatic environments. Hydrogens are found in a ratio of 2:2:1. The peaks on the spectra show there are four C environments: phenyl groups bonded to each of the silicon atoms, and two phenyl groups bonded to one silicon atom. From the IR spectra the structures are fairly consistent; however, there are anomalous errors between the two. In the IR spectra of There were some unexplained peaks picked up in the IR spectrum, which shows that there were", "label": 1 }, { "main_document": "are freezers which are characterized by a change of state of a refrigerant as heat is absorbed from the freezing food. Heat from the food provides the latent heat of vaporization or sublimation. Common refrigerants used are liquid nitrogen and solid/liquid carbon dioxide. There are several advantages of using liquid nitrogen as a refrigerant, which are; However, the main disadvantage is the relatively high cost of the refrigerant (Fellows, P.J 2000). 
Potatoes, sprouts, apples, mushrooms, carrots and tomatoes were cleaned, peeled and/or chopped into halves as appropriate. They were also blanched (except the tomatoes) so as to destroy destructive enzymes. Most fruits and vegetables require blanching, although those susceptible to enzymatic browning benefit from inactivation of polyphenoloxidase. Indicator enzymes to establish blanching efficacy include peroxidase, catalase and lipoxygenase. However, the maturity stage of the commodity can influence the efficacy of the blanching. Blanching also serves other functions in addition to enzyme inactivation; however, the process is energy-intensive and may cause undesirable changes such as loss of protein and volatile compounds. Sodium metabisulphite at 0.1% and 0.5% was applied for 5 minutes as a pretreatment of the potatoes, sprouts, apples, mushrooms, carrots and tomatoes. This was done so as to prevent texture damage during freezing-thawing and to prevent enzymatic browning by excluding oxygen (Erickson and Hung, 1997). Although sodium metabisulphite was applied to the sliced and chopped fruits and vegetables, they underwent browning on addition of guaiacol. This is because the ability of sodium metabisulphite to prevent enzymatic browning depends on the concentration used with respect to the type of food product it is applied to. During the experiment, only a few drops of sodium metabisulphite (0.1% & 0.5%) were used throughout for all the food products. During the freezing process, different parts of the product pass through the various stages at different times. There are three stages of temperature change; Sodium metabisulphite was used to treat the potatoes, sprouts, apples, mushrooms, carrots and tomatoes. Sodium metabisulphite or calcium oxide is added to blancher water to protect chlorophyll and to retain the colour of green vegetables, although the increase in pH may increase losses of ascorbic acid. 
When foods are correctly blanched there are mostly no significant changes to flavour, but under-blanching can lead to off-flavours during storage of frozen foods. There are several factors which affect freezing rates and times, which are; Planck's equation is used to estimate the freezing time of foods: , where; Freezing time for the blast freezer: Freezing time for liquid nitrogen: To maintain the quality of frozen foods it is essential to maintain their temperature until final use. Therefore, it is important to select the correct temperature for the expected period of storage, and during this storage period the following hazards to quality must be avoided. Increase in temperature (both during storage and the processes of loading and unloading). Physical damage to product or packaging during the course of storage or handling. A low relative humidity in the cold store (if the product isn't packed). Contamination of the product by foreign bodies; this can be avoided by an airtight and mechanically resistant packaging. Ensuring the design of the cold store", "label": 0 }, { "main_document": "among participants. Moss argues that this is often because certain words are more commonly linked, for example 'cat and dog'. Aitchinson also notes that 'two types of link seem to be particularly strong: connections between coordinates and collocation links' (2003:101). This was true with my results, although antonyms featured highly. Although it is undeniable that there are patterns in the responses of participants, it is important to note that these findings merely help to provide 'a general framework' (Aitchinson, 2003:101) of how words are linked within the lexicon. Perhaps more useful is the technique of priming, which looks at how closely words are associated within the lexicon. Priming measures how quickly participants notice words which are/are not associated with the sentence. Field (2003:17) uses the example 'We saw a camel at the zoo .... fosk - bank - lidge - hump'. 
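For reference, the freezing-time relation the text refers to (usually spelled Plank's equation in food-engineering texts) is commonly given as t = (ρ L / (T_f − T_a)) · (P a / h + R a² / k), with shape factors P and R depending on geometry. A minimal sketch follows, with all parameter values assumed for illustration rather than taken from the experiment:

```python
def plank_freezing_time(rho, latent_heat, t_freeze, t_medium,
                        thickness, h, k, P=0.5, R=0.125):
    """Plank's equation for the freezing time (seconds) of a food piece.
    Shape factors: slab P=1/2, R=1/8; cylinder 1/4, 1/16; sphere 1/6, 1/24."""
    dT = t_freeze - t_medium              # temperature driving force, K
    return (rho * latent_heat / dT) * (P * thickness / h
                                       + R * thickness ** 2 / k)

# Illustrative (assumed) values: a 2 cm slab of vegetable tissue
# frozen in a blast freezer at -30 C.
t = plank_freezing_time(
    rho=1050.0,         # density of the frozen food, kg/m^3
    latent_heat=250e3,  # latent heat of crystallisation, J/kg
    t_freeze=-1.0,      # initial freezing temperature, deg C
    t_medium=-30.0,     # freezing-medium temperature, deg C
    thickness=0.02,     # slab thickness, m
    h=30.0,             # surface heat-transfer coefficient, W/(m^2 K)
    k=1.6,              # thermal conductivity of frozen food, W/(m K)
)
print(round(t / 60, 1), "minutes")
```

Lowering the medium temperature (as with liquid nitrogen) or raising the surface heat-transfer coefficient shortens the predicted freezing time, which is why cryogenic freezing is so much faster than blast freezing.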
The participant has to press a button every time they see an actual word. The reaction time to 'hump' will be quicker than to 'bank' because it has already been triggered by the semantically related 'camel'. This test is more useful than word association as it looks at how closely words are associated before activation occurs and also how long the activation lasts. This gives a deeper insight than previous word-association experiments.", "label": 1 }, { "main_document": "Before this project I felt that the factors needed for a successful business were motivation, coordination, planning, opportunity, future prospects, teamwork, etc. Having completed this project I feel I have a better understanding of how these factors can help and work to establish an effective team. We initiated our business plan by having many brainstorming sessions. As we had many envisioners in our team, we had a wide range of ideas about many products and services, from undemanding, easy and straightforward to very complex, high-tech and challenging. We subsequently researched each idea, looking at the engineering behind it and what new things could be done in the product to make ourselves different in the market or to come up with a different product. We had the good idea of holding a vote among the team to decide upon our business idea, which I think made people feel more involved in the project and in the planning process as well. I feel this was a major part of the success of our project. Our idea was to make desktop phones which can work on both an internet broadband connection and a normal telephone line. The motivation was basically to make phone calls absolutely free after a one-time investment. As all members of our team were electronic engineers, it helped us design the phone easily. Therefore, I have learnt that technical knowledge is a very important part of the evolution of a start-up business. Our team consisted of members with mixed skills. We had envisioners, enactors, leaders and followers. 
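The primed-versus-unrelated comparison can be made concrete with invented reaction-time data; the numbers below are hypothetical illustrations, not Field's results:

```python
from statistics import mean

# Hypothetical lexical-decision reaction times in milliseconds.
# 'hump' is semantically primed by 'camel'; 'bank' is unrelated.
rt_primed = [512, 498, 530, 505, 521]      # responses to 'hump'
rt_unrelated = [590, 612, 575, 603, 588]   # responses to 'bank'

priming_effect = mean(rt_unrelated) - mean(rt_primed)
print(f"primed: {mean(rt_primed):.0f} ms, "
      f"unrelated: {mean(rt_unrelated):.0f} ms, "
      f"priming effect: {priming_effect:.0f} ms")
```

A positive difference of this kind is what is taken as evidence that 'camel' has pre-activated semantically related entries in the lexicon.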
We all shared an electronic engineering background. There was good understanding and co-ordination in the team. We also communicated well and could easily decide what role each of us was playing in the team. We believed in teamwork, which I think made many problems simpler. Alex decided to lead the team, as he had the necessary skills to delegate work effectively. We had many productive group meetings at regular intervals, which helped us with the upper-level pitch. In those meetings we divided the work among ourselves and each member had to research the topic given to them, so that in the next meeting we could discuss it and try to think of what more could progress our plan. Our short meetings with our lecturer gave us numerous guidelines on parts of the business we had not yet considered or looked at to improve our plan. As I had the research and analysing skills, I was given the task of investigating and writing a report on the competitors and market entry barriers. I was also asked to find out the basic cost of outsourcing this product to India or China. To find this out I used the mentel, fames and other university-provided business databases and some search engines like Google and Yahoo. This project has therefore helped me to confirm my own skills, and knowing this will help me in future projects. In addition to this, I used to think of any ideas or factors which we", "label": 0 }, { "main_document": "story should be regarded only as an anecdote in The aim was not to transform Jim Hawkins's adventure, his hunt for Flint's treasure, but only to add another issue to it. The re-writing had to be incorporated into The part of the novel that I found is at the very end of Chapter Fourteen. It takes place a few pages after Jim has landed on the island and has witnessed John Silver murdering another sailor. 
The young hero realizes that his idea to leave the ship and go onto the island was a mistake and that he is probably going to be murdered by the pirates. At this point of the story, he is describing the place in which he finds himself. He mentions that he has \"drawn near the foot of a little hill\". Jim would discover what remains of Crusoe's stay on the island and then his quest for Flint's treasure would continue. In order to better define the scope of my project, I have added in the appendix a short epilogue of this new version of I presented Jim as the future writer of the life of Robinson Crusoe. However, the main focus of my work remained on the finding of Crusoe's habitation. Stevenson, R. L. Penguin Popular Classics. 1994. p 90. This re-writing had several aims. The main target was the shift in matters of point of view and narration. In this project, Defoe's hero becomes a very minor character since he is only mentioned in an anecdote. It follows a complex narrational process. The re-writing brings into play three actual writers - Defoe, Stevenson and myself - and two fictional writers: Crusoe as the writer of his autobiography, and Jim as the writer of his own adventure and as the writer of Crusoe's story. Two narrators are competing: Jim as a 'verbal' narrator, since he is still given a voice, and Robinson as a 'silent' narrator, since he is no longer narrating his story but his fictional autobiography, Defoe's novel, is still present in the reader's mind. Finally, there are two levels of narration: Jim's adventure and Crusoe's story. Not only is Crusoe no longer the one who speaks, but his story is told by someone else. Jim is the main narrator of Consequently, Robinson's world is seen by another person. Jim had to imagine, to make suppositions about the whys and the wherefores. In that respect, the rewriting had to conform to Jim's narrative style. I therefore studied the narration of For example, I kept Jim's use of the grammatical shifter \"here\". 
Even though he writes some years after the events and is no longer on the island, Jim uses the term "here" as if the events were happening as he tells them. I found this interesting and worth maintaining. Similarly, I borrowed some of Jim's words and expressions. Just before the passage into which I chose to incorporate my rewriting, Stevenson's hero sees his end approaching and writes "It was all over, I thought". Moreover, earlier in the novel, Jim writes "then
Koreans will give positive word-of-mouth when satisfied, though not the opposite when dissatisfied; further, they will not complain or switch companies. Although this may seem a bonus, it also demonstrates that gaining consumer loyalty is more challenging in Asian countries, and therefore aspects that appeal to Korean interrelations, such as increased formalisation of interaction (resulting from power distance), will be enforced. The d Further, Saint Fusion's logo is international and will not be written in logographic characters. Business consumers are well travelled (appendix 3), and westernization, the acceptance of foreign brands and international business targets (i.e. expatriates) also make this unnecessary. Pricing is always delicate when expanding into international markets, as culture heavily impacts price perceptions (Usunier & Lee, 2005). Saint Fusion will initially adopt an "intermediate geocentric inventive position" (Usunier & Lee, 2005, p. 325), as its first goal is to gain customer loyalty (recognized as challenging for companies entering Korea), be accepted and establish itself, explore the market and develop brand awareness. This may therefore, at first, require international management of pricing strategies, also because of remaining differences in currency value (appendix 2). As it becomes established, prices will be balanced with those found in the chain's international units. The guarantee that consumers in Korea can afford high prices comes from local competitors' pricing: their prices are 3 to 4 times those of independent bakeries (appendix 11); nonetheless, consumers still buy their products because they want them and can afford them. Location is a major decision, involving significant costs and determining business success (Bowie & Buttle, 2004); further, it is directly linked to target markets, their location, and consumer behaviour (Usunier & Lee, 2005). 
Saint Fusion's focus on business markets and its provision of premium products - which consumers want with convenience - indicates that its initial key locations should be close to office buildings in the financial centres of urban/gateway cities.
There is a need to clarify special terms and explain definitions from different authors in the literature. The Internet is defined by Peter Drucker (2002, cited in Turban, 2004, p. 3) as ' The Internet thus enables people to buy and sell through a worldwide network. This activity can be considered as conducting business on electronic grounds - electronic commerce (EC). The method of buying, selling, and exchanging goods, services and information via computer networks is called EC (Turban This retail is supported by supply channels. A distribution channel for travel and tourism, whether it relates to traditional travel agents or Web sites, is defined by Middleton (2001, p. 292) as 'any organized and serviced system, paid out of marketing budgets and created or utilized to provide convenient points of sale and/or access to consumers, away from the location of production and consumption'. However, other authors argue that this definition ignores some activities undertaken by distribution channels. These aspects are vital in the tourism industry, as customers need information for motivation to travel and for making a decision to purchase the tourism product. O'Connor and Frew (2002) and Buhalis and Laws (2001) stress its importance by stating that information is the 'lifeblood' of tourism, especially because of the intangible and perishable character of its products. Consumers need the information to fill the gap between their expectation and experience. Hotels need to use various distribution channels (O'Connor and Frew, 2002), including the emerging electronic ones. The role of information technology in tourism is of a high
He cannot mate according to his choice. He remains outside the law of parentage and kinship. Even his conscience is not his own." The picture is very grim and entirely inhumane: the slave is totally dependent on his master's generosity and kindness. Yet despite this black description, slavery in Brazil was abolished only in 1888, while the British had outlawed it as early as 1806. Why did it take so long for Brazil to finally introduce abolition? Was it because a different type of slavery was implemented, one which turned out to be more "humane"? Was the Brazilian colonial society so tyrannical and despotic that slave resistance was not powerful enough to bring about change? What factors enabled the continuation of slavery up until 1888? Bronislaw Malinowski, Numerous historians and researchers have closely examined the problems of assessing the severity of slavery in Brazil. Some, such as Chasteen, Levine, and Schwartz, are fairly critical of the oppressive and harsh treatment of slaves, while others - Foner, Freyre and Tannenbaum - seem to paint "Brazil as a veritable haven for blacks." The publications of these historians tend to portray conflicting images of the lives of the slaves; however, this arises from the multitude of factors and the intense complexity of the structure of society at the time. Nonetheless, the situation of the slaves was not as brutal and heartless as Malinowski's quote suggests; the Brazilian colonial society, which included the Catholic Church and the master class, did try to "humanize" the oppressions of slavery. However, the degree of the mitigation is debatable. John Charles Chasteen and Joseph Tulchin, The roots of Brazilian slavery trace back to the extensive sugar plantations at the beginning of the 16 After lying dormant for 30 years after the discovery of Brazil in 1500 by Pedro Alvares Cabral, the Portuguese crown finally became interested in its colony. 
The new settlers realised the economic value of sugar production and tried, unsuccessfully, to persuade the Indians to labour for them. The Indians viewed plantation work as designed for females and were ill adapted to performing it both psychologically and physiologically. They were accustomed to working independently and did not like the notion of exhausting work schedules. In addition, the natives suffered from numerous diseases such as smallpox and measles, which had been transmitted by the Europeans, who were more resistant to them. The consequence was a dramatic increase in the mortality rate amongst the Indian population, which was followed by famine. This in turn precipitated an economic crisis for the Portuguese, and so, to prevent the collapse of the sugar economy, which required a large workforce, Indians were officially taken as slaves and forced to work on the plantations. However the demand for labour
Since the directors are not willing to give up their own business interests and concentrate on the growth of Fresh Breath, the future of Fresh Breath at this stage is very unclear. Exiting is probably the best option, and we would recommend selling off most of the shares to new buyers. The directors would receive a large one-off sum from the sale and would still benefit from any further growth of the company. Moreover, they could concentrate on their own business duties without worrying about the future of Fresh Breath. Alternatively, if there are no buyers because the price of buying the company outright would be too high, they could sell off the formulas for both the toothpaste and the mouthwash to other brands and wind up Fresh Breath. This would bring the whole business to an end, and the directors could expect a smaller one-off payment from the sale of the formulas of the mouthwash and toothpaste. This move is not recommended unless Fresh Breath's business is declining sharply and any delay in exiting the market would cause a large loss of money.
Iraq opening oil sector: Strength versus Strong capacity of a firm is not necessarily a unique competence Can only be done once, with no point of review Over-simplification Not good at dealing with a complex and often paradoxical world (Brocklesby & Cummings) - However, while SWOT is more appropriate for smaller businesses, its nature also leads to its wide application, from project to nation, wherever a relatively quick analysis is needed. Absence of - Certain industries, such as mining, must consider the degree of risk to society their activities might entail. They may be examined against the standards of responsiveness to the expectations of society that the strategist elects. - What the executives Of all the components of strategic choice, the combination of resources and competence is most crucial to success (de Wit & Meyer, 1998). Provides Planners are left without indication as to where to SWOT remains rooted in (Panagiotou, 2003) It is an Duncan, Ginter & Swayne (1998) suggested a four-step model for assessing internal strengths and weaknesses After an initial survey of both strengths and weaknesses, the focus turned to Their four steps are surveying, categorising, investigating, and evaluating. SWOT's In this model, a firm would assess its capabilities based on their value (V), rarity (R), imitability (I), and organisation (O). SWOT serves as a stepping stone towards the implementation of the Balanced Scorecard and Strategy Formulation SWOT is implemented to develop the key performance indicators (KPIs) within the four main perspectives of the BSC. Combination of SWOT and Maxwell's six dimensions of quality (Storr & Hurst) I had briefly encountered the framework before but was initially not confident that there would be enough to say about this simple model, given its limited coverage in both the lecture handout and the seminar presentation. 
However, by During the research for this assignment, one of the questions that I found intriguing is that This assignment has left me with a genuine interest in the area of international strategies and strategic thinking, which I would like to explore further. I have found this form of assignment both enjoyable and intellectually stimulating and would definitely put more time into reading the relevant literature for my next reflective piece on Governance.
The first section of the essay will look at the main cosmopolitan ideas and the context in which scholars speak of their revival. The second part will argue that the transformative capacity of globalisation assumed by cosmopolitans, and taken as the impetus for political change, is not universally accepted as such. Realists or leftists would reject globalisation and stress that the great powers still dictate while international relations continue to be a struggle for power. In the third section, the essay will point to the dilemmas inherent in the moral universalism advocated in cosmopolitan thought. Whose morality are we to promote globally? In line with the ancient Socratic debate with the sophists, a realist would argue that might makes right. As Gray notes, it may be the case that "No single way of life exhausts the possibilities of human flourishing". The essay will conclude that cosmopolitanism seems to have developed its institutional dimension but has neglected its ethical and cultural dimension. Plato. John Gray. "Pluralism and Toleration in Contemporary Political Philosophy", What is cosmopolitanism? Briefly, cosmopolitanism is a normative liberal theory Irrespective of culture, nation or citizenship, people all over the world should enjoy the same access to basic rights, and have the same duties to humanity, simply because there is a universal human essence and there are certain universal truths. Any rational person under a 'veil of ignorance' would agree that life, freedom, security and the pursuit of happiness are universal human goods that must be protected and promoted. For instance, Kant pointed to humanity's inclination towards constant moral improvement (perceived in the disinterested sympathy of the beholders of the French Revolution, which gave birth to a feeling of solidarity). 
Nowadays, scholars believe that we are witnessing a revival of cosmopolitan ideas and feelings as a
Yali is a small islet in the Dodecanese with natural obsidian. However, in the FN settlement at Yali there was no evidence of mining or knapping the Yali obsidian, although there were Melian obsidian tools. This hints at possible ad-hoc collection of Yali obsidian by incoming travellers rather than systematic or organised exploitation (Sampson, 1988: 257-259). It is also interesting to note that the intensified sea-faring in the FN did not result in more specialised exploitation or export in the case of Yali. Concerning the volume of output, the even distribution of obsidian and honey-flints across different EN and MN settlements has been interpreted as a stable production of stone artefacts sustained by part-time specialists (Perlès However, it is possible that Perlès Regarding the circulation of lithics beyond the original production centres, from the EN there is clear evidence of long-distance circulation of obsidian and flint tools, andesite millstones, polished celts and chisels (Demoule and Perlès Since exotic raw materials such as obsidian, flint and jasper all came from different sources, which were also located far away from each other, they are more likely to have been acquired through exchange rather than direct procurement (Perlès Although this argument at first instance appears convincing, one cannot rule out direct procurement, as it is difficult, even for obsidian, to be specific about the exact exchange mechanisms. Renfrew (1973: 185) has
The meta-analysis was based upon thirty studies concerned with visual-spatial ability, published between 1960 and 1990. Ascher's study compared visual-spatial abilities in male and female artists and non-artists. The author found that artists were not more skilled in visual-spatial tasks and that there was not a smaller gender difference in mental rotation ability amongst artists. However, it was also found that males made fewer errors than females, which replicates previous studies showing a male advantage. The tasks were carried out by a total of forty participants: twenty (ten male and ten female) were recruited from the art departments of two undergraduate universities with similar admissions standards and considered artists. The remaining twenty, consisting of ten males and ten females, made up the non-artists group and were all undergraduates who did not study art or consider themselves artists. Participants took mental rotation tasks which involved matching three-dimensional shapes on a computer screen. Kozbelt found that artists outperformed non-artists on a series of four perception tasks, which included a mental rotation task. It was also found that males outperformed females on all tasks, but there was less of a gender gap amongst the artists. The study used forty-six volunteers, of whom thirty were art students: seventeen in their first year of study (five male and twelve female) and thirteen in their fourth year (four male and nine female). The remaining sixteen were classed as novices (eleven males and five females) and followed other degree courses. The mental rotation task involved the participant deciding whether twenty-four pairs of block figures, drawn in perspective so as to look three-dimensional, could be rotated to match. At first it seems difficult to reach any useful conclusions as to whether artists as a group have better spatial abilities than non-artists, due to the varying nature of the findings from these studies. 
It seems logical to expect the findings of Haanstra's meta-analysis to be the most conclusive, as it summarises the work of a number of studies in the area, as was the case with Ghiselli's study on aptitude testing. However, Ghiselli applied a validity constant to his work, whereas meta-analysis is a fairly new technique and remains unproven in the field of art research (Diket, 1994). In addition, Ghiselli had access to a much larger body of work; the study by Haanstra used only thirty previous studies, so it is much less far-reaching and therefore cannot be regarded as equally reliable. It is also important to note the criteria used for selecting the studies to be analysed. For example, studies were
As I previously mentioned, the fact that Kayano is Ainu works in his favour, but only to a certain degree. It is also a negative element of the book: owing to Kayano's social and political context, the reader is not given a balanced perspective in this 'unabashedly partisan portrayal of the Ainu people's history' (Howell, 1993: xii). He has total control over constructing and presenting images and portrayals of people. For example, throughout the entire book he portrays the Ainu 'as passive victims of Japanese aggression'. The repetition of the natural imagery could be another example of this. Kayano uses it to create an idyllic and harmonious image of Hokkaido and of the Ainu people. However, the extensive repetition could be considered romanticizing and over-exaggeration. Once again, no Japanese viewpoint is put across, so a balanced perspective is unattainable. The possibility that the author may be over-exaggerating and choosing to exclude certain information because of his desire to favour his own cause is something the reader needs to bear in mind. Howell, David in Honda, Katsuichi. (1993) Los Angeles and London: University of California Press, p. xii. Although there are negative aspects to 'Our Land Was A Forest', it is important to remember Kayano's key concern - making the reader feel empathy for the Ainu people - because if this were not achieved, the chance of gaining more support would be slim. The descriptive over-exaggerations, although they may be slightly misleading, are necessary to achieving his aim, because writing a 'dispassionate study of the Ainu' The book 'Our Land Was A Forest' sold extremely well in Japanese, so much so that it was translated into English and French to enable Kayano's message to reach a wider audience. 
Since the publication of the book, Kayano Shigeru has become 'a leading figure in the Ainu campaign for human rights, cultural preservation and recognition as indigenous people'", "label": 1 }, { "main_document": "type of prostitute, only that she hired out her body and was owned by Nikarete. Apollodorus reveals Neaira was a porne; she was the property of Nikarete, I believe she was better off then a lot of pornai as she was very attractive and was presented as the daughter of a free woman. As 'free' women they would have been worth more money than slaves. Carey also reveals this charade was beneficial as Nikarete could demand higher fees on the basis 'the girls were ruining their marriage prospects, especially when dealing with strangers, since she could represent the girls as virgins.' Barrett and Sommmerstein 1978:331 Athenaeus. 13.13.587e. Philetaerus writes 'did not La Hamel 2003:x Athenaeus. 13.587e Philetaerus writes 'and have not Isthmias and Neaera and Phila rotted away?' Demosthenes.59. Hamel 2003.18 Demosthenes.59. Demosthenes.59. Carey 1992:94 When Neaira was purchased by Timanoridas and Eucrates She had no control over her lovers, and gained no financial benefit. She was able to buy her freedom when they no longer required her due to her unusual circumstances when owned by Nikarete. Neaira developed close relationships, in the same way that a heatira would, and so could call on her clientele for help. The majority of pornai had one off encounters with their clients, so would have been unable to turn to them for support but Nikarete had developed long term relationships with her girls and clients, probably because she only had seven girls, In larger brothels it would be beneficial to have a number of men using the establishment, and it would therefore be unnecessary to develop longer relationships. Demosthenes.59. Demosthenes.59. 
Phrynion This emphasises how a pallake's status was superior to that of the porne: once Neaira had entered this new lifestyle, she expected to be treated with greater respect, a fact appreciated by men. She then became the pallake of Stephanos. The fundamental difference between a hetaira and a pallake is the housing arrangement. Theodote is a hetaira because men visit her in Although she may have moved in with Timanoridas and Eucrates, it was not an exclusive relationship with one man, so she cannot be classed as a pallake. As she was their slave she could be classed as a porne. She had no choice over living with them and had no control over her finances. Yet she was like a hetaira in that she had only two lovers and had the means to buy her freedom, a factor that will be discussed later. This highlights the difficulty of defining the statuses of prostitutes. Although Neaira was a prostitute for Timanoridas and Eucrates, her position is not clear. She can be defined as a porne or as a hetaira, and persuasive arguments can be made for each. It is such complications that have shown how the gradations can be blurred, but I feel that there is still a spectrum onto which each type can be placed, as Neaira's status, regardless of what she is called, does increase. It is the terminology that causes the greatest difficulty when trying to
This definition is currently accepted and has been used by the international community to depict the acts of violence which occurred on 9/11/01 in New York, 12/10/02 in Bali, 11/03/04 in Madrid, 7/7/05 in London, and 11/07/06 in Bombay. However, no legal definition has ever been given in any international legal instrument. These events reflect the recent development of non-state transnational armed actors and challenge the traditional legal understanding of warfare and armed conflicts. A new type of war has emerged: the asymmetric war, which pits a state against a group of private actors. The legal status of the actors of terrorism as an asymmetric war is the focus of this report. Do the actors of an asymmetric conflict observe the limitations on the means and methods of warfare set by the law of war? If not, are they entitled to the protection of IHL rules? Do legal principles apply to the perpetrators of unlawful acts of war? What are their rights and responsibilities? As IHL is made by and for states, the question of the legitimacy, if any, of the use of force by non-state actors is highly controversial. Indeed, what are the consequences of the lack of protection and responsibility of transnational non-state groups involved in an armed conflict with a state? How might an increase in this protection change the outcome of the war? Since only states can be parties to the Geneva Conventions, IHL does not apply to conflicts between a state and a terrorist group. The necessity of a legal and general definition must be based on the reduction of the concept of terrorism to necessary and sufficient conditions and variables. But then, one might wonder what the use and the implications of such a definition are when it clearly appears that an international consensus has been reached on qualifying the events from New York to Bombay as terrorist acts. The first section of this report will consider the controversial and ambiguous aspects of the phenomenon of terrorism.
The second section will focus on the problems raised by the Guantanamo Bay detention centre. The last section will explore the possibilities and difficulties of the codification of the concept of terrorism and will raise the crucial question of its potential consequences for the relation between IHL and Human Rights law. Despite the numerous legal instruments prohibiting acts of terrorism (The Hague Convention for the Suppression of Unlawful Seizure of Aircraft, 1970; New York Convention on the Prevention and Punishment of Crimes Against Internationally Protected Persons, 1973; International Convention Against the Taking of Hostages, New York, 1979; Rome Convention for the Suppression", "label": 0 }, { "main_document": ": 204) Seattle was the first coherent global protest, but social movements did not unite under a centralized entity; they were linked together in a network structure Should social movements concentrate more on the organization of a world forum, with a concrete, feasible program and agenda? Is that necessary in order to be heard and acknowledged? Ibid. 288 One has to wonder whether the coherence of social movements is desirable. In setting a specific agenda, the risk is that the movement should lose its insurgent nature and spontaneity; that a top-down structure should emerge, provoking the bureaucratization of the forum. The most dangerous pitfall to avoid is the recuperation of the movement by the hegemon. This can only be avoided by rejecting any structure, organization, or language used by hegemonic globalization actors and institutions. For example, social movements can be recaptured by the hegemonic discourse through the confusion that persists between them and NGOs. According to Rajagopal, NGOs do not equate to social movements since NGOs are 'institutional actors who derive their legal identity from the national systems where they are incorporated' Confusing NGOs with social movements narrows the latter's potential for radical change.
It reproduces the liberal categories applied to reality. 'NGOs are formed by English-language-speaking, cosmopolitan local activists who know how to relate to western donors (who provide most of the NGO funding) and write fundraising proposals, while social movements activists do not often have this power' Rajagopal (2003 : 259) Ibid. 261 There is a 'Western bias in the NGO world' since they are dependent on foreign funding, which gives a certain authority to the donor. Growing in size implies a managerial and bureaucratic shift in the structure of the NGO. As a result they lose their flexibility and ability to innovate. The Zapatista solidarity network analysed by Olesen is an original form of network, created after the indigenous Zapatista Army of National Liberation (EZLN) uprising in Chiapas on 1 January 1994, in response to the NAFTA trade agreement. The Zapatista solidarity network's first aim was to call for a peaceful solution to the conflict between the Mexican army and the EZLN. The network relies on international solidarity as a protection against the national authorities. Its 'information circuit' strategy is based on the production, gathering, processing and distribution of the information that will be consumed abroad through the Internet. Communication is the main tool of the Zapatista revolution because it is physical communication at the local level between the peasants, indigenous people and members of the network, and also mediated communication at a global level with any individual, NGO or social movement which is interested for various reasons in the Zapatista struggle. This horizontal and decentred network does not depend on a hierarchical authority. Leadership positions rotate, giving the impression of a 'vacuum of authority at the centre' Olesen (2004) Hardt and Negri (2006 : 85) This type of centrality distinguishes between core, peripheral and transitory actors.
The movement has never sought secession from the Mexican state, nor political power, nor sovereignty. Network relationships and democratic", "label": 0 }, { "main_document": "This report consists of a detailed analysis of the departmental profit and loss report, suggesting alternatives to cut costs or increase profit, taking into account the significance of the variance figures and percentages. A general analysis of the front-page P&L will be related to the departmental statements, and its consequences and influences set out. Finally, an evaluation of the Balance Sheet and relevant Ratios is also taken into account. A higher importance will be given to variable costs, as when it comes to decision-making, it is more functional to bear in mind controllable costs, since fixed costs are mostly altered by external factors which are beyond the hotel's control (Adams, 1997). The accommodation section is the department within the hotel which makes the highest contribution to the general financial performance of the hotel. Therefore, rooms' management statistics are normally considered as providing an overview of how the hotel is progressing. The most common way of assessing the rooms department's performance is by taking into account measures of price and volume, such as the average room rate (related to price) and room occupancy (regarding volume). Other measures such as bed occupancy or double-bed occupancy provide a more detailed analysis of the general performance (Chin et al, 1995). The volume of sales, in this case, is relatively important for reasons mentioned above. Being a medium-sized hotel, the amount of variable costs may be greater than fixed costs, meaning a lower prevalence of fixed costs per room, hence having a direct positive influence on departmental profit.
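The price and volume measures just mentioned combine into RevPAR (revenue per available room), which is simply occupancy rate multiplied by average room rate. A minimal sketch with made-up numbers (the hotel's actual budgeted and achieved figures are not reproduced here), showing how a lower average room rate together with higher occupancy can still lift RevPAR:

```python
# Illustrative only: these figures are hypothetical, not taken from the hotel's P&L.
# RevPAR (revenue per available room) = occupancy rate x average room rate (A.R.R.)

def revpar(occupancy_rate: float, arr: float) -> float:
    """Revenue per available room."""
    return occupancy_rate * arr

budget = revpar(0.70, 85.00)   # hypothetical budget: 70% occupancy at GBP 85
actual = revpar(0.757, 82.00)  # hypothetical actual: occupancy up, A.R.R. down GBP 3

print(f"Budgeted RevPAR: {budget:.2f}")  # 59.50
print(f"Actual RevPAR:   {actual:.2f}")  # 62.07, higher despite the lower rate
```

The point of the calculation is the one the report makes in the next paragraph: a small drop in price can be more than offset by a rise in volume.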
(Kotas & Conlan, 1999) When analysing the ratios related to room occupancy and average room rate in order to assess the department's general performance, it is clear that occupancy has risen by 5.7% and the A.R.R. has fallen Although this variance in terms of price may seem unimportant, from a long-term point of view, and allied to a higher occupancy, it has a direct increasing influence on departmental sales, and consequently departmental profit. It is important to be aware of the trend of this ratio, as other factors such as sales objectives (selling more expensive rooms than economical ones) can influence it (Jagels & Coltman, 2004) The RevPar is also an important figure as it calculates the revenue generated per available room; it therefore combines the hotel's occupancy percentage and its A.R.R. and demonstrates how the hotel's overall performance compares to its highest revenue potential. In this case, it shows that by having a lower A.R.R ( These figures are important for the hotel to concentrate on which market segment its product is orientated towards and, therefore, to focus its marketing tactics (Chin et al, 1995). The yield figure shows how many of the rooms are being sold at rates lower than premium. In this case, it is possible to note that although the hotel's A.R.R has come down, the RevPar has increased, meaning that fewer rooms are being sold at rates lower than budgeted (Chin et al, 1995). The actual payroll is higher than budgeted; however, since benefits did not increase", "label": 0 }, { "main_document": "for the North to protect its own security, which is based on the security of the market (protected by the impediment that sustainable development measures will put to the industrialization of the South), and of the 'global consumer classes'. Sustainable development serves their interest and sustains economic growth.
The rhetoric of environmental security is an excuse to continue the North's longstanding practice of military and economic intervention and to write it into international law. The extensive focus on the South is a way for the North to deny its own responsibility for the deteriorating state of the planet. S. Saad (1995) By redefining the environment, the North imposes its economic, political and legal norms and lifestyles on the rest of the world. The outcome of the imposition of 'global security' as a discourse through the securitization of the environment is the preservation of the status quo and the reproduction of the current power relation. Analysing global security as a discourse enables us to think of its conditions of implementation and its effects in the maintenance of the international status quo. Contextualized within a specific power relation between the North and the South which relies on a specific international legal system, global security loses its consensual appearance and reveals the features of domination at stake. It is only one among the various discourses that shape socio-legal international relations. Indeed, power relations take place in a linkage of discourses. Analysing narratives on environment securitization and on sustainable development has allowed us to highlight how complex and intertwined the notions at play are.", "label": 0 }, { "main_document": "areas SAPRIN (Structural Adjustment Participatory Review International Network) ibid. Chossudovsky, M. (1997) Impacts of IMF and World Bank Reforms. Zed Books Ltd; London & New Jersey. p34 SAPRIN (Structural Adjustment Participatory Review International Network) Privatisation and other reductions in public expenditure within structural adjustment are, theoretically, mutually beneficial; whilst revenue increases from other reforms, privatisation quickly reduces government expenditure in the short term whilst relieving it of a long-term economic burden.
So too the expectation is that profit from newly privatised services will further reinforce the economy. First, all subsidies for food, education and healthcare were quickly cut. Secondly, services such as water and electricity would be handed over to private management, followed by education and healthcare. This privatisation of welfare, in which business tried to make profits from healthcare and education, had severe consequences for a society already suffering economic cutbacks. In Peru, ever-increasing poverty and a lapse in vaccinations, which were no longer provided by the government, led to a rise in TB, as well as rises in malaria, dengue and leishmaniasis. Health workers joined teachers on strike whilst child malnutrition hit 38.5% and infant mortality rose to a quarter in some areas In Zimbabwe the reintroduction of health fees saw costs rise by 1,000%, so that only a minority could afford adequate health care despite deteriorating health due to poverty and new bouts of epidemics such as TB. The incidence of AIDS, HIV and STIs rose dramatically, as did the death toll Chossudovsky, M. (1997) Impacts of IMF and World Bank Reforms. London & New Jersey: Zed Books Ltd. P.201 ibid. Finally, an often unconsidered social effect of adjustment is the great cost levied on the environment. Structural adjustment often prescribes heavy industrialisation, and its focus on trade and production does not entail any sort of consideration for the environment. Whilst more than 20% of the world's oxygen is produced in the rainforest Much of this corresponds to ongoing adjustment programmes which have promoted exports and thus encouraged illegal logging, as well as their emphasis on the freedom of businesses, meaning that until recently governments had not actively pursued policies against industries showing no regard for the environment.
It is clear that the whole world could not consume and produce as much as developed countries do, because the earth simply does not have the resources to do so. However, IMF and World Bank adjustment policies prescribe this exact path, forcing the industrialization process not just to the detriment of society today but without foresight for the society of tomorrow. UNRISD states that \"the effectiveness of policy responses to environment degradation is often curtailed by adjustment\". The \"rapid growth of agricultural imports\" accompanied by \"institutional changes\" It is estimated that by 2025 half our world will live with sparse water supply UNRISD (1995) London: UNRISD. p50 There are other costs resulting from the adoption by developing countries of adjustment policies. Oxfam writes of the social costs involved in adjustment, because all funding is streamlined into exports and industry and other valuable social development programmes", "label": 1 }, { "main_document": "will need to check all the values of the items and display the ones that have a value under The most efficient way to do so would be to use a 'for loop' with an 'if statement' within it. Example: Quitting the program doesn't need a procedure. In the main program part, a check should be made on whether the user has entered q or Q, in order to quit the program. This means that in order to quit the program, the user should simply enter q or Q in the menu. e.g. 1 e.g. 2 As its name indicates, a basic algorithm used to describe a program is a sort of very simplified version of the complete program. Each step of the algorithm may also contain some \"sub-steps\" or sub-sub-steps that need to be explained and refined. For this program, these are the steps that needed to be \"refined\": These are: the ID must be different from all the other current IDs and it must be in the range 1000 to 1999 to be accepted. If not, an error message should appear and the user should be allowed to re-enter. The Room No has to be in the range 100-399.
The value of an item has to be greater than or equal to 0. The program has to loop until the ID entered by the user matches one of the records. Once the ID is accepted, it will delete the record that has the same ID. Finding an item works exactly the same way, except that it doesn't need to loop. If the ID does not match a record, the program should go back to the menu. Finally, for those two options, the program should use a 'sentinel search'. As required by the specification, I have to use all the facilities that we've learnt for this work. The easiest and most effective way for me is to use one record type (which can be done in this case, as all the item records will have exactly the same fields), which I would relate to an array type [1..16]. The big advantage of doing this is that it will allow me to store many different records and avoid repeating the same code many times. In order to find what should be part of a procedure in the program, the best thing to do is to try to identify the different tasks that it needs to carry out. These are: As you can see above, the program can easily be split into 9 tasks. We could use this as an example to divide the work into procedures or functions. First of all, the 1 Secondly, the menu could use a function as it needs to return 1 value, which is the choice entered by the user. Once the value is returned to the main program, a selection statement should check what has been entered in the menu. For each choice, the program should call a procedure that would return the value when necessary (using parameter passing). The other procedures are as shown underneath. There will be one to: You will notice that the order of the records,", "label": 0 }, { "main_document": "models presume that only managers have all the information about firms and outsiders do not.
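The two routines described in the programming report above, listing the items whose value falls below a threshold with a 'for loop' and 'if statement', and the 'sentinel search' on ID, can be sketched as follows. The report implies a Pascal-style program with an array [1..16] of records; this Python version is only an illustration, and the sample records and the threshold value are invented.

```python
# Illustrative sketch only: the actual program uses Pascal-style records in an
# array [1..16]; these sample records and the threshold figure are invented.

def items_under(records, threshold):
    """'For loop' with an 'if statement': collect items worth less than threshold."""
    found = []
    for rec in records:
        if rec["value"] < threshold:
            found.append(rec)
    return found

def sentinel_find(records, target_id):
    """Sentinel search: append a sentinel copy of the target ID so the loop
    needs no separate bounds check; return the index found, or -1 if the
    only match was the sentinel itself."""
    records.append({"id": target_id, "room": 0, "value": 0.0})
    i = 0
    while records[i]["id"] != target_id:
        i += 1
    records.pop()  # remove the sentinel again
    return i if i < len(records) else -1

rooms = [
    {"id": 1001, "room": 101, "value": 250.0},
    {"id": 1002, "room": 102, "value": 40.0},
    {"id": 1003, "room": 205, "value": 15.5},
]
print([r["id"] for r in items_under(rooms, 50.0)])  # [1002, 1003]
print(sentinel_find(rooms, 1002))                   # 1
print(sentinel_find(rooms, 1999))                   # -1
```

The design point of the sentinel is that the loop condition tests only the ID, not the array bound, because the appended sentinel guarantees the loop terminates.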
Investors therefore value all firms as being of low quality, which consequently provides strong incentives for the managers of high-quality firms to try to convey their firms' true values in a way that cannot be mimicked by low-quality firms. A firm that increases its dividend payout is sending an expensive but credible signal to the market, demonstrating its confidence that it will obtain higher future cash flows, such that it can sustain the permanently higher dividends without increasing the possibility of bankruptcy. The historical data suggest that stock prices may increase by 1-3% on average after the announcement. The dividend signal has strong power since dividends are tied closely to long-term cash flow patterns, which can be traded off against the tax loss associated with dividend income. The special dividend is another form of dividend policy, typically viewed in the marketplace as a temporary increase in the firm's payout which investors do not anticipate recurring on a regular basis. As an unexpected positive change in dividend payments, the earnings surprise can convey the same information as a regular one. Jagannathan, Stephens, and Weisbach (2000) find that if the current cash flow does not appear sustainable, a firm will be reluctant to initiate or raise dividend levels, because a negative stock price reaction can occur if the firm is subsequently forced to reverse the changes. Ghosh and Woolridge (1988) and Denis et al. (1994) report an average stock price decline of about 6% in the approximately 3 days surrounding a dividend cut announcement. As such, managers could use special dividends as a means of distributing cash in a setting characterized by a short-term increase in cash flow and prior positive share price performance. Nevertheless, some investors interpret huge dividend payments as a sign of poor investment prospects, and would actually rather see managers use the retained earnings to fund rapid expansion.
Prudent managers working on behalf of the shareholders should invest in all profitable investment opportunities which are consistent with investors' wealth maximization objective. Unfortunately, the separation of management and ownership implies that the manager might not always act in the shareholders' best interests. The Free Cash Flow Hypothesis was first presented by Jensen (1986), who claims that an excess of free cash flow (more than required to fund all positive NPV projects) could tempt managers to engage in value-reducing activities, e.g. investing in low-return projects for \"empire building\". In this case, large dividend payments help to diminish agency costs because they reduce available free cash flow. On the other hand, Lintner (1956) conducted interviews with 28 carefully chosen US corporations, and his studies revealed that firms are primarily concerned with the stability of dividends; that earnings are the most crucial factor in determining dividend decisions; and that firms are very reluctant to cut dividends, while investment requirements generally have little effect on modifying dividend patterns. Therefore, although managerial efforts are not directly observable, managers' hands are tied and kept under pressure by the precommitment of paying big", "label": 0 }, { "main_document": "do not share such views, however. Whilst they accept that authority and power are mutually reinforcing, these theorists reject definitions of authority as \"institutionalized power\" and insist that the two are In their view, authority involves B complying with A because A's command is These scholars thus warn us that associations between authority, force and power must be discarded if one is to obtain an accurate understanding of political power. Goodwin, Barbara. Pg. 311 Milgram, Stanley. \"Behavioral Study of Obedience.\" Pg. 371. Goodwin, Barbara. Pg. 313 Goodwin, Barbara. Pg. 313 Shah, Mowahid. \"The Power of Moral Legitimacy\", August 27, 2004.
Accessed at: Daalder, Ivo H.\"Why Legitimacy In Iraq Matters\". Center for American Progress. Accessed at: Bachrach, Peter and Baratz, Morton. Power and Poverty: Theory and Practice. Pg. 32 Such an understanding would be incomplete without some comprehension of Is political power (especially power over state actions) only available to influential groups, or can it be possessed by individuals? The pluralist view of the state claims that individuals and interest groups possess the majority of political power In the 2005 Iraqi elections, for instance, the Sunni minority's needs were particularly heeded, as \"the temporary government feared that [this group] would not participate in the elections\" otherwise Extreme pluralists argue that even groups that are Contradicting these views, elitists however hold that power lies not in individuals but in a society's elite, whether this comprises large businesses in corporatist states or kinship ties in nepotistic regimes Power, consequently, may exist both within civil society and in strictly governmental circles - depending on whether one adheres to pluralism or elitism, and on the type of regime concerned. Axford, Barrie, Browning, Gary, Huggins, Richard, & Rosamond, Ben. Pg. 35 Johnson, Bill, Peterson, Katherine, Vaswani, Nikhil, and Gentry, Zach. \"2005 Iraq Elections: Analysis of Minority Participation\". Accessed at: Bachrach, Peter and Baratz, Morton. Power and Poverty: Theory and Practice. Pg. 5 Axford, Barrie, Browning, Gary, Huggins, Richard, & Rosamond, Ben. Pg. 38 Having explored the differing definitions and locations of power in society, we can conclude that political power involves more than \"A get[ting] B to do something that he or she would not otherwise do\". Power implicates domination, agenda-setting, and preference-shaping, and is accessible to individuals as well as to elites in a community, depending on the social strata in which that power exists. 
When faced with such variability, the most important question for political theorists to answer concerning power may actually be, where If we can access a truly accurate definition of power, our next task may be to seriously challenge its distribution in our own societies, no longer in theory, but in practice.", "label": 0 }, { "main_document": "As an amateur in sociology, who comes to the discipline only with a yet-to-be-solved social problem at heart, I never expected the necessity of understanding the relationship between problem, theory and method until I encountered the assignment topic. Imagine various research methods are the \"tools\" available in a department store, and we are shopping there with a household problem to be fixed. Can we just get the right \"tool\" for the problem so that everything is solved? We never think a \"theory\" is necessary when we are shopping. However, due to the delicate nature of human beings, and the complicated connections between them and society, a social problem cannot be handled simply as a problem in the household. Bulmer described how problem, theory and method altogether \"form a trinity at the heart of the sociological enterprise\" (Bulmer 1977:15). This statement will serve as the framework when I compare and contrast the two pieces of research study, namely \"Asylums: Essays on the Social Situation of Mental Patients\" by Erving Goffman (1968), and \"Critical Realist Ethnography: The Case of Racism and Professionalism in a Medical Setting\" by Sam Porter (1993). The two pieces of study are oriented from two distinctive traditions of social research, and their orientations have prominent effects on their theoretical and methodological assumptions. Goffman is a representative figure from the \"symbolic interactionism\" tradition (despite personally denying such a claim) (Cahill 2004).
The tradition of symbolic interactionism emphasizes the subjective role of the \"self\" in society: \"If men define situations as real, they are real in their consequences\" (Thomas and Thomas 1928:572). Based on their ontological assumption of subjectivity, symbolic interactionists believe that humans take an active role in adapting to, responding to and further reshaping the social world; the social world is constantly recreated by human beings. According to Goffman, humans are the product, creator and shaper of the social world. The interpretivist stance of Goffman was further elaborated in his work \"Asylums\", where he investigated the social interaction between the individual patients and a \"total institution\" - a mental hospital. In the chapter \"The underlife of a public institution: a study of ways of making out in a mental hospital\" (which is on its own an individual study), Goffman described and analyzed how the mental patients made use of various artifacts (known as \"secondary adjustment\" according to Goffman) in a manner which was not officially intended, in order to modify or to reject the expectations that were imposed by the institution. For example, the mental inmates might enjoy participating in a game of volleyball not for the purpose of exercising (as intended by the institution) but to inmates of the opposite sex. A mental hospital is a total institution in the sense that every activity inside the hospital is programmed and is under close surveillance. However, Goffman could still see the \"active role\" or the \"selfness\" that the patients struggled to preserve against the institution. Goffman's assumption of the subjectivity of human beings was developed to the fullest in such an extreme condition. Sam Porter's work on \"The Case of Racism", "label": 0 }, { "main_document": "be improved if recycled materials were used.
There are several issues affecting the local environment, the wild fauna and flora and the local population that environmental management standards address. Apart from the application of fertilizers and biocides and the production of waste, the noise and light levels could affect wildlife and the people that may live near the greenhouse. The machinery of the enterprise could be a great nuisance both for humans and animals. Artificial light may also be a problem. All these factors should be under serious consideration in order to minimize the impact of the agricultural enterprise on the local environment. The measures that are taken in order to increase the environmental efficiency of the greenhouse should be documented. Documentation is very important because it has to summarize all the actions taken, by sector and date. The audits should be able to verify from the documents whether the goals that were set have been accomplished. The training of the personnel and the establishment of routines and emergency procedures also have to be described in documents. All actions should comply with the legal requirements. Documentation should be recorded and include the results of audits, checks and corrective actions. (Piper et al. 2003). Due to increased concern about global and local pollution issues and climatic change, environmental management will become more and more important. Environmental management standards meet consumers' expectations, protect public health and prevent potential negative environmental impacts from human activities, by assisting in the improvement and maintenance of the environment. Producers are able to enter new, more demanding markets and increase their profit by adding value to their product. Consumers feel safer about the quality of products, and producers can decrease the functional cost of their enterprise in the long term. In the past producers and consumers had the \"us and them\" culture and unnecessary antagonism developed.
Environmental and quality standards tend to minimize this kind of antagonism by providing insurance at a reasonable cost, so they help to maintain good public and community relations (Sayre 1996).", "label": 0 }, { "main_document": "cloth would increase to Table 1 shows that situations Situations Supplying 500m of cloth at the same costs (situation Assuming that A-Z Cloth charges the same price per metre of cloth, profit will be 84p less per metre when supplying 640m compared to 500m. Situation Situation When increasing the amount of yarn produced internally through increased carding and combing capacity, the department should factor any overtime costs incurred in increasing output into the price paid for extra carding and combing capacity. Throughout the report it has been assumed that hours available in Vat A and Vat B are interchangeable. Only the information provided has been considered; implications for staffing and other business areas should be considered by the department. No recommendations have been made on the price that A-Z Cloth UK should charge the customer.", "label": 1 }, { "main_document": "relations have also been redefined by feminist theory. The first amongst them is the concept of the state. While the abovementioned established theories consider the state to be a unitary actor, feminist theory visualises it rather as an \"on-going process\" which needs to be studied in relation to both the internal and external factors that shape it. Studying the state against such a complex background of socio-cultural, political and economic factors affords a more adequate understanding of this key actor in international relations than that provided by the theories where the state is \"taken for granted\" (Peterson 1992, pp.3-6). Another critique of the classical definition of the state is offered by Sylvester.
She argues that \"in the realist story man is, metaphorically fused to the state\" and this \"self-state\" is obligated by a social contract to ensure the nation's survival in an anarchic inter-national system. Yet this \"self-state\" does not draw its identity from the \"relational ties with the society under contract\" but from similar self-states \"floating\" in the inter-national system---a clear result of the association of the state with the masculine which is opposed to the \"relational\" , as Hirschmann puts it (Sylvester 1992, p.161). Thus Sylvester shows that not only the international system but also the established theories explaining it are androcentric; thereby allowing only partial definitions of the field and its key actors and concepts. In the realist and neo-realist theories of international relations a concept that is often associated with the state is that of security. However, feminist theory has redefined this association. In the words of Tickner, \" [m]any IR feminists define security broadly in multidimensional and multilevel terms - as the diminution of all forms of violence, including physical, structural, and ecological\" (Tickner 1997, p. 624). She also argues that the \"hierarchical social relations, including gender relations, that have been hidden by realism's frequently depersonalised discourse\" need to be brought to light if \" a language of national security that speaks out of the multiple experiences of both men and women\", needs to be constructed (Tickner 2004, p.100). Once again, thus, feminist theory rectifies the partial vision and concerns of aforementioned established theories, defining yet another concept of international relations in more all-inclusive and thus adequate terms. The penultimate redefinition is of the concept of power. Power is the mechanism by which the international system is run. 
In the realist and neo-realist tradition power is defined as the ability to control and is restricted to the level of the state and the inter-national system (Keohane 1989, p.246). But in feminist theory power is seen as multi-faceted and gendered. Elshtain disagrees with the idea of power as a unitary concept. Instead she argues that power can be both \" She sees this lack of unity in the concept of power as possessing the potential for change; that is, the overhauling of \" Enloe too envisions power as multidimensional and argues that it \"infuses all international relationships\". She also argues that more power is exercised to maintain the international political system than classical theories of the field would have", "label": 0 }, { "main_document": "but to customers as well, allowing Iceland Excursions to store customers' information and preferences and keep them informed about special offers based on their preferences. All of the analyzed Web sites should consider improving Web accessibility as most of them were rated average on this criterion. Web sites should therefore become more user-friendly for people with disabilities. Moreover, all of the Web sites were rated with a score of 5 or lower on the TrustGauge scoring chart, meaning that they were not recognized as particularly trustworthy by many users. As trust has become a vital antecedent for purchasing online (McCole, 2002), companies should make sure to increase the trustworthiness of their Web sites. Security is also of great importance to maintain trust amongst customers. From the customer perspective, all of the analysed Web sites can be rated as good in terms of security, as their payment Web pages are highly encrypted and security certificates are provided.
It is important, however, for companies to make sure that once security measures are in place, an appropriate amount of content on the Web site is devoted to security issues to reassure the customer that online transactions on the Web site are safe (Chaffey, 2002). Expedia, Cosmos, Avis, Fosshotel, BCP and Insurefor.com devote a reasonable amount of content on their Web sites to security issues; however, it could be argued that Icelandair and particularly Iceland Excursions should improve in terms of explaining and reassuring the customer that online booking through their Web sites is safe and secure. The assignment has analysed three different electronic distribution channels from a customer perspective. All of the evaluated Web sites have their advantages as well as disadvantages, and the analyzed organizations should work towards their improvement to enhance customer satisfaction. From the consumers' point of view, the integrator's (Expedia) Web site provided the best overall holiday booking experience. Technological developments in the future will combine more powerful technology, delivering more convergence as well as more tension in distribution channels between various actors (Bowie and Paraskevas, 2006). It is widely reported that consumers will be the major winners of technological developments, as they will have more choice, more interactivity as well as more personalized products to choose from (Buhalis, 2003). According to Bowie and Paraskevas (2006), a new electronic distribution channel, the \"multigrator\", will emerge with convergence of principals', intermediaries' and search engines' offers to customers.", "label": 0 }, { "main_document": "She was seen initially as an outpatient at a paediatric dermatology clinic on the This appears to be unrelated to hair products or to any exacerbating or ameliorating factors. Her hair loss has occurred over the same time period and has been quite extensive.
It continues to cause She has experienced severe itching of her scalp also, which is present both during the day and at night. She had been prescribed 'Selsun' shampoo but this offered little benefit. Her mother reported her daughter's scalp to be scaly but found this difficult to distinguish from dandruff. The presenting diagnosis at this time was suspected to be blisters secondary to bullous impetigo. No diabetes, epilepsy, hypertension, asthma, jaundice, strokes, heart attacks, rheumatoid or osteoarthritis, cervical arthropathy, obstructive sleep apnoea, acromegaly or thyroid disease. No significant findings to note. None on presentation none known. No known significant family history. The family visited Africa in the summer of She has many friends at school, none of whom has similar problems with their scalps or hair loss. Tinea capitis - a highly contagious dermatophyte infection of scalp hair follicles and surrounding scalp skin. Causative organisms may derive from species in the genera Microsporum and Trichophyton. Occurs in all age groups but is particularly common in children. Preponderance in childhood is thought to be due to alteration in fatty acid constituents of sebum around puberty. Postpubertal sebum contains fungistatic fatty acids. Seborrhoeic dermatitis - the scalp is the most common site of infection. When mild, it manifests as diffuse, dandruff-like scaling; this, however, is usually only present in infants under 18 months. Extensive scalp involvement. There is, in addition, commonly a positive family history; this is not the case here. Psoriasis - involvement of the scalp is fairly common, and may occasionally be confined totally to the scalp. However, scales tend to heap on top of each other producing a 'lumpy' texture. Psoriasis is however uncommon between the ages of 5-10 years. Alopecia areata - relatively common disorder. Unlikely here, however: typically a round or oval area of baldness is seen, and this condition is non-scaly.
Alopecia folliculitis. Atopic dermatitis - scaling with itching is a typical feature; however, the scalp is an uncommonly affected site. Trichotillomania - patchy hair loss; the underlying scalp is typically normal. However, no hair-pulling behaviour had been described. Often transient in childhood, it may be an indication of significant psychopathology in adults. Malnutrition - unlikely; she demonstrated no additional physical pathology. On physical examination, tinea capitis produces one or multiple patches of hair loss on an otherwise normal scalp. The scalp is typically scaly, and may be similar in appearance to dandruff. Hair may be broken off just above the surface, producing a stubbly feel. Inflammatory variants of tinea capitis may be associated with painful regional lymphadenopathy and pustule formation. Eruption of itchy papules may occur around the outer helix of the ear, commonly coinciding with the introduction of systemic therapy. Occurring as a reactive phenomenon or 'id' response, this may easily be mistaken for a drug reaction. A young girl who appeared fit and well. She was not", "label": 1 }, { "main_document": "This was the first time that Breakfast - 2 scoops of porridge with water and a drop of skimmed milk Lunch - tin of tuna and a spoonful of rice Tea - small chicken portion and a portion of vegetables This behaviour occurs approximately once a day (occasionally twice) on most days for the past 1-2 weeks, either in the evening at home or at work. 1 During this time, From the age of 15 she ate mainly fruit during the day for a period of approx. 6 months to 1 year and stated that At one point, During this time she lost approximately 3-4 times in 24 hours) a number of times during the week.
In This outraged she felt that her mother resented her for being female, as she wanted another boy after her brother's twin was stillborn; she felt uncomfortable with her body image and hated wearing dresses; sometimes she wished she was a boy; her eating behaviours were a way of coping with all of these stresses. At 23, At 25, Over the last 4 years, This annoyed 1993/1994 - Jaundice, hair loss, skin changes; exacerbation of eczema, prolonged skin healing - at time when 2002 - Severe abdominal pain/cramps and bloating associated with induced vomiting, hole through nasal septum The divorce came as a relief to The relationship with her parents was described as being ok although There is Prior to this there had been no comments about her weight or dietary habits. There are no known pregnancy or birth problems, During her pre-school years, she spent a lot of time in a male-dominated environment, as most of the children her age were boys. At primary school she was keen on reading and had a good relationship with all of her teachers except one. During high school, Unfortunately during this time, In The relationship between them became \" In The experience was \"not great\" and \"just happened\". At the age of 19 she had her first serious sexual relationship, which lasted for approximately 9 months. In The relationship ended in After this, Her next relationship in The relationship ended in Nil She is honest and open-minded and described herself as being a perfectionist who likes order. Allergic to penicillin No medication at present (prefers not to take any) Low Well-presented lady of slight build dressed in a dark suit. Relaxed with good eye contact. A good rapport was established. Attention and concentration good. Consistent pattern of behaviour with regard to her weight maintenance: Normal volume, tone and speed Objective - \"feel fine now....felt anxious before appointment\". Subjective - cheerful mood, euthymic.
Negative thoughts about her physical appearance, positive thoughts regarding her own ability, e.g. at work. Initially after being sick, Content: Preoccupied with the belief that she is fat - overvalued idea. Obsessed with dietary intake and exercise regime. Form: normal Stream: normal Suicidal Ideation: not at present No hallucinations or illusions. Distorted body image (believes her legs are out of proportion); recognises that she has a problem and made a conscious decision to accept", "label": 1 }, { "main_document": "barrel in 1998 to over $40 and this acted as a drag on the global economy. America's huge current account deficit was another contributor to the slowdown of the US economy. In 2001, the US had a current account deficit of Wall Street Journal. New York. NY: May 2, 2002 pg.A.2 The tragic events of September 11 The economy's weaknesses were exacerbated by the event. An unanticipated event of the magnitude of September 11 It would spur consumers and investors to postpone or even cancel spending. In fact, the fall in consumption expenditures was one of the largest contributors to the fall in GDP growth. According to the Keynesian multiplier, the impact of such a reduction in aggregate demand would be even greater reductions in output. Such incidents create uncertainty, and this crisis of confidence depends on factors such as the progress in the war against terrorism. Most believe that a recession was already under way before September 11 A particularly alarming aspect of the slowdown of 2001 was its synchronicity across nearly all regions of the world. This was due to the recent increase in globalization and trade. This caused a massive problem for the US economy. As an example, in the \"world recession\" in 1991, the American economy sank, but Japan, Germany and the emerging East Asian economies continued to boom, which helped sustain the level of world demand.
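The Keynesian multiplier invoked above can be stated explicitly. This is a minimal sketch of the standard textbook formula; the marginal propensity to consume (MPC) value used in the worked figure is an illustrative assumption, not a number from the essay.

```latex
% Simple Keynesian multiplier: a fall in autonomous spending \Delta A
% reduces equilibrium output by a multiple of itself.
\Delta Y = \frac{1}{1 - \mathrm{MPC}} \,\Delta A
% e.g. with an assumed MPC of 0.8, the multiplier is 1/(1-0.8) = 5,
% so a \$100bn fall in autonomous spending cuts output by roughly \$500bn.
```

This is why the text can say the impact on output would be "even greater" than the initial fall in consumption: each round of forgone income induces further cuts in spending.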
However, the extensiveness of this slowdown meant that as the US economy became sluggish, it would cut down on, for instance, its imports from East Asian producers. The Asian producers then, in turn, would trim their imports, not only from the USA but from the countries of Europe and Japan as well. The nature of the 2001 \"recession\" therefore increased the chances of a prolonged American, as well as global, downturn. The slide in the value of the US$ posed a further danger to the US economy, because a sharp fall in the dollar against other currencies would reflect weakening faith in America's economic prospects and further undermine confidence, resulting in lower consumption. Fortunately, the recession had a soft landing and the economy began to recover at the end of 2001. In contrast to previous world recessions, inflation was low. This meant that there was plenty of room to ease monetary policy. The Fed cut interest rates several times. Moreover, as the US economy entered this downturn with a large budget surplus, the US government was able to push through huge tax cuts in order to boost consumer spending in the economy. Such timely fiscal and monetary stimulus in 2001, combined with strong consumer spending and continued productivity gains, led to a turnaround in late 2001 and early 2002. Interest rate cuts also helped sustain demand through the American housing market. In 2001, house prices in the USA were rising at their fastest pace for more than a decade. This increase in consumer wealth helped offset the fall in share prices at the time, allowing American consumers to keep up their spending. America's goods and services trade deficit was still
He swiftly undercuts the widely-held belief about Victorian repressiveness with both documentation and theorisation that in the nineteenth century there was a multiplication of discourse concerning sex in the field of exercise of power itself: This Foucauldian notion of a constant 'incitement to speak about' sex is the result of what he names a 'discursive explosion'. (Foucault, 1998: 17) Although this 'explosion' was often produced as a means to contain and control sexuality, Foucault asserts that the idea that Victorian sexuality was repressed or silent is a modern invention. (Foucault, 1998: 36-49) Thus in 'The History of Sexuality', Foucault attempts to disprove the thesis that Western society has seen a repression of sexuality since the 17th century and that sexuality has been unmentionable, something impossible to speak about. In the 1970s, when this book was written, the sexual revolution was happening. The ideas of the psychoanalyst Wilhelm Reich, who held that preserving one's mental health required liberating one's sexual energy, were popular. The past was therefore seen as a 'dark age', where sexuality had been something forbidden. (Poster, 1984: 121 - 122) Foucault, on the other hand, states that Western culture has long been fixated on sexuality. Social convention, far from silencing sexuality, has created a discourse around it, thereby making sexuality ubiquitous. The concept of 'sexuality' itself is hence a result of this discourse. And the interdictions also have constructive power: they have created sexual identities and a multiplicity of sexualities that would not have existed otherwise. Keats points out that, in Foucault's initial depiction of Victorian sexuality as implying increasing silence and secrecy, he is almost immediately able to present the difficulty facing the advocates of this repressive hypothesis. While analysing Foucault's ideas on Victorian sexuality, one of the issues that seems to stand out the most is the idea of confession.
Historically, there have been two ways of viewing sexuality, according to Foucault. China, Japan, India and the Roman Empire have seen it as an \"Ars erotica\", \"erotic art\", where sex is seen as an art and a special experience and not something dirty and shameful. It is something to be kept secret, but only because of the view that it would lose its power and its pleasure if spoken about. In Western society, on the other hand, something completely different has been created, what Foucault calls \"scientia sexualis\", the science of sexuality. It is originally (17th century) based on a phenomenon diametrically opposed to Ars erotica: the confession. It is not just a question of the Christian confession, but more generally the urge to talk about it. A fixation with finding out the \"truth\" about sexuality arises, a truth that is to be confessed. It is as if sexuality did not exist unless it is confessed. Foucault identifies an element of social control in this. The nineteenth and early twentieth", "label": 1 }, { "main_document": "as economic independence, labour exploitation and inequality, childbirth, childcare, violence in the home, globalisation and so on. These similarities do not diminish the importance of political, economic and cultural differences between women all over the world. The position of second-wave feminists is that women share basic experiences of oppression, but are differentiated based on race, class, ethnicity and nation (Rupp, 2001: 5471-2). Women's movements around the world have developed within different historical and political contexts. At the global level, while it is true that the oppression of impoverished and marginalised Euro-American women is linked to gender and class relations, that of Third World women is linked also to race relations and imperialism; these added dimensions produce a different context in which Third World women's struggle must be understood (Johnson-Odim, 1991:314).
Women's movements in different countries follow a distinctive course, developing structures and agendas in response to local circumstances. Chafetz and Dworkin (1986:65-66, cited in Margolis, 1993:368) studied women's revolts globally, focusing especially on the self-conscious collective form they classify as a movement, to develop a set of generalizations about factors that affect the size and ideology of women's movements. They hypothesised that urbanization and industrialisation led to increased education for women as well as role expansion in the public sphere, which in turn helped to enhance the formation and spread of gender consciousness and the amassing of personal and collective resources necessary to mount a movement (Chafetz and Dworkin, 1986 cited in Margolis 1993: 368). Chafetz and Dworkin (1986:65-66 cited in Margolis 1993: 368) also postulated that the economic structure of a nation can be used to explain both the size and ideological scope of women's movements. Katzenstein and Mueller (1987) identified three factors that can help explain differences among women's movements: 1. the degree of overall feminist consciousness; 2. the opportunity to influence policy through existing political parties; and 3. the nature of the state (Katzenstein and Mueller, 1987 cited in Margolis 1993:386). Papanek (1993:597), in her comments on Margolis's research, disagrees with her use of the hypothesis that \"women's movements tend to be larger where industrialization and urbanisation lead to increased education and role expansion for women\" because Margolis did not provide data to prove the hypothesis. Secondly, she found the generalization that women's movements are organised by educated women unacceptable, saying that the literature attests to the existence of grassroots women's organisations led by uneducated women, and that these groups qualify as women's movements. I would argue that the hypothesis is true for Uganda.
As observed by Tripp (2002:1), the expansion of women's organisations in Uganda is attributed to the growth in educational opportunities for women, which gave rise to strong female leadership, as well as exposure to United Nations conferences that provided opportunities for networking and communication with women around the world. Educational opportunities for women in Uganda have resulted in the creation of an educated elite that has both the leadership and the technical skills to mobilise more constructively towards achieving women's rights (Tripp, 2002:5). Attributing the success of women's movements to their education gives evidence that Margolis's hypothesis is true. Similarly,", "label": 0 }, { "main_document": "our own, was characterized by rapid and fundamental changes in every sphere of life. It is in such periods, when the traditional, customary framework of principles, implicit and explicit, is no longer able to provide an unquestioned orientation for thought and action\". (Suchting, 1962: 47) In a word, \"the human condition\" signifies an opening-up of a new historical epoch and its departure from history and tradition. Another more important reason for using this phrase is that it stresses the conditions under which human beings lead their lives. As long as they stay on the earth, they are conditioned not only by the laws of nature and the biological processes of their bodies, but also, to an ever more considerable extent, by self-made processes. Arendt designates three fundamental human activities under the term Each of them \"corresponds to one of the basic conditions under which life on earth has been given to man\". (1958: 7) Labor is humankind's metabolism with nature. It is the most natural and spontaneous category of All the products of labor are consumed immediately, leaving no long-lasting end-product behind.
\"Work provides an 'artificial' world of things, distinctly different from all natural surroundings.\" (Arendt, 1958: 7) Action is the most \"human\" activity among the three, and is closely bound up with another concept, plurality: living among men, being seen and heard by one's peers. \"...it is only in acting that a man defines himself, by making his essence into a tangible reality in the form of deeds\" before the audience. (O'Sullivan, 1976: 231) Obviously, these three activities are not of the same weight in Arendt's scale. \"Labor, work, and action are not merely different forms of activity but comprise a scale in which each marks the achievement of a progressively higher level of consciousness.\" (O'Sullivan, 1976: 229) Whether or not the scale is truly one of levels of self-consciousness, in Arendt's hierarchy labor is at the bottom: it is bound up with humans' biological needs and dictated by the laws of nature. And action is the highest achievement of human beings. The differences between labor/work and work/action need to be closely analyzed here. Ring pointed out that \"The distinction between work and labor centers upon what each activity produces, rather than the experiences of production itself.\" (1989: 435) His reason is that labor's product is the most perishable stuff, such as food, while work produces, for example, cloth, which lasts longer than food and is therefore more stable and secure. Therefore, \"The distinction between work and labor is objective, rather than subjective.\" (1989: 435) His interpretation is illuminating, but he ignores that work, in Arendt's terms, is a category of human activity with higher self-consciousness. In other words, technically speaking, making clothes may be no more complicated than growing crops, but it does take into account more human elements besides the uncontrollable natural conditions.
On the other hand, it may be problematic to say that labor, work and action are positioned in sequence on a scale of self-consciousness. \"These categories are not categories in the Kantian sense, i.e. a-historical structures of mind.", "label": 0 }, { "main_document": "of the lack of a central government or authority to regulate states' conduct and mediate their disputes. As Osiander explains, idealists like Norman Angell, Leonard Woolf or Alfred Zimmern display certain 'realist' ideas that ask for a revised understanding of idealism. Immanuel Kant. Andreas Osiander. 'Rereading Early Twentieth-Century IR Theory: Idealism Revisited', 3, 1998, p. 423. Another important liberal idea criticized by realists is that democracies do not wage war on one another. Influenced by Kant's ideas, Michael Doyle emphasized that democracies managed to create a \"separate\" peace that makes international cooperation a real possibility. The fundamental features of liberal democracies, namely the rule of law, individual rights and liberties, equality before the law, and representative governments based on popular consultation and consent, facilitate the avoidance of conflict. Nowadays, the European Union is giving world politics a lesson about the possibility of fruitful cooperation in a region frequently affected by national conflicts. Consequently, by contrast to the realist critique, democracies can be said to be inherently peaceful to the extent that democratic states respect the national interests of other democratic states and hence do not engage in a selfish pursuit of self-interest. Michael Doyle. Norton & Co., 1997), quoted in Baylis and Smith, However, critiques addressed to the liberal internationalist theory often reiterate the experience of the League of Nations, its weaknesses, which failed to prevent Japan's move into Manchuria or Italy's invasion of Abyssinia.
Woodrow Wilson argued in his Fourteen Points for an international organization to secure peace at the international level, but Carr concluded that the ideal of collective security was just an instrument for the United States' vested interests and not a genuine path to cooperation. At this point, the critique of \"utopianism\" may be sound to the extent that it may warn us about the potential tyranny of a certain vision. Mussolini, says Carr, also used the rhetoric of peace to conceal desires for domination. This is, of course, a sound reason for rejecting a theory that takes for granted certain so-called \"standards\". But that is why democracy establishes checks and balances, and accepts changes and progress, while the interdependence of our world makes it much more difficult for some states to dominate the world. Moreover, the lack of commitment to collective security, i.e. the stubbornness of some states in refusing to improve relations among states within a global regulatory framework, can also be responsible for the League's failure. Or, as Angell argues, \"The danger of expecting too much of a mere \"piece of machinery\" the installation of which was not accompanied by a reformed perception of international problems.\" Indeed, until 1914, the \"balance of power\" principle, which viewed the world system as a self-regulatory system, also failed to prevent the First World War. Of course, realism has never claimed that it is possible to overcome war as an inherent feature of international relations. But unfortunate events in world politics should not result in merely a rejection of the possibility for democratic cooperation among states. On the contrary, these events should give us", "label": 0 }, { "main_document": "found, from here the town would radiate outwards.
This basic internal layout of the forum basilica can give archaeologists a model to go on that would enable them to identify the forum and its basic functions just by consulting the foundation plan of a site. However, 'A Roman forum is a Roman forum, but there were probably not two exactly alike anywhere in the Empire, so that, while the general identification is easy, interpretation of the individual elements often depends on restoration of the buildings parts.' (Grew & Hobley 1987) It would then seem that a model cannot be relied upon as a means of identifying the individual functions of a site, but only its basic overall form. Places of public entertainment were an important part of the structure of Roman towns. They were often among the most massive structures, built not only for their obvious function as places of public congregation to delight in the entertainment of the theatre or the gladiators, but also as status symbols and popularity boosters. Emperors, it is known, often built amphitheatres, such as the 'Colosseum' in Rome, in order to gain public adoration and raise their public profile. These huge public structures are very much a characteristic of the major towns, and because of their vast size are often easy to identify in the archaeological record. 'Theatres, Amphitheatres and temples are in some ways easier to identify, provided that they have reasonably conventional plans' (Grew & Hobley 1987) 'Most public buildings were associated with a religious aspect, whether they were temples, theatres, amphitheatres, basilica or markets. However, there is also a secular dimension to these buildings. Their construction by an individual enhanced that person's prestige and position in society.' (Laurence 1996) Other public buildings included public baths, one good example being those that are preserved in full working order at 'Aquae Sulis,' modern-day Bath.
Bath-houses were often given to the people as a sign of generosity by the emperor, who would pay for the construction and running of the baths. Again we see that public buildings functioned not only in their obvious role of providing enjoyment and sanitation, but as signs of the emperor's goodwill and generosity towards his people, intended to gain their support and keep them contented. 'Bath-houses in general present few difficulties of interpretation unless structural survival is poor' (Grew & Hobley 1987) Baths are fairly easy to identify in the archaeological record from their material remains, owing to highly developed features such as hypocausts that are not generally seen in other buildings. 'This makes monuments very different from domestic structures. They take on roles that express the power, the ideology and the identity of the society' (Laurence 1996). Private occupation sites tend to grow outward from the areas of public amenity. A highly developed infrastructure is a key aspect of Roman urbanism, and can be identified particularly in the material remains of the aqueduct system, the famous Roman roads, and", "label": 1 }, { "main_document": "The report details the investigation and measurement of the characteristics specific to an induction motor. To complete this experiment, a three-phase induction motor was connected to a test system comprising a dynamometer (an electronically controlled brake) and a tachometer (an electrical speed-measuring device). The torque on the brake was measured by a cantilever spring fitted with a strain gauge, and the signals from this, along with the data produced by the tachometer, were fed to a control box.
This allowed manual control of the torque on the brake, as well as feeding back the values for torque, speed and a calculated value for mechanical power output, using: Mechanical power = torque x angular velocity, or Values were tabulated with the load applied to the motor increasing in increments of 0.5 Nm until the motor stalled. Graphs were then plotted showing the relationship between torque and speed, as well as another illustrating the efficiency of the motor at these varied loads. The conclusion of this laboratory serves to reinforce the need to run induction motors at their rated power levels to obtain maximum efficiency and reduce the likelihood of damaging the motor by overloading. Whilst the entirety of this section is not utilised in the calculations, below are a number of equations relevant to this report. 1. Mechanical power = torque x angular velocity, or 2. In the absence of a measuring unit that measures power factor (p.f.), it is possible to calculate the value using: or where For more theory see appendix 3, as well as websites listed in 'References and Bibliography'. The apparatus required for this laboratory is detailed in Appendix A1. The rated values were read and recorded from the motor nameplate (O1). The equipment was then calibrated to ensure that the measurements were traceable back to a known standard. The measuring unit was switched on by means of the switch marked (C), and the three-phase energy analyser on the measuring unit lit up as expected. The display was set up to the first measuring page, showing voltage, current, power and power factor. This page had the symbol '3 By pressing 'PAGE' repeatedly, the display was set to page 4, showing 'V L1, L2, L3' corresponding to the voltages on the three lines relative to neutral. The switch marked (D) was switched off, the variable output control knob on the supply panel set to zero, and the output voltage selector set to 1 (three-phase).
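The mechanical power relation used above (power = torque x angular velocity, with the tachometer's rev/min reading converted to rad/s) can be sketched numerically. This is a minimal illustration only; the operating-point figures below (2.0 N·m at 1440 rpm, 400 W electrical input) are assumed example values, not readings from this experiment.

```python
import math

def mechanical_power(torque_nm, speed_rpm):
    """Mechanical output power P = T * omega, with omega in rad/s."""
    omega = speed_rpm * 2 * math.pi / 60  # convert rev/min to rad/s
    return torque_nm * omega

def efficiency(torque_nm, speed_rpm, electrical_power_w):
    """Efficiency = mechanical output power / electrical input power."""
    return mechanical_power(torque_nm, speed_rpm) / electrical_power_w

# Assumed example operating point: 2.0 N.m of braking torque at 1440 rpm
p_mech = mechanical_power(2.0, 1440)   # about 302 W
eta = efficiency(2.0, 1440, 400.0)     # about 0.75
```

Tabulating these two quantities at each 0.5 Nm load step reproduces the torque-speed and efficiency curves described in the report.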
The variable three-phase outputs, L1, L2, L3 and N on the supply panel, were connected respectively to the corresponding connections on the left-hand side of the measuring unit. The switches marked (B) and (D) were then enabled. The variable output control was turned to about 25% and the values for the three line-to-neutral voltages were recorded (O2) to check they were all within 1 or 2% of one another, so that errors could be spotted and corrected and inconsistencies reduced at a later stage. The measurement display was then changed back to page 1, where it was noted that the display now showed a larger voltage.
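As a quick cross-check of the figures recorded in this procedure, the power and efficiency relations used above can be sketched in Python (an illustrative helper only, not part of the lab script; the function names and the example torque, voltage, current and power-factor values are assumptions):

```python
import math

def mechanical_power(torque_nm, speed_rpm):
    """Mechanical output power P = T * omega, with omega in rad/s."""
    omega = speed_rpm * 2.0 * math.pi / 60.0
    return torque_nm * omega

def electrical_power(v_line, i_line, power_factor):
    """Three-phase electrical input power P = sqrt(3) * V_L * I_L * p.f."""
    return math.sqrt(3.0) * v_line * i_line * power_factor

def efficiency(torque_nm, speed_rpm, v_line, i_line, power_factor):
    """Efficiency = mechanical power out / electrical power in."""
    return mechanical_power(torque_nm, speed_rpm) / electrical_power(
        v_line, i_line, power_factor)
```

A helper like this makes it easy to tabulate efficiency against the 0.5 Nm load steps and confirm that the peak sits near the rated operating point.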
Pinot Meunier and Chardonnay will produce a fruity, bright and acidic sparkling wine, similar to Champagne.", "label": 1 }, { "main_document": "The aim of this exercise is to calculate the enthalpy of formation of several simple cyclic alkane and alkene hydrocarbons by using PCModel and a MMX force field. The MMX force field is a molecular mechanics method instead of a quantum mechanical method, and thus is quick to perform simple calculations. However since molecular mechanics is highly empirical, the force fields are highly specific- and can only be applied to a very similar compounds. Here the MMX force field was designed exactly for the purpose of calculating the enthalpies of formation of cyclic hydrocarbon compounds- using Allinger's( () N.L. Allinger et al., Chem. Soc. The calculations are quick and simple because the energy of the system is calculated from very trivial mathematical formulae (in comparison to electronic structure calculations such as Hartree Fock.) The force field is a sum of the individual force potentials: Each force potential represents an energetic penalty- a deviation from reference equilibrium values- for example the bond stretch term takes reference bond lengths taken from x-ray diffraction or gas phase spectroscopy, taking the form of Hooke's Law: The molecules (see structures (1)-(6) in appendix) were drawn in PC model, then its structure optimised using MMX force field, at which point the steric energy and heats of formation were calculated and recorded. Each optimised structure was examined by eye to check to see if it formed the lowest likely conformation; in the case of menthol extra adjustments were required to ensure the large bulky groups were equatorial instead of axial thus avoiding a 1,3-diaxial strain; which gives rest to unfavourable van der waal interactions leading to a higher energy conformer than the true lowest energy and most likely conformer (see structure (7) in appendix). 
The computed results were then compared to experimental results to verify the force field's ability to reproduce enthalpies accurately. The MMX force field calculated the heats of formation, ΔHf. Graph 1 shows the heats of formation - experimental values vs. calculated (MMX) - and reports a PMCC of 0.9878. Since the enthalpies are calculated directly using the steric strain energy, I investigated the percentage error of the MMX calculation (compared to experiment) against the steric strain energy; this is plotted in Graph 2 to investigate whether the error was directly related to the strain energy. Experimental measurements incur an experimental error of approximately +/- 1 kcal mol^-1. Graph 1 shows the plot of experimental literature values of the heats of formation against those calculated by the MMX force field. The black line is the best-fit line, whose product moment correlation coefficient (PMCC) measures how well the data fit a straight line. The red line is the ideal - i.e. experiment should be exactly the same as the computational results. The PMCC is 0.9878, which is a very good fit; however, the line lies above, and at a gradient not parallel to, the ideal. Also, most of the data lie outside any experimental error which would allow for anomalies within +/- 1 kcal mol^-1. Graph 2 was to investigate whether steric strain had been underestimated in the calculations (i.e. principally
For the standard curve, Rotor-Gene 6000 Series Software optimizes the correlation coefficient, R². The software also extrapolates the standard curve to accommodate any sample with C_T values outside the range of the standards. The raw data curve (Figure 3.6) shows very poor results, as most of the curves were crowded together and had similar C_T values. Also note that most of the controls (green curves) had bigger C_T values. As digested samples should have a lower concentration of intact template than the control, this suggests some problem with the results. Problems also show in the standard curve (Figure 3.7), as it has a very low correlation coefficient (R²). All the samples also have very similar C_T values. For the following statistical analyses, we used values of sample pairwise differences (Table 3.4) (i.e. the differences in C_T values). A statistical analysis program, MINITAB, was used. There seem to be some significant differences in C_T values, and more interestingly, there appear to be some significant differences between CL and CH samples in the pairwise comparison of C_T values. However, caution must be taken here, as the results of this real-time PCR are aberrant (see Section 4.3). Inter Simple Sequence Repeat PCRs (ISSR-PCRs) use primers that are complementary to Simple Sequence Repeats (SSRs, also known as microsatellites), and that contain a 1-3 base 'anchor' at either the 3' or 5' end (Zietkiewicz 1994). Microsatellites are short tandem repeats dispersed throughout the genome (Graur and Li 2000), and this makes them very useful genetic markers for exploring potential loci for any non-modelled plants. As there are numerous microsatellite sites in the genomes of most eukaryote organisms, one would expect the results of most ISSR-PCRs to be littered with bands on an agarose gel. However, this was not the case in our results (Figure 3.1); this was also confirmed by the small peak heights in the subsequent fragment analyses (Figure 3.2).
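The quality checks described above can be made concrete with a short Python sketch (illustrative only; the slope and C_T figures are assumed values, not taken from this experiment). A standard-curve slope of about -3.32 cycles per ten-fold dilution corresponds to roughly 100% amplification efficiency, and a C_T difference converts to a fold difference in starting template:

```python
def amplification_efficiency(slope):
    """Per-cycle efficiency from the standard-curve slope (C_T vs
    log10 template amount); a slope near -3.32 implies ~100%."""
    return 10.0 ** (-1.0 / slope) - 1.0

def fold_difference(delta_ct, efficiency=1.0):
    """Fold difference in starting template implied by a C_T
    difference, assuming a constant per-cycle efficiency E."""
    return (1.0 + efficiency) ** delta_ct
```

On this model, a sample sitting 3 cycles below another would contain roughly 8 times as much intact template, which is why controls showing C_T values larger than their digested samples flag a problem with the run.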
Part of the reason for the poor results could be the age (a few years old) of the primers being used, as well as possible impurities in the samples that might cause inhibition of the PCR (Smith and Maxwell 2007). As different samples might contain different amounts of impurities, this could contribute to the different results between samples seen in Figure 3.1. Also, the frame-shifting effect of primers during annealing can be seen in Figure 3.2(a) and Figure 3.2(b); these multiple spikes were caused by the primer binding at a succession of positions, two bases apart, in the microsatellite regions of the genomic DNA. Another main reason for this method being abandoned by me is the difficulty in accurately determining the digestion results. It is difficult to achieve a 100% efficient restriction enzyme digestion in a 2-hour incubation period; on the other hand, the blocking methylated sites are not completely immune from the digestion action of the restriction enzymes. The length of the digestion incubation period is always a compromise between the complete digestion of
Steel (Y.S.355MPa) Circular Hollow-(194", "label": 0 }, { "main_document": "Secretary of State can call in a local plan and require the local authorities to make modifications to the plan. Decisions made by the local planning authority on applications for planning permission can be called in by the Secretary of State and also modified on appeal. The system and structure of local government in England has changed and evolved over time, under the influence of successive governments. There are both advantages and disadvantages to all structures and disagreement on how best they should operate. Currently there is a highly centralised system, where central government has tight control over local authorities and their planning functions. There is cooperation and in some cases partnership between different agencies and sectors, including the planning sector. Some feel this 'network' is the best way to manage and deliver services to people. Others feel that this has resulted in a fragmentation of government, and subsequently problems of duplication and inefficiency.", "label": 1 }, { "main_document": "problems and plans to work with consumers and the industry to develop a new code of practise on the promotion of foods to children. MacDonald's has recently made an attempt to rectify the problem with the introduction of healthier options such as the salad range. Marketing ploys like this are also common in supermarkets. BOGOF (Buy one get one free) offers are more likely to be on multipacks of crisps and chocolate than fruit and vegetables. Unhealthy snack foods are displayed at the tills and commodities such as bread and milk are often shelved at the back of the store to encourage the customer to buy other items that they see on route. This may be a fault of the food industry but we as the consumers cannot be considered passive to marketing. Adverts may make a marginal difference but we cannot ignore personal responsibility and consumer choice. 
It is also important to remember that the food industry is governed by consumer demand. A survey published in the British Medical Journal in January 2001 measured the body mass index of thousands of boys and girls in England and Scotland, aged between four and eleven. The first measurements were taken in 1974, then different children were measured in 1984 and more in 1994. The results showed that the proportion of overweight or obese children remained steady between 1974 and 1984, but increased dramatically between 1984 and 1994. 5% of English boys tested in 1984 were overweight, this figure increasing to 9% in 1994. This is significant because there have been many developments in the last 20 years which have changed consumer lifestyles, dictated consumer choice and therefore affected the increase in obesity levels. The last 20 years have seen various technological and economic developments leading the UK into a cash-rich, time-scarce community where consumers are demanding foodstuffs that are quick and easy to cook. Our rising real income levels allow us to pay extra for the added service, which is why eating out and take-away meals are also increasing in popularity. As a society, we are moving away from generalised feeding. Families rarely have time to eat three times a day together anymore, but are more likely to graze on snacks throughout the day. Snack foods have many advantages in a time-scarce society as they do not require any special storage or preparation and can be eaten on the go, but are often full of salt, fat and sugar and contribute considerably to weight gain. Working hours in the UK are amongst the highest in Europe, with 1.5 million men working more than 60 hours a week. Less time at home means that convenience foods such as ready meals, pre-prepared food and snacks are becoming increasingly popular. These foods are again full of additives, salt, fat and sugar.
The modern lifestyle as a whole not only encourages snacking on unhealthy foods but also sanctions a distinct lack of exercise. Occupations are far less physically demanding than 20 years ago as we have seen industry move from manufacturing to the tertiary", "label": 1 }, { "main_document": "Culinary taste may be thought of as a combination of particular characteristics, such as our food preferences, the way we act towards them and our lifestyles that will define what we eat, how we eat it, when we eat it and why we eat it. As simple as the definition of culinary taste may seem, it has been causing a great deal of controversy over the decades. Some sociologists such as A. Warde (1997), D. Seymour (2003) and specially P. Bourdieu (1984) defend that the construction of taste is a result of our socialisation process, into a determined class and that, regardless of any changes (ex: wealth or occupation), our taste will always remain the same, because it is inherent in us. Others, such as Z. Bauman (1988, 1990), and M. Featherstone (1991), suggest we have reached a stage where the limitations of social class no longer tie us down and the socialisation process we go through, does not have the influence stated by Bourdieu. They advocate that there is much more freedom in making consumer choices and that with the right amount of economic independence, we can define our own cultural and culinary taste, and adopt whichever lifestyle(s) we wish. This essay will try to conclude how taste is constructed and the factors that influence its construction. Socialisation is the process through which we learn what is correct and how we should act according to different situations. It starts at childhood and according to Bourdieu (1984) it is what will define our social class and our consumer behaviour. He determines that culture establishes food preferences and that there is no room for individual taste. 
He advocates that food is a reflection of social aspects such as status, wealth and group partisanship, and that, therefore, our consumer behaviour towards food should be regarded as having a cultural meaning rather than constituting a necessity. Warde (1997) proposes that social positioning can be recognized in eating procedures, and that our consumer behaviour can be defined by our class positioning and through social evaluation. Bourdieu (1984) argues that socialisation constitutes the basis of our construction of taste, and concluded that different social classes have distinct culinary preferences, influenced by factors such as lifestyle, class positioning, geographic location, religion, income and interests. He stated that gender differences also have a strong impact on culinary taste, citing the issue of meat consumption. Men would abundantly consume meat as a sign of their status while women would be deprived thereof, reflecting their inferior position. Fish, however, would mostly be consumed by women, as it was not considered manly, and belonged to eating practices not applicable to men. However, according to Mennell (1996), this was mostly noted in the past, when hierarchically higher-positioned classes had a strong consumption of meat, and working classes had their nourishment restricted to cereals, vegetables and rich foods (Seymour, 2003). Mead (1931) notes that this differentiation was observed in Britain, where "the poor ate to live, while in too many cases the rich lived to eat" (Mead, 1931: p9). Bourdieu (1984) defends
The determinations of RNA polymerase activity can therefore be manipulated to remove 'background' RNA polymerase activity from the count results. If actinomycin D inhibits RNA polymerase, these values can be subtracted from the count data to give levels of actual RNA polymerase activity in normal cell conditions. As the determinations of activity were performed in duplicate, average results for each cell culture can be determined. The percentage RNA polymerase activity of BEV-infected cells can be plotted graphically, as demonstrated by figure 2. The BEV genome can be used directly to translate protein; therefore new mRNA need not be produced for translation to occur. New RNA is only produced when the BEV genome is replicated. Once the polymerase enzyme and the poly-binding proteins have been synthesized, the genome forms a replicative intermediate. Up to five replication complexes, and therefore polymerase enzymes, can move along the replication template at one time, held apart by the polyprotein. New single-stranded positive-sense mRNA genomes can then be used for further replication and translation, or be packaged into virion particles. Upon processing of the RNA dot-blot, a photographic copy of the RNA was produced, as illustrated by figure 3. Where the film has turned dark compared to the background, it indicates that RNA is present on the filter. A labelled probe became bound to the RNA, which can be identified when an antibody binds to it. A second antibody carries an enzyme which generates light from a chemiluminescent substrate. The second antibody binds to the first antibody, and therefore the more exposed the film, the more bound RNA is present. Table 2 demonstrates the results interpreted from figure 3. The results are expressed as percentage concentration of RNA, but for purposes of graphical representation log values are more appropriate. Therefore the end point dilutions for the time points are demonstrated below.
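The background-subtraction and averaging steps described above can be sketched as follows (a minimal illustration; the function names and the example count values are assumptions, not the actual scintillation data):

```python
def mean(values):
    """Average of duplicate determinations."""
    return sum(values) / len(values)

def corrected_activity(duplicate_counts, duplicate_background):
    """Average the duplicate counts and subtract the averaged
    actinomycin D 'background' counts, leaving the activity
    attributable to the viral RNA polymerase."""
    return mean(duplicate_counts) - mean(duplicate_background)

def percent_of_peak(activities):
    """Express each corrected activity as a percentage of the
    maximum, for plotting against time post-infection."""
    peak = max(activities)
    return [100.0 * a / peak for a in activities]
```

Expressing each time point as a percentage of the peak, as in figure 2, makes cultures with different absolute count levels directly comparable.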
The logged values can now be plotted graphically, as shown by figure 4. Upon translation of viral structural proteins and replication of the BEV genome to produce progeny genomes, BEV can begin to assemble new virion particles. The polyprotein is cleaved to Vp0, Vp3 and Vp1, which can form a 5S structural unit which, upon association with other units, eventually forms a 75S empty capsid. The BEV genome associated with the Vpg protein can be packaged into the capsid. Once Vp0 is cleaved to Vp2 and Vp4, the BEV particle becomes infectious and leaves the cell by lysis. Table 4 illustrates the end point for
It can be seen that this sort of investigation into centrifugal pump performance characteristics is extremely useful in analysing how well a pump will work in certain situations. The graphs derived would be invaluable in a situation where you had to pick a pump to be used in a system. For example, you could use them to determine what speed and power intake you would need in order to get a particular discharge. Overall, the techniques used in this investigation and their results are a versatile tool in analysing the performance of pumps.", "label": 1 }, { "main_document": "highly different from both Cadbury's and Galaxy with the same extent. Additionally, Galaxy appears to be significantly different from Waitrose as well. The same case happens to the attribute of Melting (Mouthfeel), whereas all the significance levels of difference are lowered. Besides, ASDA seems to be the most significantly different from Galaxy. Referring to Mouthcoating (Mouthfeel), the exact same pattern of differences between ASDA and Cadbury's, Galaxy is observed. Furthermore, Cadbury's and Galaxy are both significantly different from Waitrose as well. By comparisons, both Cadbury's and Galaxy are more significantly different from Waitrose than from ASDA. Last but not least, in the case of Bitter (Aftertaste), again both Cadbury's and Galaxy are found to be significantly different from ASDA and Waitrose respectively. Besides, Cadbury's is slightly more different from ASDA than from Waitrose. As discussed so far, we have already known about the way and the extent to which the four chocolate samples differ from each other. However, in order to quantify the significant difference lying between samples, a Spider diagram has been plotted to offer you a clear view of the overall sensory characteristics of each sample by the 7 key attributes, as referred to the appendix-4. 
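The way such graphs can be read across speeds follows from the non-dimensional groups and the pump affinity laws; a short Python sketch makes this concrete (illustrative only; the duty-point numbers are assumed, not measured values from this experiment):

```python
def flow_coefficient(q, n, d):
    """Non-dimensional flow coefficient C_Q = Q / (N * D**3)."""
    return q / (n * d ** 3)

def head_coefficient(h, n, d, g=9.81):
    """Non-dimensional head coefficient C_H = g*H / (N**2 * D**2)."""
    return g * h / (n ** 2 * d ** 2)

def scale_with_speed(q1, h1, n1, n2):
    """Affinity laws for the same impeller: discharge Q scales with
    speed N, and head H scales with N squared."""
    ratio = n2 / n1
    return q1 * ratio, h1 * ratio ** 2
```

For example, a duty point of 0.01 m^3/s at 10 m head at 2000 rpm maps to about 0.015 m^3/s at 22.5 m at 3000 rpm, which is how a characteristic measured at one speed predicts performance at another.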
As shown on this diagram, Waitrose Belgian Milk turns out to be the brownest among the four samples; it also has the least mouth-coating effect in terms of mouthfeel, and is thereby believed to have the thinnest texture of the four. Furthermore, it seems to possess the strongest chemical aroma among the four and a very strong bitterness in after-effects, too. The ASDA sample has the greatest after-effects of bitterness and the firmest first bite in mouthfeel, so it seems to be the hardest in texture. It also possesses the least brownness and shininess in appearance and the lowest melting rate in mouthfeel. Again, it seems to be very thin in texture due to its low mouth-coating effects in mouthfeel. In contrast, in terms of texture, both Cadbury's and Galaxy Milk are the highest in mouth-coating effects and in melting rate of mouthfeel. Consequently, they appear to be the thickest in texture among the four. They also turn out to be the softest in texture, due to the lowest values for first bite. In terms of appearance, Galaxy Milk is found to be the shiniest and the least brown among the four, while Cadbury's is fairly shiny and brown as well. Apparently Cadbury's has the least artificial content in its formulation, given its lowest chemical note in aroma. Both of them are almost free from bitterness in after-effects, which again possibly represents the natural characteristics of their formulations. Last but not least, the appended PCA plot presents a very general idea about which attributes virtually drive the significant differences among the samples. This plot is well in line with the discussions so far. As seen from this plot, the attribute distinguishing Waitrose from the others is brownness in appearance, and ASDA stands out among the four due to its strong hardness and
The fence on the right bank at the bottom of the section was bending, and some wooden posts had fallen over; it therefore requires repair. There is a great abundance of vegetation covering the channel and banks, which could support various invertebrate species. However, the presence of footpaths on either side would cause substantial disturbance to flora and fauna, in particular preventing the migration of invertebrates between the grazing land and the stream. One of the main purposes of the stream is to provide water flow to the mill, so management pressure is likely to favour this use over wildlife enhancement. The existing habitat is of general interest for recreational reasons, providing a reasonable ecological landscape, but is of lesser conservation value. There is not great potential to enrich wildlife along the stream, mainly due to its purpose and the adjacent land uses, the college in particular, but it should be maintained so as not to diminish the existing features.
Motor vehicle braking systems are a critical component of a vehicle's safety, and as a result rigorous research and development has aimed to optimise the braking performance of motor vehicles. The functional requirement of the vehicle braking system is to dissipate the kinetic energy of the vehicle into heat energy, in order to bring the vehicle to a controlled stop. The performance of the braking system can be expressed as the rate at which the retardation occurs, and it is this factor which braking system design aims to optimise. This study aims to investigate the design of motor vehicle disc brakes in terms of both the theoretical and practical design considerations. The principal objective of this study is to investigate the variables associated with the design of a disc brake for a typical saloon car weighing 1000 kg. Disc brakes are a type of friction brake in which the braking forces are generated by the pressure of a high-friction brake pad against the surface of a rotating disc, or brake rotor. There are two types of disc brake in use, namely fixed caliper and floating caliper disc brakes. Floating caliper disc brakes utilise a single piston which presses the brake pad against the rotating disc. The reaction force causes the caliper housing to shift to the right, pressing the remaining brake pad against the opposite face of the disc (see fig.1). Upon release of the brake, piston seals with defined deformation properties retract the piston from the disc. The fixed caliper system, however, uses a separate piston to actuate each brake pad on each face of the disc. Although the fixed caliper disc brake is the simpler in terms of manufacture, and is the stronger of the two types, the system has been largely displaced by the floating caliper design, since many road vehicles are restricted by available space and the floating caliper system provides a compact alternative.
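The energy-dissipation requirement described above can be put into rough numbers for the 1000 kg saloon with a short Python sketch (a simplified model; the deceleration, friction coefficient and effective radius below are assumed example values, not design data from this study):

```python
def kinetic_energy(mass_kg, speed_ms):
    """Kinetic energy the brakes must dissipate as heat: E = 0.5*m*v**2."""
    return 0.5 * mass_kg * speed_ms ** 2

def braking_force(mass_kg, decel_ms2):
    """Total retarding force for a chosen deceleration: F = m*a."""
    return mass_kg * decel_ms2

def clamp_force(brake_torque, mu, effective_radius, friction_faces=2):
    """Piston clamp force implied by a required brake torque, with the
    pad friction coefficient mu acting on both faces of the disc:
    T = friction_faces * mu * F_clamp * r_eff."""
    return brake_torque / (friction_faces * mu * effective_radius)
```

For example, stopping the 1000 kg car from 30 m/s turns 450 kJ into heat, and a 7.5 m/s^2 stop requires about 7.5 kN of total retarding force shared between the four brakes, which illustrates why pad friction coefficient and disc geometry dominate the theoretical optimisation.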
In addition, the fixed caliper disc system is found to be sensitive to high temperatures caused by prolonged use, which can result in brake failure due to overheating brake fluid. The floating caliper system is found to be far less sensitive to high temperatures, and consequently, this type of brake failure is uncommon. As a result, the floating caliper system is most commonly", "label": 1 }, { "main_document": "costly. On the other hand, acquisitions are also costly and do not guarantee complete transfer of competitive assets. Those reasons make the IJV more attractive to the foreign firm. IJV is also attractive to local firms which can extend existing capabilities to cover a wider product line without full investment in production facilities (Chenal. 2002). However, Chinese firms cite the access to technology (better transfer with IJVs) and to foreign market (Daniels, Krugal. 1985). In general, a firm is more likely to cooperate with \"indigenous\" firms when the economic, political and cultural systems of the foreign country differ from the local country (Luoal. 1995). For example, the country is the world's largest producer of oil and second largest steel producer. (Fey 1995) In China, personal relationship ( The Chinese can use their \"guanxi\" to promote products through sales-force marketing (Luoal. 1995). However, guanxi alone cannot ensure venture success at all and is not a substitute for basic organizational fundamentals. National Semiconductors and Hitachi. From 20% in 1981, US import restrictions protected 35% of manufactured goods in 1984. For the Japanese, the trend was a clear signal [to invest locally] (Reichal. 1986). Moreover, in some sectors, several countries like India or China governments may have strict rules ('the stick') for foreign investment, giving no choice other than setting IJVs (Hall 1984). Access to local markets might require local equity participation. 
Moreover, through incentives ('the carrot'), the Chinese government encouraged IJV setup to conserve foreign exchange, increase efficiency and create jobs (Luo et al. 1995). Faced with seemingly unbeatable foreign competition, many US companies decided that it is more profitable to delegate complex manufacturing to their Japanese partners: e.g. Bendix (machine tool company) can produce a small turning machine for $85,000 in the US; the same machine produced and shipped by the IJV with Murata (MT company in Japan) costs only $65,000 (Reich et al. 1986). To reduce costs, there is also the opportunity for IJVs to mix and assemble bulk materials shipped [from high- to low-cost locations] for final distribution. Also, special government incentives helped firms opening IJVs (Daniels et al. 1985). Developing new products and penetrating new markets are too expensive for a company to always go it alone. For example, ICL (UK computer firm) could not develop mainframes without Fujitsu (Hamel et al. 1989). Cost or risk sharing confers benefits through joint projects in areas characterized by extremely high development costs and uncertain demand, or short product or technology life cycles (Jeongsuk et al. 1991). These economic rents can be the result of risk reduction, economies of scale and scope, or production rationalization (Luo et al. 1995). Can we still suggest that some companies engage in IJVs to exploit those benefits? Most probably; the financial market shows interest in the potential economies of scale. Parents forming joint ventures in the identical and related-complementary categories reported higher gains in abnormal returns than those forming other types of ventures (Jeongsuk et al. 1991). Despite these expected advantages from IJVs, the failure rate is significant. Depending on the study, the IJV failure rate is between 50% and 70% (Seung et al. 1996).
To benefit from IJV advantages, we will need to", "label": 0 }, { "main_document": "the lenis/fortis opposition cannot be said to be truly voiced or voiceless in English, it is true to say that voicing, and the various effects of voicing in the environment of the obstruent, is the principal basis of this distinction. In a lenis obstruent, while the consonant itself may or may not be voiced, there is greater voicing in the environment of the consonant - the voicing is 'closer', and this is manifested and perceived in a variety of ways. In a final fortis, there is more [-voice] time due to the clipping of the previous voiced segment and the consequent lengthening of the voiceless closure time, and the more 'forceful' plosion or longer frication. In initial position, the delay in the onset of voicing in the vowel following a fortis produces the characteristic voiceless aspiration. Most of the apparently disparate phenomena of the fortis/lenis contrast, therefore, can be ascribed to actions or by-products of voicing, which is increased for lenis, and decreased for fortis. We can summarize by saying that the one unifying feature of the contrast is voice, although it appears in various forms. In this respect, using the terms voiced and voiceless is arguably the closest we can come to a single unifying feature, although as a complete description it is inadequate. To use the description voiced and voiceless is adequate only in as far as i) voice plays a role in the distinction and ii) these terms are widely used and understood. In a teaching situation, however, where a learner is experiencing difficulty between two sounds where the only distinction is [+/- fortis], it is misleading to describe the difference in terms of [+/- voice], when it may be far more productive to draw attention to, for example, aspiration and clipping as distinctive features of English fortis, or to reassure them that the clusters /sp st sk/ are, in effect, no different from /sb sd sg/.
It can be said that there is no single discrete productive or perceptive feature that will predict whether a sound falls into what we feel to be the discrete categories of fortis or lenis, but rather we can think of various phonological features acting on multiple continua which the listener uses to reach a decision about the category of fortis or lenis: The voiced/voiceless question is an important reminder that a speech sound, like other forms of perception, is a complex array of various and variable fragments of information which combine to produce an apparently simple, singular and unified identity in the mind of the listener, and that a description such as voiced/voiceless or lenis/fortis is necessarily a convenient over-simplification.", "label": 1 }, { "main_document": "the VL and three from the VH domain. The phosphorylcholine-IgG complex is stabilised by van der Waals forces and relies on the binding of a positively charged trimethylammonium group, which electrostatically interacts with two negatively charged IgG glutamate residues inside the cavity, as well as binding of a negatively charged phosphate group by three CDR residues, namely positively charged arginine and lysine, and tyrosine. The availability of both complexed and uncomplexed structures for the same antibody permits the evaluation of the possibility of conformational changes occurring upon ligand binding. In the case of phosphorylcholine or other small molecules, no conformational changes take place. But later studies with larger ligands revealed that significant changes can occur upon binding (7). Hen egg-white lysozyme, whose interactions with IgG have been extensively studied, shall serve as an example for the binding of macromolecules.
There are several antibodies raised against lysozyme, and each one interacts with it in a slightly different manner, but in general IgG's amino-terminal domains do not exhibit significant conformational changes except for \"opening up\" a little to allow for more intimate contact with lysozyme, and all six CDRs are involved with the epitope across a region that spans about 30 x 20 Lysozyme binds with a rather flat surface, except for the protrusion of a glutamine side chain into the antigen binding site, which is surrounded by three aromatic side chains and hydrogen-bonded to a carbonyl oxygen atom (11). Formation of antibody-antigen complexes has different results depending on the type of pathogen involved. Binding certain gram-negative bacteria leads to activation of a serum complement cascade that results in C8 and C9 drilling themselves through the bacterial membrane and causing lysis of the bacterium. In the case of gram-positive bacteria, complement is activated to form a C3 and C5 complex, which in turn attracts phagocytes. Also, certain bacterial toxins can be neutralised by binding to them, as well as viral adherence sites, whose inactivation then renders the virus incapable of adhering to and penetrating a potential host cell. Experimental proof for this direct antiviral effect is that monovalent fragments of IgG exert similar neutralization activity to the intact antibody but are - contrarily - unable to activate complement, since this function is performed by the Fc fragment (3). Several protein-carbohydrate interactions involving the heavy chain in Fab and Fc fragments of IgG have been discovered by X-ray crystallography. These are post-translational glycosylation sites, which are especially important in the Fc fragment of IgG. The Fab fragment is also glycosylated, but its glycosylation sites are not conserved.
Three can be found on VL, but one oligosaccharide complex is also attached to an asparagine residue within CDR2 of VH, yet its composition and structure differ from the biantennary complex found in the CH2 domain, since it is of the high mannose type. The following figure illustrates which regions of the molecule become glycosylated (8): One conserved glycosylation site in the CH2 domain has been found in a turn-segment between two β-sheets. Bound to the asparagine 297 residue is a biantennary oligosaccharide complex, roughly 3 x 4 nm in", "label": 0 }, { "main_document": "one generation, all gene frequencies in repeated probability tables, turn out to be the same number. This is known as Hardy-Weinberg equilibrium, from the work of the English mathematician G. H. Hardy and the German physician Wilhelm Weinberg in 1908. They also formulated the following formulas for genotypic frequency for a genetic locus with two alleles, taken from Boyd and Silk (2003: 57): freq(AA) = p², freq(Aa) = 2pq, freq(aa) = q², where p and q are the frequencies of the two alleles and p + q = 1. Therefore if sexual reproduction is just random mating between individuals, it cannot be the only force leading to evolution. Of course, sexual reproduction could be considered an infinitely more complex contest. Matt Ridley (1993), together with most evolutionary psychologists, would regard describing sex amongst humans as random mating as very questionable. Socially stratifying psychological hierarchies exist between individuals and societies, as do genetic selection processes such as the competing dichotomy in neo-Darwinist sexual selection theory between the 'Fisherian' and 'good-genes' schools. Matt Ridley (1993) might even go so far as to characterise the mechanism of sexual reproduction as a response to the perennial battle against parasitic invasion, within which the genders themselves were determined, separating us from an asexual nature. Modelling goes to show that other genetic mechanisms must be involved in evolutionary change. To illustrate the quote, consider a genetic disease such as Tay-Sachs, which usually kills the individual with the homozygous genotype by the age of four.
With every generation that passes, all homozygous individuals with the lethal allele will be removed from the population. Consequently, two copies of the allele are removed for every homozygous individual, leaving the allele of interest only in heterozygous individuals. This substantially lowers the overall gene frequency within the population (unless other selective factors sustain its original prevalence) and increases the gene frequency of the non-detrimental allele. The strength and direction of selection depend on the environment it operates in. If medical care is available for the treatment of a potentially deadly genotype, the strength of selection against the otherwise deleterious allele is negligible. While the example of deleterious alleles being removed from a population shows how selection changes gene frequencies, it does not show how selection can lead to the evolution of new adaptations and the change in gene frequencies which ensues from it. It is important to note that all organisms are capable of producing more offspring than can survive (Malthus). The offspring that do survive are, on average, more likely to have an anatomy, physiology or behaviour that better prepares them for the demands of their environment. The principles of environmental variation are often associated with the 'modern synthesis', a body of theory based on empirical knowledge attributed to the biologists Wright, Fisher and Haldane in the 1930s. They were also known as 'Neo-Darwinists', so called due to the prior popularity of theories based on Mendelian genetics following its rediscovery and subsequent increase in popularity. Mendel's theories were perceived as incompatible with Darwin's theory of evolution by natural selection at the time.
The Neo-Darwinists showed how Mendelian genetics could in fact be used to explain continuous variation, leading to a Darwinist theory of inheritance, and accounting for the way variation is maintained within", "label": 1 }, { "main_document": "Quintavelle v HFEA S13(5) HFE Act 1990 Mance LJ, R ex parte Quintavelle v HFEA Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol. 6, pp.163-182, AB Academic Publishers, S13(5) HFE Act 1990 Idea from Sally Sheldon and Stephen Wilkinson, Hashmi and Whitaker, an unjustifiable and misguided distinction? Medical Law Review, 12, Summer 2004, pp.137-163, Oxford University Press 2004 The consequences of Effectively, where PGD is used, the HFEA can authorise the \"selection of embryos for blue eyes, blonde hair or desired sex, without reference to Parliament\". Brownsword suggests that when Phillips justifies HLA and PGD as methods to ensure embryos are \"suitable for the purpose of being placed within [women]\" It is at risk from \"undue influence,\" In chorus, the STC's Confirmed by the HFEA when asked by Law Lords during the Hearing into Fertility Regulation, Tuesday, 8th March. Sourced from the CORE pressure group; online at core.org, 2004: Lord Phillips MR, R ex parte Quintavelle v HFEA Lord Phillips MR, R ex parte Quintavelle v HFEA Roger Brownsword, Regulating Human Genetics: New Dilemmas for a New Millennium, Medical Law Review, 12, Spring, pp. 14-39 Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol. 6, pp.163-182, AB Academic Publishers, Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol.
6, pp.163-182, AB Academic Publishers. Ibid. Roger Brownsword, Regulating Human Genetics: New Dilemmas for a New Millennium, Medical Law Review, 12, Spring, pp. 14-39. Ibid. Sally Sheldon and Stephen Wilkinson, Hashmi and Whitaker, an unjustifiable and misguided distinction? Medical Law Review, 12, Summer 2004, pp.137-163, Oxford University Press 2004. Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol. 6, pp.163-182, AB Academic Publishers. Ibid. Robert G. Lee and Derek Morgan, Human Fertilisation and Embryology, Regulating the Reproductive Revolution, Blackstone Press Limited, 1st Edition, printed in 2001. Science and Technology Committee Report on Reproductive Technologies, March 24th 2005. British Fertility Society. Lord Robert Winston, sourced from Robert G.
Lee and Derek Morgan, Human Fertilisation and Embryology, Regulating the Reproductive Revolution, Blackstone Press Limited, 1st Edition, printed in 2001 This In The HFEA distinguished Realistically, only the latter argument distinguishes Whittaker from Hashmi: if the potentially dangerous PGD procedure is used to prevent a child from being born with a serious illness, the \"benefits outweigh the risks\", Diamond Blackfan Anaemia Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and", "label": 1 }, { "main_document": "us to communicate out of lesson time; this ensured all the group were well informed on possible decisions to be made and the times and locations of group meetings. Subsequently this issue, which could have really hindered our progress, did not impact any of our decisions or the interview itself. I feel this is a good reflection of the excellent communication skills in our group and demonstrates our ability to work well together. We all had equal roles and did not have a specific leader, and therefore everyone was valued equally and everyone's opinion in the group was considered and respected. We had a few disagreements with regard to small issues, such as one person wanting one question to go before another. To resolve such issues we simply discussed why one felt a certain order would be more appropriate, and through majority voting we were able to come to fair decisions. Luckily we were all dedicated within our group and had similar styles of working, which enabled us to be consistently organised throughout. In Pagne's ' I feel that by working within a group these criteria have definitely been met. I have learnt a lot from my group and the experience of having to do the Responding to Others Interview. One aspect of the RTO interview that could possibly be changed or modified in order to improve it would be having a few specific closed questions so we could get clear answers to specific questions we had.
For example, questions that simply required yes or no answers. These could verify the responses we already received, as even though we had open questions they were quite specific, and therefore having a few closed questions after an open question could have brought a bit more clarity. Another possibility would be doing the interview in a less structured manner; for example, we relied a lot on the questions we had prepared, whereas I feel we would have made the candidate feel slightly freer to be more open if it had felt more like a structured 'chat'. If we were to conduct the interview again, maybe this could be an issue we could discuss and try to put into action. Despite these possible suggestions for change, I feel the interview went very well and each individual made a very important contribution to the group. My personal contribution within the group varied throughout; each member was given a role initially, which at the beginning we stuck to, however with time our roles became less structured and we each took turns in fulfilling the duties of each group role. My initial group role was 'innovator', which meant the person who had the creative and imaginative input. I also helped with the 'organiser' role, especially nearer the end of the process when we were not being as organised as I would have liked, so I took matters into my own hands and organised the group myself. During the whole process I feel each member fulfilled certain aspects of all the roles identified by Cottrell (2003); these include 'Chairperson', 'Timekeeper', 'Record-keeper', 'Organiser', 'Resource investigator' and", "label": 1 }, { "main_document": "Bronze Age Levant underwent some substantial changes from its Middle Bronze Age character. Egyptian and Asiatic attacks had tremendous impact on life in the Levant during 18 A general decline in both population and urban life can be noticed, where the latter resulted from the former.
Egyptian control imposed on the region after the military campaigns enabled the Pharaohs to send people from the Levant to Egypt in order to obtain a cheap labor force. Moreover, the invasions themselves undoubtedly motivated some of the inhabitants of towns to escape from the enemy. In addition, the tribute that the Egyptians imposed on the native people also made some of them move to satellite towns or villages in order to flee from the taxes. Therefore the decline in the number and size of the Levantine towns has been recorded in the archaeological record, but it is difficult to trace the exact movement of people. The question that arises is that of the distinction between ruralisation and decline. Here again the answer is not straightforward and regionally unified. The political and economic situation in this area of the Mediterranean contributed to the development of trade, therefore some coastal sites and harbors were established or survived, if not flourished. On the other hand, more inland people seemed to turn to local production of goods mainly for subsistence (and to pay tribute in some places), therefore smaller settlements were favored over the big cities. Within the towns themselves some changes also inevitably occurred, as there were fewer people, hence not so many buildings were necessary, and due to Egyptian control fortifications were often just slightly repaired or left to decay, in all probability because the Egyptians wanted to prevent rebellions in the towns. For this reason they also established administration centers, garrisons within towns and separate fortresses. Paradoxically, some signs of prosperity have been recorded in the archaeological evidence; however, this comes from wealthier towns and in addition from elite residences and palaces, which are not representative of the population as a whole.
Therefore, the argument is that the elites, facing the Egyptian influence and an unsteady political situation within their city-states, had to struggle more to maintain their power, hence showing more material wealth as a sign of prosperity. Therefore, it might be more appropriate to use the term \"change in lifestyle and subsistence methods\" than decline. Indeed we can speak about a decline in population and most definitely in the occupied area, however there is no clear evidence for a decline in material wealth. In fact, there are signs of prosperity in elite groups as well as some significant improvements in the quality of dwellings of regular inhabitants. However, in order to perform a valid investigation of the topic in the future, it is crucial that more sites are archaeologically examined and excavated, especially in the hill country, as there is little evidence from that region.", "label": 0 }, { "main_document": "non-heterosexual individuals; highlighting the importance of family relationships. Many websites are available for same-sex couples who are planning to have a child through AID; whether formal or informal. Examples of websites such as The definition We must not assume that only legal parents will have parental responsibility because the latter can be acquired through law by other individuals e.g. a local authority in the form of a care order; albeit in a limited sense. Beatrice who is the legal parent has automatic parental responsibility according to section 2(1) and (2) of CA 1989; specifically section 2(2)(a) Annabel who is not Celia's legal parent although genetically linked to her does Section 3(1) Children Act 1989: In this Act \"parental responsibility\" means all the rights, duties, powers, responsibilities and authority which by law a parent of a child has in relation to the child and his property.
Section 2(2)(a) CA 1989: Where a child's father and mother were not married to each other at the time of his birth-the mother shall have parental responsibility for the child Annabel could potentially acquire parental responsibility through section 4A CA 1989. However the latter option is only available to married couples or civil partners and not cohabitants. Do Annabel and Beatrice have a civil partnership It seems unlikely since the words 'living together' indicate cohabitation. But they can subsequently register a civil partnership; allowing Annabel to acquire parental responsibility through section 4A(1)(a): parental responsibility agreement or section 4A(1)(b): court order. Both subsections require that the person acquiring parental responsibility is Does Annabel qualify? We have determined that Annabel is not the legal parent although she is the natural parent. Therefore if 'parent' in those sections imply legal parent; then Annabel can subsequently apply for parental responsibility. However it seems more likely for the law to mean Furthermore step-parent is defined as \"a person who is married [or a civil partner] to the father or mother of a child but is It would seem absurd for the law to allow a natural parent who does not have parental responsibility to acquire it through a provision entitled \"Acquisition of Parental Responsibility by Therefore Annabel cannot be granted parental responsibility through section 4A since she is Celia's natural parent. However if donor egg was used Annabel could technically be able to acquire parental responsibility. The Civil Partnership Act 2004 came into force on the 5th of December 2005. Editor: Martin, Elizabeth A., Section 4(1)(b) CA 1989. Section 4(1)(c) CA 1989. If Annabel and Beatrice had split up shortly after the birth of Celia, it would still not be possible for Annabel to apply for parental responsibility since she would not qualify as 'step-parent'; being Celia's natural parent. 
Furthermore even if the law were to read 'parent' as legal parent and In order to apply automatically for a residence order Annabel has to be in a civil partnership with Beatrice according to section 10(5)(aa) CA 1989 It is likely that Beatrice will consent to such a residence order since there is no indication of any altercation. Annabel can still apply for a", "label": 0 }, { "main_document": "The muscle, bone and fat will develop and the rate of growth can be controlled by the breed, the type of nourishment and the sex. Above is a drawing of a cow, and the parts of the body highlighted are those that need to be looked at in order to see what proportion is carcass fat. Too much fat has to be removed and takes time, so selecting the correct weight of the animal and the fat cover is vital. At Manydown, the Aberdeen Angus is killed at a weight between 620-630kg, which is a reasonably good size and provides a long loin which produces good quality cuts of meat. In the Manydown Farm Shop there is a wide range of meats, pies, bacon and sausages. It started in 1994 and is now very popular with the locals and surrounding areas. The Farm Shop has won many awards; the most recent was the Hampshire Meat Producer/Retailer of the Year in 2004. All the meat is slaughtered in an abattoir in Farnborough and then the carcasses go back to mature for 3 weeks at Manydown. They are hung for tenderness, and this gives the meat its quality. In supermarkets such as Tesco, by comparison, the time between the animal being killed, sent to an abattoir and put on the shelf can be less than a week. That is why people are starting to prefer the freshness, the quality of meat and the customer service at independent butchers or farm shops. Customer relations are very important at Manydown because they want to provide a good service and show they care about their animals, the environment and the community.
Manydown also has 800 ewes which are a mixture of Dorset ewes and North Country Mules crossed with Hampshire rams. The ewes are grazed in parkland or permanent pasture. They lamb four times a year: Christmas, late February, April and May. Nutrition comes from a winter forage of stubble turnips, and concentrates and pellets are also used for lambs, which are then weaned at 4 months. Free range chickens are a recent addition and they are free to roam in an enclosed area (away from any danger). They are fed GM-free foods and are slaughtered at Manydown. Manydown has an old-fashioned rare breed, the large black pig. They are extremely easy to manage and produce good quality meat. Finally, Manydown grows oil seed rape, wheat, barley, beans, linseed, herbage seed, poppies and silage maize. Manydown has many conservation systems such as beetle banks and hedge trimming. There are many butterflies, rare plants and animals roaming these areas. The labour force is enthusiastic, skilled, loyal and motivated. There are 8 employees in total who work in all different areas of the farm. They are all very flexible, and meetings are held to encourage teamwork and negotiate objectives. In conclusion, Manydown has a lot to offer the community, wildlife, its workforce and the environment. It takes into account many areas and issues that are seen to be very important today. The beef cattle is", "label": 1 }, { "main_document": "An old-fashioned debate between free trade and protectionism has recently re-emerged in the form of wrangling between multilateralism and bilateralism. From the liberal political economy perspective in IPE, bilateral trade treaties are regarded as attempts to incorporate each party into a global market, and thus are completely compatible with the postwar international trade regime based on non-discriminatory multilateralism.
In contrast to this, economic realists condemn the discriminatory nature of the PTAs on the grounds that bilateral or regional trade relationships are contradictory to the postwar arrangement based on the spirit of 'freer and fairer' trade to which the GATT earlier committed. This article first analyses the inter-relationship between multilateralism and bilateralism through the examination of historical episodes in the international trade realm since the second half of the nineteenth century. This will be carried out with a categorisation into three chronological phases. The first phase, as a prologue, shows that bilateralism does not necessarily undermine trade liberalisation. On the other hand, the wave of protectionist regionalism among LDCs in the 1960s-1970s and the explosion of bilateral PTAs after the end of the Cold War imply that the absence of hegemonic leadership in international trade facilitated the enhancement of bilateral PTAs. Finally, this article concludes that such an increasing number of bilateral PTAs inevitably erodes the multilateral regime, but still, more salient emphasis should be placed on the existence of hegemonic leadership rather than on the dichotomy between multilateralism and bilateralism. The postwar international trade regime launched under the auspices of the General Agreement on Tariffs and Trade (GATT) has been challenged by two strands of criticism. Some observers who centered on the unequal nature of international trade criticised the Most Favoured Nation (MFN) clause (Evans 1968, 92-93) and the principle of reciprocity (Spero 1981, 188) on the grounds that, in practice, these key provisions of the GATT promote unfair free competition between less-developed members and industrialised advanced countries.
On the other hand, the liberal purists advocating multilateralism showed great concern for the emergence of bilateral trade treaties among the GATT contracting parties, which might undermine the global free trade order (Krugman 1993, Bhagwati and Panagariya 1996). The former issue still remained a significant cause which might endanger the multilateral free trade regime, despite a series of internal reform measures such as the introduction of the Generalised System of Preferences (GSP). However, a much more dreadful scenario is likely to be caused by the latter. Given that reckless mercantilist interstate competition during the interwar period triggered World War They point out that Preferential Trade Agreements (PTAs) The PTAs are regarded as contradictory to the non-discriminatory rule on which the GATT is based, because they allow exclusive access rights to selected partners' markets. Although the GATT article In this article, Preferential Trade Agreements (PTAs) will be used rather than bilateral Free Trade Agreements (FTAs). This term reveals the discriminatory nature of bilateral or regional FTA arrangements, in contrast to the original stipulation of the GATT. PTAs are defined as regional institutions formed by arrangements between two or more countries for the elimination of trade barriers.", "label": 0 }, { "main_document": "home.\" The United States, the dominant power in world politics nowadays, has always spent only a small amount of aid on countries of little strategic interest. Also, the kind of aid offered is inappropriate (in terms of technology and the absorptive capacity of the recipients) and can actually harm the poor by asking for more sacrifices on their part or by keeping them away from the possibility of benefiting. The self-interest of the donor countries may actually be deleterious for the population in recipient countries. Tony German, Judith Randel, \"Trends towards the new millennium\", 2000 (London: Earthscan, 2000).
Stokke, \"Foreign Aid: What Now\", p. 107. Mosley, Moreover, sometimes the recipient countries do not give high priority to poverty reduction. These negotiations are eventually futile since once the implementation of the policy or project starts, there is no actual control of the process anyway. As Hyden mentions nobody actually owns aid because while the donor imposes some strings, it also has to accept the recipient's interests and hence in the actual practice control become very ambiguous. In addition, political actors in recipient countries seem to be more interested in seeking a clientelistic or cartelistic allegiance which encourages 'bargain over a slice of the pie rather than how to make the pie bigger\". Cassen, Robert & Associates Goran Hyden, \"From Bargaining to Marketing: How to Reform Foreign Aid in the 1990s\", Olav Stokke (ed.) Hyden, \"From Bargaining to Marketing: How to Reform Foreign Aid in the 1990s\", p. 199. However, the good news is that we witnessed a post- Washington consensus: the World Bank has actually acknowledged that economic growth and poverty reduction has not been successful in the 1980s and 1990s while financial liberalisation (with weak regulation and volatility of capital flow) generated severe disturbance in economic performance. Consequently, the New World Development Report (2000 and 2001) has been surprisingly 'inconsistent' with the previous neo-liberal approach. These new pillars demonstrate a broadening manner of understanding poverty and its causes. Selectivity, aid being directed to 'good', namely, democratic governments has replaced conditionality. Moreover, the focus on sector aid has its own shortcomings too since it may become just another name for project-aid, which has proved to be poor-oblivious, uncoordinated, fragmented, unsustainable and, given the donor's pursuit of commercial interests. 
\"One aspect, then, of the Bank's retreat from liberalisation is a simple change in expository style: from aggressive advocacy of specific policies to a much more agnostic posture (...)\". Paul Mosley, \"Attacking Poverty and the 'Post-Washington Consensus'\". White. \"Will the New Aid Agenda Help Promote Poverty Reduction?\". Mosley, \"Attacking Poverty and the 'Post-Washington Consensus\". Consequently, it is generally said that aid has failed to reduce poverty. Aid is indeed an instrument for poverty combat but its power to reduce poverty is only minor. Even if the target of the donors in adopting the Shaping the 21 As the chapter has tried to emphasise, the crisis of foreign aid is actually part of the overall crisis of our changing globalised world since the discourse of globalisation has captured the practice of foreign aid institution too.", "label": 0 }, { "main_document": "Dietary survey is an assessment of current intake and is an essential tool in assessing the relationship between diet and disease. It is also used on clinical medicine and assists in policy making. Weighed food intake, in theory, is considered the 'Gold standard' as there is no error in portion size. It takes account of weighed and estimated dietary intakes. Another advantage is that it is a prospective method hence des not rely on the memory of the subject. However, Garrow (1974) was correct in stating \"The measurement of habitual food intake of an individual must be amongst the most difficult tasks a physiologist can undertake\" Errors are likely to be introduced by food composition tables, as well as coding errors (where exact matches cannot be found) and it is often difficult to assess food that is not prepared by the individual. This method is consequently not ideal for subjects who frequently eat away from home. Individuals are also likely to change their diets under scrutiny as well. It is however more precise than estimated intakes or past intake methods. 
The use of the CompEat foodbase program database has also made what was in the past a very laborious task very practical. Please see the method detailed in the practical booklet. All food and drinks consumed over a 6-day period were weighed and recorded. Please see the attached dietary survey forms and Foodbase diary analysis results. Some definitions: the dietary reference values (DRVs). Generally, the distribution of requirements for a given nutrient within a population is assumed to be normal, or Gaussian, and it covers: an amount of the nutrient that is enough for only the few people in the group who have low needs; and an amount of the nutrient that is enough, or more than enough, for about 97% of people in a group. If the average intake of a group is at the RNI, then the risk of deficiency is very small. Estimated Average Requirements for energy: in general the Foodbase analysis showed that I (the subject) consumed less than the dietary reference value of energy per day; 1,720 kcal compared with the recommended 1,940 kcal. From observing the dietary survey form it appears that too little energy was taken in on some days and hardly any on others. Also, as a lot of the food consumed was pre-prepared, e.g. sandwiches and soups, it was difficult to estimate the amount of oil used, so it was left out completely. This may be responsible for the low values. Another negative point was the low intake of foods that are rich sources of insoluble non-starch polysaccharides (NSP).
NSP-rich foods are generally less energy dense, more bulky and may induce greater satiety than NSP-free foods; their laxative effect has been well documented (Department of Health, UK 1991). As concerns vitamins and minerals, the following minerals were consumed at levels higher than the DRV, showing the most 'excessively' consumed first. Taking into account the definition of DRVs, only those values vastly different from the DRV were considered causes for concern. For example: The
One of the oldest and most important preconceptions that Europeans had about extra-Europeans was that cannibalism was prevalent in their world - 'The sheer distance of Columbus' voyage led him to expect monsters.' Indeed, there was a European obsession with cannibalism which is often overlooked today. This was done to set the natives apart from the Europeans in order to depict them as inferior, and it was often ascribed to various peoples with only minimal evidence, as 'cannibalism... is a way of defining other people by locating them in a system of values which is an inversion of one's own'. Michael Palencia-Roth argues that, 'In any intercultural encounter, the way people are viewed has a great deal to do with how they are treated.' The Europeans arrived in the New World expecting the extra-Europeans to be savages - 'Columbus and others expected to see cannibals in their first voyage but did not' - and treated them accordingly. This maltreatment was heavily linked to the advent of slavery. The Cannibal Law of 1503 was the first instance of slavery in the New World. It was undertaken by the Spanish in order to control unlawful elements of the extra-European community. The term 'cannibal' was often extended to include those who were not even flesh eaters, for the sole purpose of forcing them into slavery - 'The Europeans found that the newly discovered peoples... could be used as an almost inexhaustible supply of forced labour.' Therefore, the Europeans arrived in the New World expecting cannibalism to be rife. When they discovered that it was not, they merely exaggerated how widespread it actually was in order to give themselves an excuse to subjugate the local peoples and even to invent slavery. Michael Palencia-Roth, 'The Cannibal Law of 1503' in Jerry M. Edwards & Robert E. Lewis
Also at this site possible symbolism has been found in the form of a bone with fine incised parallel lines (Stringer and Gamble 1993:161). This could be symbolism, but it could also be a sign of a stage of bone production, i.e. the removal of flesh. Stringer and Gamble also assert that the sites may not be Mousterian and may date to a later time, which would change the importance of these finds (Stringer and Gamble 1993:183). The evidence at La Ferrassie seems to be inconclusive as to whether this is a burial; however, as with Shanidar, the number of burials found suggests a form of tradition surrounding the treatment of the dead. I shall now analyse the arguments against Neanderthal burials. One archaeologist who argues against the idea of Neanderthal burial is Robert Gargett. Mellars sees Gargett's view as stating that although they may have buried their dead, this was not a symbolic or religious ceremony (Mellars 1996:375). Pettitt has outlined Gargett's arguments in his paper The Neanderthal Dead. Gargett's evidence comes from studying sedimentology, taphonomy and stratigraphy. He asserts that finding full remains does not necessarily mean burial. His arguments take the form of four distinct points in answer to what he sets out as five key questions. Firstly, pits are not necessarily burials. Secondly, rock falls and natural death could account for the complete findings. Thirdly, caves create a good preservation environment; he also defines good sites as out-of-the-way places. Lastly, more burials in caves are a reflection of the increase in cave-dwelling hominids (Pettitt 2002:3-5). These are Gargett's arguments against deliberate Neanderthal burial. Pettitt's response is that Gargett's ideas oversimplify the evidence. His first point, addressing Gargett's view that pits are not necessarily burials, uses the 'old man' of La Chapelle-aux-Saints, which was found in a pit.
Gargett describes this pit as being a natural formation, but it has straight sides, is of a regular depth and is small, so the body would have had to have been placed into it. His second point is a response to Gargett's claim that full skeletons are the result of cave-ins; he disagrees, citing the Kebara skeleton, which lies in a pit that has been cut through two hearths on lower occupation levels (Pettitt 2002:4), which would suggest that the pit was dug with a purpose and was not just natural. Gargett's third point is based on the hypothesis that most burials are at the edges of the cave, where sediments may have been deposited naturally over time to preserve the body; however, many burials are found in the centre of caves, which would not be a good place for natural preservation. Gargett's last point is based on the idea that we find more skeletons in caves from this time because caves were inhabited more. But evidence from modern-day and ancient cultures shows that burials are not always in the living spaces (Pettitt 2002:5) and are generally outside
The Danes are a very maritime nation, hence the salvage and conservation of the ships is hugely significant for them, as part of their national tradition and cultural heritage. The Skuldelev ships are interesting both because of the unconventional purpose they were put to in the past and because of the innovative techniques that have been used to excavate them. By far the most detailed and recent report from the excavations that has ever been published is the volume written by the two site directors, Ole Crumlin-Pedersen and Olaf Olsen. This is the main work that this evaluation will be based on, although it is important to remember the numerous smaller publications that have appeared since the 1960s. Undoubtedly, a strong point of this project is the approach with which the archaeologists started their work. The investigation has been carefully targeted since its very beginning. Firstly, its aim was to document and confirm the date of what had previously been regarded as Queen Margrethe's Ship of the 14th century. In addition, the newly developing techniques of scuba diving for archaeological excavations could be applied at this site, since it lay only 0.5-3 metres under the water surface. There was a need to practise these new techniques, and people preferred to practise them on less significant sites. However, shortly after the first surveys and initial excavations, the site unexpectedly revealed far more evidence than had ever been hoped for. Bearing that in mind, the aims of the project shifted to the salvage, conservation and exhibition of the finds. Undoubtedly an important advantage was the extensive funding received from various organizations interested in promoting Danish cultural heritage. In 1962, in order to raise the ships safely, it was necessary to drain the whole area of the site (ca.
2,500 m²). It could not be drained completely straight away, because the timbers needed to remain moist; otherwise they would deteriorate and become virtually impossible to lift. Additionally, the stones from the ships' cargo had to be prevented from sinking into the soft wood of the construction elements, which would inevitably cause further damage. The volume published in 2002 is an excellent example of a detailed, in-depth and well-structured publication of a project report. Crumlin-Pedersen and Olsen have skilfully gathered
There are five sections working at regional level: the Scottish Section, the North East Section, the Shadow North West Section, the Shadow Irish Section, and the Shadow South West Section. The Geographical Sections organise local and national conferences and events. Most of the regional sections have been launched recently, so it is expected that there will be an expansion of regional activities in the future. The Institute itself is a member of the Society for the Environment (SocEnv), the European Federation of Associations of Environmental Professionals (EFAEP), and the International Union for the Conservation of Nature and Natural Resources (IUCN) (IEEM a, 2006). In terms of funding, owing to the restriction on the information the Institute makes available to the public, there was no evidence of the funding sources supporting its operation. Since the Institute is a company which needs to earn income from its services, it is estimated that it is likely to obtain income from the sale of publications and from consultation fees as well as membership fees. There are over 2,300 members drawn from individuals engaged in a broad spectrum of work in ecology and environmental management in the public, voluntary and industrial sectors (Fig 1). Depending on the level of professional qualification, memberships are categorised into five classes: Full, Associate, Graduate, Affiliate, and Student. Full memberships cost For example, PhD holders need to have at least two years of practical experience, and honours degree holders in a relevant subject are required to have a minimum of four years' experience as an ecologist or an environmental manager in order to be eligible to apply for full membership (IEEM a, 2006).
The following five policy objectives are stated by the Institute (IEEM a, 2006): The services provided by the Institute are divided into four areas: Training workshops, Conferences, Publications, and Provision of opinion on ecology and environmental management through consultation (IEEM a, 2006). Training workshops: Workshops are not intended
Training can be seen as a long-term investment, as it enhances the employee's commitment towards the employer and diminishes the tendency of the employee to voluntarily leave the organisation (Torrington, 2005). In fact, as Price (1997:190) points out, "human resource development (HRD) is a strategic approach to investing in human capital". As discussed previously, a human resource management approach will be taken towards the employees, as opposed to a personnel management approach. This implies that the company will consider training as an investment and will aspire to improve the quality of its recruits. Also, given the low uncertainty avoidance feature, the company will focus on the empowerment of staff, i.e. the ability to take decisions independently. Nevertheless, formal on-the-job training will be crucial for the operational staff. In fact, training in Monarka will be seen as an integral part of the organisational strategy. As the company aims to move from an ethnocentric orientation to a polycentric one and, with a strong internal labour market, it is logical that the company promotes continuous development of its workforce in order to encourage promotion within the organisation. It will be important that the international managers (PCNs) are trained or have worked in another Monarka hotel before having the opportunity to go to Nepal. Training in Nepalese culture would also minimise the potential inability to cope with cultural differences. Moreover, as the local workforce is very low-skilled and as the organisation wants to integrate with the local culture, it will refer to local organisations and governmental and institutional programmes for the training of its core workforce.
Finally, as the organisation wants to move toward a geocentric approach, it would be appropriate to provide the chance to", "label": 0 }, { "main_document": "DelElement and DelElement2 together take a list and a variable and remove the first instance of the contents of the variable in the list. DelElement2 keeps iterating with L1 getting an item smaller each time until L1's head matches LookingFor. At that point it returns the list with L1's head removed. To do this, it stores the items in the list that it has already checked in L2, and concatenates the two lists as it terminates. The function will always terminate after a finite amount of time if L1 is finite, as L1 becomes an item smaller each iteration, and the worst-case scenario is that it terminates when it's empty. As it only looks at each item in the list once, and stops when the item has been found, it is running at optimal efficiency. The relatively inefficient concatenate operation is only ever run once. The grand idea of IdenticalList is this: if L1's head gets removed from both lists with each iteration, then if the lists are identical apart from their order, then both should become an empty list at the same iteration. Therefore, if one gets to an empty list before the other, then they are not the same, and a value of false can be returned. If they both get to an empty list at the same time, a value of true can be returned. This function will always terminate if L1 is finite, as it is getting smaller with each iteration until it terminates at an empty list. It may terminate before this point, however, if L2 is smaller than L1. This solution appears to work in both extreme and usual conditions, with different data types. As both of the recursive functions are tail recursive, and both terminate as soon as they can, they can be said to be efficient. There are no efficiency-costing concatenate functions in the recursive bits of the functions, just once as a value is passed out. 
As I got into this piece of work, I enjoyed doing it more than I expected. There's a great moment on each question where you press enter after putting in a lot of code or chasing a bug, and it all just works. I have learnt a lot about using lists in Caml, as I hadn't had that much experience with them before. I had previously written two or three functions that act on or use lists, but I had never written a function with two lists as its arguments before. I think the thing I found most difficult was remembering to use the right brackets all the time in the functions, and remembering which variables contained lists and which contained single values. I often got type synthesis errors when it tried to match int list list list with int list list, when everything should really just be int list. Once I gained experience of identifying where the mistakes were, I was able to program the functions in the latter questions with very few errors and bugs. I think that after doing this coursework, I have a much more
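The bracket mix-ups described above are easy to reproduce. The snippet below is an illustrative sketch in OCaml (the modern descendant of Caml), not code from the coursework itself; empty_together is a hypothetical name, not the coursework's IdenticalList, though it uses the same "both lists empty out together" idea:

```ocaml
(* Illustrative sketch only, not the coursework's code: each extra
   pair of brackets nests the list one level deeper, which is exactly
   what triggers "cannot match int list list list with int list list". *)
let xs = [1; 2; 3]      (* xs : int list *)
let ys = [xs]           (* ys : int list list *)
let zs = [ys]           (* zs : int list list list *)

(* A tail-recursive function taking two lists as arguments, in the
   spirit of the two-list functions described above: true only when
   both lists become empty on the same iteration. *)
let rec empty_together l1 l2 =
  match l1, l2 with
  | [], [] -> true                            (* both emptied at once *)
  | [], _ | _, [] -> false                    (* one ran out first *)
  | _ :: t1, _ :: t2 -> empty_together t1 t2  (* drop a head from each *)
```

Because the recursive call is in tail position, the compiler turns it into a loop, which is why functions written this way stay efficient on long lists.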
Its policies towards the environment are as follows [2]: (a) To reduce the direct impact of Centrica's activities on the environment. (b) To reduce the indirect impact of Centrica's products and services by helping customers to make informed decisions about their use. The above policies are implemented by [2]: (a) Implementing a sound environmental management system. (b) Improving resource efficiency, especially in transport, energy and waste. (c) Contributing to environmental stewardship. (d) Working with stakeholders to improve communication. As a public limited company with a huge stakeholder base, mainly shareholders and customers, Centrica needs to have policies which adhere to strict environmental legislation. This is important for the long-term success of the business. It operates in the energy and related services sector, which has a higher impact on the environment than other industry sectors such as software and telecommunications. The direct and indirect impacts of Centrica's activities on different areas of the environment are as shown in the figure below [2]. From the figure, it can be seen that the major environmental impact areas are climate change, air pollution, waste and hazardous chemicals. These impact areas are related to the gas and power provision industry, which forms the major business of Centrica. Recently, the impacts have increased owing to the acquisition of 7 gas-fired power stations in the UK, 2 in North America and a gas storage facility in the UK. [2] This is because power stations and storage facilities by their very nature increase air pollution through the emission of hazardous substances and climate-change gases. Also, British Gas, being the UK's largest domestic central heating and gas appliance installation company, is responsible for significant emissions of greenhouse gases. [2] The use of resources, such as fuel emissions from vehicle transport and from building lighting and heating, also adds to the impact.
Since the impacts on the environment are very severe, Centrica is minimizing them, following the policies stated above, by implementing a sound environmental management system (EMS) and involving stakeholders to work together towards achieving its targets and objectives. The successful implementation of the policies can be seen from its compliance with most environmental legislation and its target
In Kenya there are cottages and beach lodges furnished with African-style furniture and fine art from local artists, which forms one part of the local community support stated in the Group's mission statement. Altogether, one can say that there are only a few adapted elements, which will always be planned by the head office in order to ensure consistent quality and consensus with customers' needs. See also Appendix D for more attributes and products offered. The Group follows a marketing-oriented approach to pricing which considers costs, competition, elasticity of demand and product positioning (Kotler et al., 1999 and Kasper et al., 1999). Furthermore, differential pricing is widespread in the hospitality industry. In hotels, yield management is implemented to achieve high revenue (Kimes, 1989). Competitors' prices for the cheapest double room range from EUR 50 up to EUR 120 (see Appendix D). As the organisation has adopted an ethnocentric strategy, its focus is on a stable price level throughout its units, so that one unit does not outperform another. Prices are always stated in euros in order to overcome the problems of inflation and exchange rates. However, owing to competitors' prices, which start very low (see limitations), the Group has to adapt and set its price level slightly lower than in other countries. This, however, is not seen as a large problem, due to the distance from Europe and the UK and to lower costs for the company within Kenya. The Group has therefore decided to sell its cheapest double rooms from about EUR 110. In general, however, there are different price rates for different segments and times of booking (the concept of yield management). Sales departments nevertheless receive standardised manuals and training on how to open and close price categories. The right choice of location is imperative for organisational success (Middleton and Clarke, 2001:59). The Group's hotel will be close to
The critical T value at 33 degrees of freedom is smaller than the calculated T statistic for this test right up to the 1% significance level, which means that, if the result is statistically significant, we can be 99% certain in rejecting H0 and accepting H1. The T statistic in this test gives a P value of 0.001, which means the test statistic is 99.9% significant, so for Sainsbury, a customer having kids does influence the amount spent on fair-trade chocolate. The graphs show that this influence is positive: customers with kids spend more on chocolate than those without. In this case there is only a 1% chance that a type one error could have occurred. The tabulated T value for this test is smaller than the calculated T value only up to the 20% significance level; therefore, if the result is statistically significant, we can only be 80% certain in rejecting H0 and accepting H1. The P value given in this test is 0.738, which is clearly above any acceptable level of significance, which means we cannot accept these results as conclusive. The critical T value for this test is not less than the calculated T value (-0.019) at any acceptable level of significance; therefore, if the result is statistically significant, H0 would be accepted and H1 rejected. As reflected in the graph, there is very little difference between the purchases. Indeed, the P value of 0.985 shows that the result is almost entirely insignificant. From the graph and table above it is clear that medium TV watchers have the highest average expenditure on all three types of chocolate. Light watchers on average do not purchase fair-trade or organic chocolate, while heavy watchers have the lowest average expenditure on standard chocolate. The table and graph above show that Sainsbury's medium television watchers, on average, spend the most on all three types of chocolate and that light TV watchers do not purchase chocolate at all.
Heavy watchers are shown to purchase considerably more fair-trade and organic chocolate than standard, whereas medium watchers purchase more standard chocolate than the other types. H0: The amount of TV hours watched is irrelevant in explaining chocolate expenditure (the means are all equal). H1: The amount of TV hours watched is a relevant factor in explaining chocolate expenditure (the means are significantly different). From the results of the test summarised in the table above, the F statistic exceeds the critical F value of 4.605 where P = 0.01, which means that, if the result is significant, we can be 99% confident in rejecting H0. The P value of effectively 0 indicates a highly significant result, therefore H0 is rejected and H1 accepted. The amount of TV hours watched is relevant in explaining organic chocolate expenditure. The F statistic in this test does not exceed the tabulated F values at any acceptable level of significance (F The P value of 0.106 is only just above the 10% significance level (0.1), which indicates that the result is not extensively insignificant and can be accepted
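As a rough sketch of the mechanics behind these tests (the survey data themselves are not reproduced here, so the sample lists below are made-up illustrations, and `two_sample_t` and `decide` are hypothetical helper names), the pooled two-sample t statistic and the reject/fail-to-reject rule can be written as:

```python
import math

def two_sample_t(x, y):
    """Pooled (equal-variance) two-sample t statistic, as used when
    comparing mean chocolate expenditure between two groups."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Sample variance of each group (n - 1 denominator).
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled variance, weighted by each group's degrees of freedom.
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def decide(p_value, alpha):
    """Reject H0 when the P value falls below the significance level."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

# Made-up weekly expenditures for customers with and without kids.
with_kids = [3.2, 4.1, 2.8, 3.9, 4.4]
without_kids = [2.1, 2.6, 1.9, 2.4, 2.2]
print(two_sample_t(with_kids, without_kids))

# P values reported in the text, checked at the 1% level.
for p in (0.001, 0.738, 0.985):
    print(p, decide(p, 0.01))
```

The decision rule is the same one applied throughout the analysis above: a P value of 0.001 clears the 1% level, while 0.738 and 0.985 do not clear any conventional level.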
During processing the meat will undergo heat treatment, which will kill bacteria and bring the contamination level down to a point at which it can be consumed without any risk of causing illness. This is why the acceptable level of contamination is lower in fully cooked meat products. Post-processing contamination of the meat would occur if the cooked product came into contact with a surface carrying unacceptably high levels of contamination. This could be a surface that has been in contact with fresh meat, which would contaminate the cooked product with potentially harmful levels of bacteria. The storage conditions of meat are fundamental in keeping the numbers of microbes to a minimum. If kept refrigerated at 5 In cooked meats, heat-resistant bacterial spores can survive processing. Therefore, appropriate storage conditions should be maintained even when the bacterial count is low. The counting method used does not give an exact number of microbes present, so the processed and raw meats are given a range against which to assess their acceptability. It is acceptable for fresh meat to have a level of contamination of up to 50 Enterobacteriaceae bacteria/g and 100 Cream cakes should not contain detectable numbers of these bacteria. This is because fresh meat will be cooked before consumption, killing the bacteria present, whereas cream cakes will not undergo any further heat treatment, so these microbes should not be detectable in them. The source and nature of the initial contamination of cream cakes is the bacteria in the cream itself. These bacteria exist naturally at relatively low levels, as the milk comes into contact with the environment on exiting the cow's udder. The milk, and the cream which is separated from it, should be kept chilled and processed soon after milking to prevent multiplication of the bacteria. During processing the cream will be heated to pasteurisation temperatures.
This kills bacteria and brings the contamination level down far enough that, if the cream is stored at chilled temperatures, so few bacteria will remain that multiplication will be very slow. The cream should be packed in aseptic conditions to prevent post-processing contamination. If post-processing contamination did occur, the number of bacteria in the cream could pose a risk to the consumer. When the packaging which separates cream from the environment is
But, like other phenomena of globalization (movements of goods, services and capital), it has to be regulated in a fair way.
Because the person studied falls into the underweight category, it is essential for her to maintain a balanced diet to avoid excess weight loss. Although the person studied gained 0.5 kg during the five-day period, this can be regarded as negligible. A possible source of this error is inaccurate measurement of weight due to the weight of clothing. On the other hand, although the human body can store energy as fat as a back-up, it is still very important to maintain a balanced diet overall. Dietary excesses contribute to obesity, heart disease, bowel disease, etc.; in contrast, dietary deficiencies contribute to starvation, marasmus, kwashiorkor, etc. It is also very important to maintain a suitable intake of vitamins and minerals on a daily basis, as the human body has an uptake limit for them. Excess intake of vitamins and minerals will simply pass through the alimentary canal and be ejected with the faeces, while sudden, very large intakes can cause vitamin or mineral toxicities. People suffer from very serious diseases when deficient in vitamins and minerals, for example scurvy (deficiency of vitamin C) and anaemia (deficiency of iron).
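The relationship between a cumulative energy imbalance and weight change can be checked with some back-of-envelope arithmetic. The five-day totals below are illustrative, not the study's diary data, and the figure of roughly 7,700 kcal stored per kilogram of adipose tissue is a commonly used approximation rather than a value from the study.

```python
# Back-of-envelope energy-balance check (illustrative numbers only).
# A kilogram of adipose tissue is commonly taken to store ~7,700 kcal.
KCAL_PER_KG_FAT = 7700  # widely used approximation

def expected_weight_change_kg(intake_kcal, expenditure_kcal):
    """Predicted change in body mass from a cumulative energy balance."""
    return (intake_kcal - expenditure_kcal) / KCAL_PER_KG_FAT

# Hypothetical five-day totals with intake below expenditure
change = expected_weight_change_kg(intake_kcal=9500, expenditure_kcal=11000)
print(round(change, 2))
```

On numbers like these the predicted change is a loss of only a fifth of a kilogram or so, which illustrates why an observed 0.5 kg gain over five days is more plausibly attributed to measurement error (such as clothing weight) than to energy balance.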
The head of the tone-unit can be either high or low, too, meaning higher or, respectively, lower in pitch than the beginning of the tonic syllable (Roach 1991:154-155). One of the more complicated issues, especially in terms of notation, is how the tone is spread over the syllables of a polysyllabic utterance. For although it is traditionally said that the tonic syllable carries the tone, in case there is a tail, i.e. syllables following the tonic syllable, the tone spreads to the end of the tone unit. In falling and rising tones, the downward or, respectively, upward movement begun on the tonic syllable continues step by step with each following syllable until the borders of the speaker's individual pitch range are reached. If there are still other syllables remaining, they are continued at the borderline pitch reached at the end of the progressive movement (Roach 1991:149). If a fall-rise occurs in a tone-unit with a tail, the falling movement takes place on the tonic syllable and then the pitch remains low until the rise occurs on the last stressed syllable of the tone-unit, or on the very last syllable of the unit in case there is no stressed syllable in the tail. The reverse happens in rise-fall tone-units with a tail (Roach 1991:153). Roach marks pauses, i.e. the boundaries between utterances like this: Tone unit boundaries are indicated by: Underlining identifies tonic syllables: The tone is indicated by a symbol right before the tonic syllable. In Roach these appear in superscript, but for technical reasons they will be written in subscript in this essay. ` fall Example: Stressed syllables in the head are preceded by one of the symbols below, depending on whether they are high or low.
' high head Example: Stressed syllables in the tail are preceded by this symbol: Extra pitch height, as sometimes found in emphatic utterances, is marked by the following symbol preceding the particular syllable and the tone symbol: Roach acknowledges that more detailed notations of pitch movement and height are possible, but argues that it would not be profitable to do so (Roach 1991:142). David Brazil's objective is to help advanced learners of English to achieve a greater proficiency in intonation, particularly to avoid misunderstandings in communication. Therefore his approach aims to be very applied and functional. Like Roach, he starts by breaking speech up into pieces called 'tone units' (Brazil 1994:7). However, the symbol he uses for tone-unit boundaries is different
There is more interaction between states and firms and among corporations themselves. As a result, the emergence of a privileged transnational business civilisation, in the context of triangular diplomacy (i.e. state-state, state-firm, firm-firm), has limited the independent option for state governments. For Dunning (1997, 12-17, 46), the new global order requires the adoption of a new form of capitalism, which he calls 'alliance capitalism'. Firms are making cross-border strategic business alliances to reduce costs, lower capital investment, benefit from research and development, reduce political risks and gain knowledge about markets. He points out that, despite the integration of international production and cross-border markets by TNCs, the developing countries' slice of the cake is not so tasty; it constitutes the low value-added part of the chain, with labour-intensive activities. The solution for them, he says (1997, 37), is to build their own networks; their firms should establish alliances with corporations from other developing countries in the same region. Although his advice seems sensible, it has not been the rule in the majority of developing countries; the bulk of their alliances is still with corporations from developed countries. In Mexico, for instance, more than 70% of the maquiladoras belong to American, Spanish and Japanese companies. The remaining 30% belong to developing countries, such as China, Korea, Taiwan and the Philippines. This shows the current stage of interaction among developing countries: they use EPZs in their own territories and, when expanding abroad, look for EPZs as well. For Mexico, the first idea behind establishing EPZs was to foster short-term jobs in periods of economic recession. Three decades (1950-70) of import substitution industrialization had not proved efficient for the Mexican economy and, at the beginning of the 1980s, the country was strongly dependent on petroleum export revenues.
The 1982 international oil price crisis made it impossible for Mexico to service its foreign debt, and it subsequently declared a moratorium. The International Monetary Fund offered conditional help. The terms were deregulation, privatization, and liberalization of trade and investment policies.
One of them is called 'the Big Draw' and it allows kids to develop their creative ideas for drawing These activities can encourage more children to feel like going to museums, galleries or theatres, even if they are unfamiliar with them. As a result, this would increase the number of families who visit these attractions. Creating an atmosphere where families can spend fun time together might be a key aspect for future decisions. So far, it seems that Liverpool has many advantages and gives tourists the image of a well-organised city, but there are also some problems that should be considered. For example, the closing time of the attractions is too early (around 5 to 6 o'clock) Therefore, it is hard for most business people to visit these attractions. To deal with this, they could offer a special service for busy visitors, such as designated days on which the attractions stay open until relatively late. As Liverpool is the victor in the contest to be named European Capital of Culture 2008, it should possess some competences. However, those competences do not necessarily guarantee Liverpool's success, although European Capital of Culture 2008 has created ideal opportunities for enhancing cooperation among its key stakeholders (as stated above). In order to analyse the effectiveness of the Liverpool model, benchmarking may be a useful tool to compare it with other tourist destinations and identify opportunities for improvement. Greenwich was a tourist destination chosen to host the Millennium Dome exhibition in 2000, which is a similar event to European Capital of Culture 2008 in Liverpool. Furthermore, Greenwich and Liverpool both hold status as World Heritage Sites and maritime
His love for both Caroline and Gino is his connection with the world outside himself; he has become part of it rather than a spectator. Through this process his search for personal and social fulfilment has progressed significantly, as he knows 'Romance only dies with life' (p41). Ricky has a similar experience in Characters unlike these two, such as Stephen and Gino, have a greater ability to see opportunities for connection and fulfilment because they aren't constrained by conventional society. They recognise the valuable things in life, such as connecting with the earth or loving one's child unconditionally. Forster demonstrates what can happen to a person if their innate abilities are denied them through Leonard Bast in He represents a generation sadly disconnected from its more fulfilled ancestors. He is infected by his internalisation of modern values, and upon seeing him Margaret wondered 'whether it paid to give up the glory of the animal for a tailcoat and a couple of ideas' (p170) - by not connecting with his true self he is unable to achieve any kind of fulfilment. Although Philip finds fulfilment through his relationship, there is a paucity of heterosexual relationships in Forster's novels, possibly as a result of his homosexuality. Mary and Henry, Lilia and Gino, Ronny and Adela and others all fail to make truly meaningful connections. This may be because they are often in opposing binaries, but, like Ricky, who wishes that there was a special place to record the importance of friendship, Forster also appears to value unconventional relationships, with connections where you wouldn't necessarily expect to find them. One of the most unusual relationships is that of Aziz and Mrs Moore in As a homosexual writer, Forster would have been fearful of being labelled.
Rather than helping us to connect with something, labelling seems to distance us from the essence of what it is, an idea explored in The bond between Mrs Moore and Aziz is so unusual that it is indefinable as a relationship - a notion that Forster seems to enjoy. Despite their differences, they are able to see the beauty in each other's souls and make a connection. Their meeting significantly occurs in the idealistic 'Mosque' section, and so it is fitting that Aziz finds a friend there - a place that is an 'expression in architecture of a recognisable human aspiration' (p161, Messenger, 1991). Their relationship seems idealized, yet it gives the most pleasure throughout the novel. The friendship is frequently frustrated, yet it continues beyond death as their connection is of great value to Aziz even after Mrs Moore has died. This suggests that whilst life is fragile, truly fulfilling connections gain a place in a spiritually higher sphere. Unconvincing heterosexual associations coincide with weak sexual relationships. It has been said that in It is equally unsatisfying in Even odder is the situation between
Abnormal deposition of amyloid in the brain occurs in Alzheimer's disease. It seems the extra gene results in an excess of amyloid. Consequently, almost all people with Down's syndrome over age forty develop Alzheimer's disease (Cutler N & Sramek 1996). There are also some common environmental factors that are responsible for a familial frequency, such as a higher risk in people who live in areas of high background radiation, such as Edinburgh, which is built on radioactive granite. Trace metal contamination is another theory proposed to explain the development of the disease. Animal studies (Toates 2001) have identified degenerative neurofibrillary tangles similar to those found in Alzheimer's disease after rats were given aluminium salts. Post mortems have found that there is a 10-30% increase in aluminium deposits in the brains of Alzheimer's sufferers. It is not yet known if a reduction in aluminium could be of benefit to patients. Most aluminium contamination is found around the amyloid proteins. Some researchers think that there could be a link between Alzheimer's and They believe that as we age, the immune system may lose the ability to recognise the individual's own body and begin to attack it. As a result, we develop "anti-brain" antibodies which cause neural degeneration. There is to date little evidence to support such a theory. The link with multiple sclerosis is the main reason this is being investigated, since with MS the brain becomes visible to the rest of the body, which begins to attack it. Another theory is that it may be a slow virus. This idea comes from a study of tribes in Papua New Guinea who transmitted Kuru (Alzheimer's Disease Society 1997) after eating the brains of their deceased tribesmen who had died from it. Another area being investigated is that of neurochemical disturbances.
Enzymes related to the neurotransmitter acetylcholine are greatly reduced in the brains of individuals suffering from Alzheimer's disease, especially in the hippocampus. In this essay I have discussed the pathology and neurological correlates of Alzheimer's disease and how this has informed practice. With an aging population and a climbing incidence of the disease, it is more important than ever for healthcare professionals to understand the treatment needs of this group of individuals. Alzheimer's disease has a lot of similarities to the normal aged brain; it seems that
It recognises that in order to really understand the nuances of each customer's issues it may need to buy in specialist resource, and it is making the investment to do so. Linked to this is the fact that, in order to develop end-to-end solutions that fully meet customer specifications and fully solve their problems, Fujitsu is partnering with other organisations that have the required specialist knowledge to enable it, under its own brand name, to deliver the full service. In this way Fujitsu is making a name for itself as a full service provider, increasing market awareness and brand equity as well as satisfying customers. In summary, Fujitsu's updated appreciation of buyer behaviour has led to a more consultative sales approach to ensure a full understanding of the customer's business and issues, and a more collaborative approach to solving these problems, focussing on client led development. This has clearly been a success for the company, resulting in increased profitability, client base, repeat business, customer satisfaction and employee morale.
However, when one realises, as Koenig-Archibugi does, that \"[g]lobalization is not supplanting traditional lines of social conflict and cooperation, but is redrawing them\" The desired direction can change in the view of different groups with varying interests; the 'overclass', as mentioned, seek to maintain the exploitative link between the 'core' and 'periphery' through the creation of elitist regulatory institutions. The egalitarians, however, state that those benefiting from globalisation should compensate those who do not gain from it, perhaps through the creation of a global welfare state. Koenig-Archibugi, M. in Held, D. and Koenig-Archibugi, M. (2003), Stiglitz offers a resolution to the current overclass situation: voting power in institutions such as the IMF and World Bank should be less skewed; there should be an increase in the transparency of decision making and an overall change in the approach to crisis management; a change in the rules on bankruptcy when countries are unable to pay their national debt should be considered and there should be further concern for employment and working conditions. It can also be argued from a Therefore, the question of whether or not globalisation is a good thing and should therefore be increased is almost entirely dependent on the point of view one takes; whether a However, one theme is clear throughout all stems of thought: globalisation, whether a new phenomenon or not, needs to be treated with great caution and respect, the process may create losers as well as winners and what is important is for those better off to protect and subsidise others in order to maintain equality and preserve the good name of globalisation. Stiglitz, J. in Held, D. and Koenig-Archibugi, M. (2003),", "label": 1 }, { "main_document": "a reported 20 billion at current prices. 
Documentation has found that the "dominant concern of policy makers was their aspiration to maintain stable levels of demand for British beef in domestic and export markets, and their desire not to increase commercial costs or public expenditures... A concern with public health was never a dominant factor in ministerial decision making" Wallace, 1998, p.4 Phillip James, in James, 1998, p.45 Van Zwanenberg et al., 2003, p.29 The second factor leading to the erosion of consumer confidence was the fragmentation of responsibilities for food within government between MAFF and the Department of Health. This fragmentation led to significant confusion of responsibility and public accountability. Increasing authority on the part of the DoH, especially in terms of food nutrition and safety issues, witnessed the growth of a strong public health and food policy culture. The dichotomy of the food sector led to "inevitable turf battles between the two departments over food" and hence exacerbated the lack of control of the MAFF over its environment of operation. Marsden, p.197 Thirdly, and perhaps most significantly, there was the lack of regulatory control over the MAFF's operation. With regard to the BSE crisis, in light of the reluctance of Ministers to take costly preventative measures, it was the flawed regulatory system which allowed for the dismissal of certain scientific facts. Reports have highlighted the way in which this flawed structure enabled the vilification of scientific conclusions advocating the need for governmental precautionary action.
This meant that ministers and officials could "hide behind the advice of the scientists displacing responsibility for policy on to committees of experts." Regulatory gaps even stretched as far as to allow, in some cases, scientific committees to be set up so that they produced outcomes favouring particular policies; as one MAFF official acknowledged "you have to turn to external bodies to try to give some credibility to public pronouncements, [but] you are very dependent therefore on what the Committees then find... Really the key to it is setting up the Committee, who is on it, and the nature of their investigations." Marsden et al., p.186 Hood et al., cited in Millstone et al., 2003, p.44 BSE Enquiry Transcript 1998, cited in Millstone et al., p.45 Alongside these main factors, there were a number of other contributing shortcomings, such as its failure to implement appropriate enforcement strategies, leading to situations where for some local authorities food policy was a high priority but for others it was not. Flynn et al., 2004, p.7 James, 1998, p.45 Created in the light of the shortcomings associated with the prevalent structure, the main aim of the FSA is: "to protect public health from risks which may arise in connection with the consumption of food and otherwise protect the interests of consumers in relation to food." Three core values underline its establishment; to put the consumer first, to be open, accessible and transparent, and to be an independent voice; all of which appear to reflect the shortcomings of the policy approach of its predecessor, the MAFF . FSA 2001 cited in Lang, T.
et al., 2003
Further consonants of a similar group might be elicited through the first consonant acquired, for example, firstly acquired [ Complexity of the phonetic context of the target sound should be gradually increased, for example, practising / t / in the final position of a word, as in 'hat', before moving to initial position, as in 'top', which requires an additional aspiration component. Appropriate progression will strengthen the sensorimotor patterns and achieve the desired modification of the child's Sound Pattern Template (Lancaster & Pope, 1989, cited in Grundy & Harding, 1995). Speech production requires rapid and precise movements. Therefore, the child with cleft palate is encouraged to experiment with his articulatory ability to try a wider range of speech and to boost sensory feedback of oral kinesthetic and proprioceptive information. Next, let us look into aphasia in adults. Aphasia is the loss or impairment of language function caused by brain damage. Brain damage can be due to various problems such as cerebrovascular accident, brain trauma or encephalitis infections. Aphasia can affect different modalities of language, not only speaking, but also reading (such as alexia) and writing (such as agraphia). In this discussion, I will focus only on spoken language in aphasics. There are many classification systems for aphasia. Some use localization of brain damage, such as Broca's aphasia (frontal, where Broca's area is associated with serial organization and motor programming), Wernicke's aphasia (temporal, where Wernicke's area is associated with
The present study was done after subjects had eaten breakfast, and they were not asked to follow the guidelines in advance; this might therefore have had an effect on the results. Another factor may be that performing skinfold measurements requires experience and skill. For the majority of the assessors in this study it was their first time using this method, and the execution might not have been very accurate. Lastly, it is important to point out that the anthropometric measurements have positive and negative aspects, and that they should be used within the appropriate contexts. This citation from the WHO is a good indicator of this: \"... all physical characteristics result from the interaction of heredity and environment ... Body measurements may not always be used safely for comparing the nutritional status of genetically different populations nor for an assessment of nutritional status by reference to a world standard. They are, however, useful for follow-up of physical state over periods too short for genetic selection to affect the population in a significant way, provided gene flow is negligible.\" (WHO, 1970, cited in Fidanza, 1991). The study assessed a group of subjects, and found 9 out of 11 of these to be of good nutritional health. The measurement methods were found to be reasonably quick and easy to carry out. However, the validity of results measured by first-time assessors may be questioned. Further, some of the techniques, and especially the skinfold measurements, require a lot of practice and expertise, as well as being prone to between-assessor variance. The use of the assessment, as well as the situation and subjects measured, must be evaluated when deciding which nutritional assessment should be carried out.", "label": 0 }, { "main_document": "motivation. 
Since it has been suggested that ineffective managers are often associated with McGregor's Theory X (Sturdy 2005, personal communication), content theories based on Theory Y should provide a better explanation for employee motivation. Herzberg conducted a set of interviews with a selective group of professionals, consisting of accountants and engineers. The interviewees described when they felt satisfied and dissatisfied about their job. The incidents representing sources of satisfaction included \"achievement, advancement, recognition, autonomy, and other intrinsic aspects of work\" (Fincham & Rhodes 2005: 199), whereas those representing sources of dissatisfaction included \"working conditions, salary, job security, company policy, supervisors, and interpersonal relations\" (Fincham & Rhodes 2005: 199). Herzberg termed the sources of satisfaction motivators and the sources of dissatisfaction hygiene factors. Motivators stimulate people to work hard continuously to achieve job satisfaction, whereas hygiene factors only reflect the \"acceptable work environment\" (Fincham & Rhodes 2005: 199) and affect job dissatisfaction. Since \"motivators reflected people's need for self-actualization, while hygienes represented the need to avoid pain\" (Fincham & Rhodes 2005: 199), both factors stem from completely separate origins. The key motivators identified in the sample of interviewees are the sense of personal progress, responsibility and recognition attained from the profession. The interviewees' positive attitude towards regular managerial feedback also shows the important effect of the competence and achievement motive in this theory. There are, however, many questionable areas in Herzberg's theory. Firstly, the selective group of professionals may have established a bias by attracting a similar group of achievement-oriented employees. A study conducted by Schneider and Locke in 1971 also discloses how job satisfaction and dissatisfaction are dependent on both motivators and hygiene factors (Fincham & Rhodes 2005: 201). 
This contradicts the idea of motivators and hygiene factors having independent origins. A more important point to consider is the tendency for interviewees to internalize explanations of successes, and externalize explanations of failure (Fincham & Rhodes 2005: 201). The subjective and personalized experiences of employees have probably created biased definitions for motivators and hygiene factors. An alternative theory is Maslow's idea of individuals being motivated by a hierarchy of needs. This hierarchy separates individual needs into two sections, so that self-actualization and self-esteem are listed under higher-order needs, while social, security and physiological needs are listed under deficiency needs (Fincham & Rhodes 2005: 195). Maslow argues that there is a 'psychological growth' from the deficiency needs to the higher-order needs. This means that once a need at one level of the hierarchy is satisfied, its impact on our behavior decreases. The need at the next level will then have the more influential impact on our behavior (Fincham & Rhodes 2005: 193). Although Maslow makes many generalizations in his theory, he also accepts discrepancies resulting from individual influences (Fincham & Rhodes 2005: 197). An example is a hunger striker who satisfies higher-order needs by going on strike, despite having the unsatisfied physiological need of hunger (Fincham & Rhodes 2005: 198). This theory accepts that the 'psychological growth' from deficiency needs to higher-order needs is disrupted in such cases. Maslow's acceptance of discrepancies also shows that he is aware of Schein's 'complex' model of human nature", "label": 0 }, { "main_document": "speaker is about to say (as found in the conversation transcript). 
This led to problems in categorising it; however, when looked at without preconceived definitions (as CA aims to do as a data-driven approach), it might be possible to say that this is an example of recognitional onset because, although not continuing what the speaker would have said, TB would have to project what the speaker was likely to say in order to disagree with it (TB is protesting against the speaker's unwillingness to discuss the big questions, which JH signals by his use of 'but'). Finally I want to approach my data using Interactional Sociolinguistics (IS). This is an approach which studies 'the interaction between self and other and context' (Schiffrin 1994: 105), enabling it to combine aspects of the Ethnography of speaking and CA. For instance it makes explicit the subconscious structures and techniques which CA is concerned with, but correlates the observations with external factors such as context (which is so central to the Ethnography of speaking). To demonstrate this I intend to study the role of discourse markers (a microanalytic observation) in relation to context in both sets of data. The function of a discourse marker according to Stubbs (1983, as cited by Pridham 2002: 30) is 'to relate utterances to each other or to mark a boundary in discourse.' In my own analysis I would not only like to find out the specific functions of the discourse markers in my data but to see if there are any interactions between the function of discourse markers and the context or speech event they appear in. To begin the analysis I took Schiffrin's example of trying 'to specify the conditions that would allow a word to be used as a discourse marker' (Schiffrin 2001: 58) and found that the two most prominent discourse markers were the interjection 'oh' (occurring only in the conversation data) and the adverbial particle 'well' (found in both sets). 
Firstly, 'oh' appeared when an anecdote was being recalled, but its distribution was incredibly specific, preceding any direct quote: 'She said oh um,' 'she um said oh I'm really sorry,' 'I was jokingly said oh yeah.' These systematic examples seem to indicate that the condition for 'oh' is to precede a direct quote, functioning as an introductory marker. This reinforces what Schiffrin (1999: 286) describes as 'oh's' role in marking 'information state transitions.' 'Well' only occurred once in the conversation data: 'Well sorry I thought we were sharing it' where it is in initial position framing an exclamation; a function supported by Stenstrom (1994: 207). In comparison to the interview data this observation remained relatively consistent, with 'well' almost always occurring in initial or near initial position: 'Well I think' 'Well let me' 'Yes well let's.' In this context it is again acting as an introductory mechanism, although to both the beginnings of questions and responses: 'Well let me just... you are proud of them?' 'Well I think where we haven't been... what we have done.' The one exception to 'well' not occurring in", "label": 1 }, { "main_document": "essential to its successful evolution and global recognition. Bohr's model may have been simplistic and rather 'bodged' together, but it spurred on many other scientists and encouraged them to think about developing the existing model, such as Dirac and Pauli, or developing an alternative one of their own, challenging a view that had become front-page news in the early parts of the last century. If it wasn't for Bohr's brave attempt at incorporating two massive and contrasting physical principles in explaining atomic processes, we may never have the complicated yet technically brilliant quantum development of the atom that we have today. 
It really boils down to whether you are prepared to accept that great physical ideas have flaws and whether you are prepared to ignore these in search of the bigger picture. Alternatively you might think it is better to exploit these existing defects and slowly build up an alternative theory that gains momentum not from brilliant initiative or imagination but from the definite exclusion of previous 'grey areas' and ideas. I believe that the Bohr model of the atom has been useful despite its flaws. If you pick up any textbook or scientific publication today you will still find his model as the centrepiece of the explanation of atomic processes, despite the fact that we now understand the complexities involved and recognise that there is a lot more to the atom than a pretty picture of a nucleus and orbiting electrons. In its final form it represents just about the last model of the atom that bears any relation to the images we are used to in everyday life. Bohr's rough-and-ready approximation of atomic processes has allowed us to bridge the gap between Newtonian physics and atomic quantum ideas. Bohr's theory was famously recognised by Einstein as an 'insecure and contradictory foundation' that appeared to him as a miracle in its significance to chemistry. Bohr's model proves that scientific idealism need not be perfect and need not be supplemented by mathematical rigour but instead can be rooted firmly in the soul and passion of any upcoming scientist with the desire to succeed.", "label": 1 }, { "main_document": "coding theory. Coding theory will always play a part in our lives, especially in this computer era. Interestingly, coding theory exists outside of computers; nature also chose the 'parity check' for DNA and proteins. 
Satellites sent into space will begin to use more complex codes so that very weak signals from the vast distances of space can be received and understood, and the advances in quantum computing will call for stronger codes to be invented and used. Cryptography is an area of maths that deals with information security. In the modern world this is closely linked with computers; however, cryptography existed many years before the traditional computer. Cryptography involves taking a piece of data and encrypting it, using a key, to make the data impossible to read unless the user can decrypt the data. This can be compared to putting a piece of information into a box and locking it. Only somebody with the same key could unlock the box and read the information. Secrets between people have existed probably since the start of mankind. Sometimes secrets have to be passed on to a particular person; how do we do this without letting anyone else hear our secret? Naturally this is quite hard; people can overhear a conversation or intercept a letter. Is it possible to come up with a system so that even an intercepted message could not be understood? People quickly found that it was possible, either by creating a new language or coming up with a 'code' that nobody could understand unless you were taught it. In the beginning, cryptography was concerned with language. Most codes were language-based; this means that all that was really needed was a pen and paper. There were two main systems in use: transposition ciphers and substitution ciphers. Transposition ciphers involved taking a message and rearranging the letters in the message, and substitution ciphers entailed replacing letters by other letters, or indeed groups of letters. One of the earliest ciphers was the Caesar cipher. This involved shifting each letter in the plain text (the original message) along the alphabet a certain number of times. Caesar used a shift of three. For example, A would go to D, B to E, C to F... Z to C. This makes the Caesar cipher a substitution cipher. 
Multiple encryptions would add no more security to the data, as a shift of 1 applied twice is exactly the same as applying a shift of 2 only once. Unfortunately, we have no way of knowing how successful the Caesar cipher was at keeping secrets safe. In Caesar's favour, most of the people at that time could not read, let alone create some way to decipher his messages. Frequency analysis was discovered in the 9th century. For English, 'e' is the most common letter. Shift ciphers (the general form of a Caesar cipher) can be broken using frequency analysis: the most common letter in the cipher text would be the equivalent of an 'e'. It would be possible to then count the distance between this letter and 'e', giving the shift value. Brute force can also be used", "label": 1 }, { "main_document": "There's a saying: Recently, the trend of Lijiang is named as 'Paradise of It is really interesting to find out the reason behind this noticeable societal phenomenon, especially for the Lijiang Tourism Bureau manager. They should learn why Lijiang and This essay begins by introducing the consumer group and the tourism destination. Then marketing segmentation is used to analyze the tourist characteristics, and the framework of the Purchase-Consumption System Model (Woodside and King, 2001) is applied to explain consumer behaviour in leisure travel. During the discussion, a series of recommendations is presented to the Tourism Bureau manager. Let us begin with the two 'leading actors': It represents a newly emerged lifestyle that no academic literature has ever discussed. Given the remarkable tendency, the topic was raised by newspapers, magazines and such publications. For example, the behaviour of It gives a comprehensive description of Accordingly, they live in big cities such as Beijing and Shanghai, and have stable jobs with secure incomes. They work hard and know how to enjoy life using disposable income and time. 
Even though they are not wealthy, they do not hesitate to spend money on things they really like, such as travel. Most of them are web surfers, so the Internet is a necessity in their life for keeping in touch with friends, searching for information etc. Just as the American 'Yuppie' of the 1980s was defined by the Rolex watch and Gucci briefcase (Solomon, 1994), For example, using brand products such as LV bags; having 'elegant' interests, such as watching classic films. It is difficult to say how this lifestyle has emerged; however, it is not a unique phenomenon so far. In 2000, a similar group, 'Bobos', was heralded by an American journalist, David Brooks, in his book - David named the new upper class 'Bourgeois Bohemians', who mixed '60s-style liberalism' with '80s-style conspicuous consumption'. (Brooks, 2000) The Bobos are quite similar to Chinese Lijiang is a famous tourist destination for five main attractions: Jade Dragon Snow Mountain, Tiger Leaping Gorge, The First Bend of the Yangtze, The Baisha Murals and The Ancient Town of Lijiang. Especially the old town, which was named a World Heritage Site by UNESCO in 1997, is the ancient capital of the Naxi people, who have a long history and rich cultural heritage, in terms of Dongba music, script, architecture and clothes. Lijiang is 'a dreaming garden for people longing for a haven of peace different from today's modern cities'. (Ancient City of Lijiang, 2004). 
People who have visited give comments like this: \"Lijiang seems to have a magic power to attract me there; I will never be tired of it.\" \"A short stay in Lijiang refreshed my spirit and gave me new vigor to everyday life.\" \"Tired of city life's hustle and bustle, I especially love the peace and quiet of Lijiang.\" \"Naxi people and their Dongba culture, the strong cultural atmosphere makes Lijiang special.\" (Sina travel forum, no date) Tourism in Lijiang has developed quickly in recent years and a dramatic increase in domestic tourism contributes to 93-96% of tourists to Lijiang. (Duang, 2000) Today, tourism has replaced agricultural industry", "label": 0 }, { "main_document": "than one type of knowledge and illustrates the split between sense experience and reason. Although the two Ways appear to be in conflict with each other, they are also dependent upon each other and complement each other; giving a more complete view of the world. Parmenides writes the Way of Seeming in order to show not only the incompatibility of a theory of knowledge based entirely on reason and interaction with the real world but also the incompleteness of it. Parmenides shows the audience the importance of the senses and of language, and that even in theories of reason, sense data and language are needed in order to make sense of reason. Rather than promoting a view based entirely upon reason as Parmenides explicitly does, he is also implying the need for a worldview based upon the senses, which is where he started, and where he returned.", "label": 1 }, { "main_document": "decentralisation, the process of globalisation has caused a large increase in the inward application of armed violence in the form of intra-state civil wars. Of the 61 major conflicts from 1989 to 1998, 58 were civil wars. It seems that humankind is not willing to give up its capacity for war that easily, and the globalisation phenomenon has, whilst amalgamating states, left conflict where there was once unity. 
UNDP (1999) The common misconception that globalisation is replacing traditional methods of international interaction may cause it to appear threatening and unfamiliar. However, when one realises, as Koenig-Archibugi does, that \"[g]lobalization is not supplanting traditional lines of social conflict and cooperation, but is redrawing them\", the process appears less alien. The desired direction can change in the view of different groups with varying interests; the 'overclass', as mentioned, seek to maintain the exploitative link between the 'core' and 'periphery' through the creation of elitist regulatory institutions. The egalitarians, however, state that those benefiting from globalisation should compensate those who do not gain from it, perhaps through the creation of a global welfare state. Koenig-Archibugi, M. in Held, D. and Koenig-Archibugi, M. (2003), Stiglitz offers a resolution to the current overclass situation: voting power in institutions such as the IMF and World Bank should be less skewed; there should be an increase in the transparency of decision making and an overall change in the approach to crisis management; a change in the rules on bankruptcy when countries are unable to pay their national debt should be considered; and there should be further concern for employment and working conditions. It can also be argued from a Therefore, the question of whether or not globalisation is a good thing and should therefore be increased is almost entirely dependent on the point of view one takes; whether a However, one theme is clear throughout all strands of thought: globalisation, whether a new phenomenon or not, needs to be treated with great caution and respect; the process may create losers as well as winners, and what is important is for those better off to protect and subsidise others in order to maintain equality and preserve the good name of globalisation. Stiglitz, J. in Held, D. and Koenig-Archibugi, M. 
(2003),", "label": 1 }, { "main_document": "the calculations the calculated strain converges towards the actual strain energy within the structure. Using shape functions in a displacement distribution leads to an over-stiff solution that is not valid for every point within the structure but is valid globally. This over stiff solution leads to stresses being overestimated, which is not beneficial when designing. An element must be able to possess constant strain throughout in order for the finite element method to work on any assumed displacement distribution and a reliable model for a small element to be achieved. This condition is reached when the second term in the displacement distribution polynomial is linear. The plate shown in the diagram below has the following properties: The plate above deforms such that the analysis is small displacement and linear elastic. Considering element 1 as shown below: As element 1 is a 3 noded triangular element, it has six generalised coefficients and, therefore, six nodal degrees of freedom. The Element force vector {Fe} and the element nodal displacements { To calculate the element stiffness matrix [k] it is necessary to calculate the matrices [B] and [D]: For this element, [B] can be obtained from the co-ordinates of the nodes 1, 2 and 3 Where, in this case: These values can be substituted into the equation below to form matrix [B]. To calculate the element stiffness matrix, [k], it is necessary to calculate the transpose matrix [B] The material property matrix [D] is obtained from the general matrix form of In plane stress problems, for example in this case of a plate, the Equation (2.29) can therefore be used to obtain the material property matrix. The material property matrix for element 1 is calculated below: To calculate the element stiffness matrix, the equation This is the element stiffness matrix for element 1. 
Considering element 2 as shown below: The Element force vector {Fe} and the element nodal displacements { To calculate the element stiffness matrix [k] it is necessary to calculate the matrices [B] and [D]: For this element, [B] can be obtained from the co-ordinates of the nodes 1, 2 and 3 Where, in this case: These values can be substituted into the equation below to form matrix [B]. To calculate the element stiffness matrix, [k], it is necessary to calculate the transpose matrix [B] The material property matrix for element 2 is the same as that for element 1, as the terms within the matrix are governed by the material properties of Young's modulus and Poisson's ratio, which are the same for both element 1 and element 2. To calculate the element stiffness matrix, the equation This is the element stiffness matrix for element 2. The structural stiffness equation for each element can be used to calculate the governing structural stiffness for the entire structure. The two element stiffness matrices are given below and these can be combined to form the structural stiffness matrix. Element 1: Element 2: The governing structural stiffness equation: As there will be no deflection at node 2 and the deflection at node 4 will be only in the", "label": 1 }, { "main_document": "birth rate is also not desirable as is witnessed from the experience of certain 'ageing' European countries. Additionally, the saving rate is also vital in determining a nation's propensity to grow and converge. The Japanese 'growth miracle' from 1953-73 is a case in point for the massive advantages of a high savings rate. During the time her per capita GDP grew at a spectacular rate of over 8.1% per annum. To achieve this growth rate itself required large increases in annual gross investment, which was made possible by an increase in output available for division between consumption and saving, a higher saving rate. 
Both corporations and households contributed to the high rate of private saving, which rose from 16.5% of GNP in 1952-54 to 31.9% of GNP in 1970-71. See How Japan's Economy Grew So Fast- Policy should therefore encourage the development of stable financial institutions to encourage saving. Additionally, the most direct way for government to influence national saving is through public saving. It can increase public saving by following a 'contractionary' fiscal policy. However, if reducing the budget deficit via a tax rise, it must be careful, as higher taxes on saving and investment actually discourage saving. Finally, a country can achieve high rates of growth by a careful restructuring of its social institutions. In doing so, it will be able to efficiently extract from the world pool of technology. For \"the storehouse of modern technology is great\" Op. cit. Baumol", "label": 0 }, { "main_document": "the semantic field of waiting which runs throughout the advert. The words are above the iconic image of a pint of Taboo language is an area that produces many strong opinions, and so carrying out interviews concerning the topic will provide an interesting set of data to analyse. According to Peter Trudgill (2000:18), taboo language is used to refer to taboo behaviour - 'forbidden, or regarded as immoral or improper'. The interviewees are: a twenty-year-old female 'A' (see Appendix Two); a twenty-year-old male 'B'; and a forty-five-year-old female 'C'. I chose these people in order to investigate the differences or similarities in opinions which may be caused by gender, and also age. Interviewees A and B both admitted to using taboo language in the form of swearing in their daily speech, with B declaring that it is 'alright in society today'. C (the eldest of the three interviewees) regarded the use of taboo language as unnecessary and offensive. 
The two females (A and C) agreed that the aggression behind the spoken word is worse than the sound of the word itself. B differed here, stating that his use of taboo language is 'for humour' rather than anger. When discussing the situations where using taboo language in the form of swearing is acceptable, there were many similarities in the interviewees' answers. They agreed that it is inappropriate in any formal situation, with interviewees B and C specifically stating a job interview. Interviewee B decided that using it does not help to present himself 'in the best possible way'. When describing the people they would or would not use taboo words in front of, they all confirmed that they would avoid it in front of relatives and children. Interviewees A and B, both aged twenty, said that they use taboo language where their conversation partner has already done so, A doing this with figures of authority such as lecturers and teachers, and B with his older sister. Interviewees B and C avoid taboo language around people they do not know, so as not to unknowingly offend. All three interviewees made reference to the taboo word which Interviewee C could only describe using its initial consonant, 'the c-word'. C also found the word fuck to be unacceptable, whereas B, the twenty-year-old male, uses it regularly in front of friends. The interviewees agreed that taboo words concerning sex and genitals were distasteful, and ones that describe faeces, i.e. 'crap' and 'poo', were softer, with A arguing that she does not even consider them taboo. When asked to recall a situation where the use of taboo language has shocked them, the interviewees all expressed annoyance at the level of swearing used in front of children, whether by the parent or by other children. Interviewee B added that elderly people using strong language was shocking, and unexpected. The shock felt here is explained by Janet Holmes (2001:206) as she states 'people [...] 
tend to use more vernacular forms as they get older', and so it would be unusual to", "label": 1 }, { "main_document": "Graphologically it would appear that the extracts share a similar form, as both are examples of dialogic prose that use the presence of a narrating voice to engage with the reader. Similitude in thematic content emerges as both centre on the nature of nighttime in the streets below, but the images generated are far from analogous. Divergence in tone also surfaces as one warmly welcomes the reader in, while the other promotes a firm sense of exclusion. The first extract opens with a declarative minor sentence that instantly plunges the reader into the world described; 'From my window, the deep solemn massive street.'(l.1) Heaped pre-modifying adjectives and the absence of a verb make it reminiscent of a stage direction, as it directly illustrates the scene below. Commencing the following line with; 'Cellar-shops,'(l.1) fortifies this impression by centring our attention on the physical aspects of the street, making the setting easy to envision. Prolific adjectives intensify this vivid visual realism while generating a sombre atmosphere as; 'dirty,'(l.3) 'shabby'(l.5) and 'tarnished,'(l.6) create an impression of gloom and despondency, consolidated by the bleak nuances connoted with 'shadow.'(l.2) The simile; 'houses like shabby monumental safes,'(l.5) engenders a sense of decaying past decadence as the paradoxical; 'bankrupt middle class,'(l.7) and 'tarnished valuables,'(l.6) imply loss and fading former glory. Elongated syntactic structures seem endless and impossible to exit, reinforcing the district's claustrophobic density. Copious personal pronouns imply the text is a dramatic monologue. Opening with the possessive pronoun 'my,'(l.1) firmly embeds the voice of an involved first person narrator in the text, but they do not appear to be speaking aloud as no reporting clauses or graphological markings denote a speech act. 
'I am a camera'(l.8) de-personalises the narrator through metaphoric objectification as they are; 'open, quite passive, recording.'(l.8) 'Not thinking'(l.9) conveys a feeling of numb disenchantment that de-humanises them further by suggesting that they have disengaged the mind and simply exist. Remaining 'open' implies a longing to be part of the community described as the narrator appears firmly external, possibly because of their foreign status. Distance is suggested by using 'the man'(l.9) and 'the woman'(l.10) as opposed to the names of the neighbours. This implies a lack of contact between them that erodes the normal specificity and intimacy generated by using definite articles; they are familiar only because of their physical proximity. The allusion to 'kimono'(l.10) suggests a language barrier may prevent them making contact, reinforcing the narrator's alienated position and sense of longing. The unwelcoming, harsh, glottal plosive /k/ in; 'clock'(l.13) and 'locked,'(l.14) is virtually onomatopoeic, making the sense of exclusion that permeates the extract almost tangible. The progressive present tense animates the scene by providing immediacy. Activities such as 'shaving'(l.9) and hair washing imply it is morning but the insistence generated by the modal verb 'will,'(l.13) engenders a temporal elasticity that accelerates the advancement of nighttime. 'Will' metamorphoses into the present-tense verb 'are,'(l.14) bringing us back from the future to the present, creating a disorientating and unexpected time shift with; 'The shops are shut.'(l.14) It is the shortest sentence and creates a rhythmic", "label": 1 }, { "main_document": "divisions have been overcome in recent years? 
Firstly, cessation of direct links with the PCF, during the CGT's forty-sixth congress in February 1999, symbolises a shift in strategy, and the severing of the 'umbilical cord' linking the confederation to the PCF permits the union to pursue its objectives with greater autonomy ( Furthermore, the CGT rejected PCF calls for joint mobilisation in 1999, presumably indicating a shift in trade union ideology away from wider political objectives to pursue narrower objectives in the industrial sphere. However, it is evident that, in spite of officially breaking links with the PCF, the union and party 'remain ideological partners' (Financial Times 1999). Moreover, the CGT's 'new' strategy is 'summed up in the watchword protest-mobilisation-proposal-negotiation' and it is argued that this reflects continuity of ideology and union strategy (Rehfeldt 1999:3). The strategy's emphasis is on protest and mobilisation and does not 'represent a complete break' from the CGT's 'traditional tendency of mainly basing union action around protest' (1999:3). Consequently, it is proposed that the CGT continues to embrace an anti-capitalist ideology, promoting consciousness of the inherent conflict between labour and capital. It could be argued that inclusion of negotiation as part of the confederation's strategy implies that the CGT has recognised the legitimacy of capital. Yet this measure is, perhaps, superficial, and falls short of overcoming inter-confederal ideological divisions. The CFDT tends to adopt a more accommodating stance towards management, and its objective of 'trade unionism based on negotiation' (Rehfeldt 1999:3) and portrayal of the strike as an 'archaic tool' (Daley 1999) continue to separate the unions on ideological grounds. Secondly, in June 1998 the CGT and the CFDT held joint talks to encourage inter-confederal unity. 
The CFDT's 'olive branch' was accepted by the CGT (Rehfeldt 1999:4) and the confederation 'exchanged ideas from conference documents, while respecting the other's identity, in order to deepen their respective approaches to the concept of trade unionism' (Bilous 1998:1). In spite of the apparent strengthening of ties between the CFDT and the CGT, it is argued that 'no assumptions should be made, as union alliances fluctuate according to the issue at hand' (EIRO 1999:7). The discussion above highlighted the contingent nature of inter-confederal unity and, whilst the 1970s unity was facilitated by the political unity of the left, recent unity is presented, firstly, as an outcome of the introduction of working time legislation which has strengthened the presence of unions and, secondly, as a result of the trade unions' desire to increase membership. Thus, the recent united front is not a significant indication of a discontinuity of inter-confederal ideological divisions. Thirdly and finally, there is the role of MEDEF. For instance, the 'employers confederation succeeded in splitting the fragile trade union pact three times and was able to strengthen what appears to be a budding alliance with the CFDT' (Rehfeldt and Vincent 2001:7). The CFDT's willingness to make agreements with MEDEF has serious implications for the recent inter-confederal unity evidenced above and makes explicit the 'reformist' nature of the CFDT. The CGT has been and is vehemently opposed to any agreement with MEDEF and", "label": 1 }, { "main_document": "Frankel, 'Lenin's Doctrinal Revolution of April 1917' pp.124-127 Lenin, 'To What State Have the Socialist Revolutionaries and the Mensheviks Brought the Revolution?' pp.312-315 Vladimir Il'ich Lenin, 'The Tasks of the Revolution', Sept/Oct. 1917, in In opposition the Bolsheviks were political masters. Under Lenin's direction they steered a steady course, constantly adjusting to circumstance yet adhering unerringly to their Marxian ideals. 
It was much easier to offer opposition in the atmosphere of disaster that haunted war-torn Russia. The Bolsheviks got a rude awakening when it was their turn to entertain the wants of the masses that they had stirred up. Unpopular decisions had to be made in order to ensure 'economic revival', to heal 'the very severe wounds inflicted by the war upon the entire social organism of Russia'. The tasks of the revolution were to secure a decisive majority, crush the opposition including the remnants of the bourgeoisie and to bring about complete, disciplined organisation. For the latter to succeed it would have been ignorant to deny that 'without the guidance of experts in the various fields of knowledge, technology and experience, the transition to socialism will be impossible, because socialism calls for a conscious mass advance to greater productivity of labour'. Predictably this would mean the use of bourgeois officials, managers and technocrats but 'every thinking and honest worker and poor peasant will admit, that we cannot, immediately rid ourselves of the evil legacy of capitalism'. The positive facets of this legacy had to be harnessed for the good of the revolution and the negative un-socialist principles ignored in the meantime. Vladimir Il'ich Lenin, 'The Immediate Tasks of the Soviet Government', Apr. 1918, in Lenin's belief in productionism originated in the development of twentieth-century industry and production line methods. Taylorism and the ideal of ergonomics seemed to provide the perfect basis for extending productivity in Lenin's eyes and, when coupled with the 'introduction of the best system of accounting and control', the socialist tide would be unstoppable. Its tone was antithetical to capitalism but was swamped in visionary hopefulness - it planned the ideal future of the modernising process but it seems somewhat utopian. 
The rejection of society's subjugation to market forces was very Leninist and it reaffirmed the need to plan and control the economy in order to benefit the whole of society. Concretely speaking it was not a complete washout. Its conception saw the convening of the Supreme Economic Council 'to set up a number of expert commissions for the speediest possible compilations of a plan for the reorganisation of industry and the economic progress of Russia'. 'The whole point is that we have yet to learn the art of the approach, and stop substituting intellectualist and bureaucratic projecteering for vibrant effort'. Lenin, 'The Immediate Tasks of the Soviet Government' p.664 Lenin, 'The Immediate Tasks of the Soviet Government' p.684 Vladimir Il'ich Lenin, 'Integrated Economic Plan', Spring 1921, in The publishing of It was not, however, given its complex political language, going to be, as intended, \"an elementary textbook of communist knowledge\". In the few years prior", "label": 1 }, { "main_document": "aspects ensure possible bias is identified and clearly documented. All the components mentioned above are vital to ensure valid and reliable findings. It is hoped that sound results will be gained and clear analysis found, which can be published and used as a reference for others, but primarily impacting the current policies and guidelines at Kingston hospital and the practice of nurses to improve patient well-being. Does the number of cultures present on the hands of nursing staff vary once given additional training on hand washing and drying technique? Hand hygiene is commonly regarded as the most important activity for reducing the spread of infection (Reybrouck, 1983), yet evidence suggests that many health care professionals do not wash their hands as often as they should or use the correct technique (Emmerson et al., 1996, as cited in Kerr, 1998 & Taylor, 1978). Smith-Temple (1994) recommends washing hands for one to two minutes to be effective. 
However, effective hand washing is dependent on a good technique. Investigations into the techniques of hand washing are limited (Nurse Network, 2002), which highlights a gap in research and a need for this study. The National Audit Office (NAO) report and the Controls Assurance Standards require infection control to be a part of the induction of all staff (NAO, 2004). For example, at Kingston Hospital the infection control nurses continue to have a slot on the corporate induction programme; however, due to the pressure to include other subjects, the time slot has been reduced from seventy-five minutes to thirty minutes. This means that there is limited opportunity for participants to practise their hand washing skills (Kingston Hospital annual report, 2004/2005). This example is not uncommon, and even though hand hygiene is crucial to reducing infection, the time spent discussing this topic and ensuring a good technique is limited. Therefore this research aims to look at whether the amount of bacteria on hands is reduced after additional training on accurate techniques. This research will benefit the hospital where the research is undertaken, as its results could lead to more time spent on hand washing techniques in training and hopefully lead to a reduction in hospital acquired infections (HAI). It will also benefit the nurses involved, as hopefully the research will prompt them to ensure they use a good technique. In turn, this should benefit the patients who will come into contact with these nurses. The aim of this research is to identify if there is a difference between the numbers of cultures present on the hands of nursing staff. There will be one group having standard training and another group receiving additional training on hand washing and drying techniques, by using a randomised controlled trial. 
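The two-group allocation described above can be sketched in code. This is a minimal illustration only, not part of the study protocol: the group labels, the sample of twenty participants and the fixed seed are all illustrative assumptions.

```python
import random

def allocate(participants, seed=1):
    # Shuffle a copy of the participant list with a fixed seed so the
    # allocation is reproducible, then split it into two equal groups:
    # 'standard' receives the usual induction training, 'additional'
    # receives the extra hand washing and drying instruction.
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {'standard': shuffled[:half], 'additional': shuffled[half:]}

# Illustrative use: twenty anonymised participant IDs
ids = ['N%02d' % i for i in range(1, 21)]
groups = allocate(ids)
```

Allocating by chance in this way removes selection bias between the two training groups, which is the point of the randomised controlled design described above.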
The objectives of this research are to discover if technique is a significant factor in hand washing and drying, if the number of cultures present on hands is reduced, and to determine whether the standard training received is adequate. The null hypothesis is that there will be no difference between the number of cultures on the hands of the nursing staff in either", "label": 1 }, { "main_document": "In individuals with rheumatoid arthritis (RA) of the hand, does exercise treatment with the use of a resting hand splint increase (or maintain) range of movement (ROM) and muscle strength better than exercise treatment alone? There is a difference in effectiveness between using exercises alone, or exercises with the use of a resting splint, in the treatment of individuals with RA of the hand. Exercise treatment with the use of a resting hand splint will not increase (or maintain) ROM and muscle strength better than exercise treatment alone, in individuals with RA of the hand. Exercise treatment with the use of a resting hand splint will increase (or maintain) ROM and muscle strength better than exercise treatment alone, in individuals with RA of the hand (Robson, 2002). 'RA is a chronic, progressive inflammatory disease primarily affecting joints, characterised by pain and fatigue' (Turner, Foster and Johnson, 2002, p543). The disease course is extremely variable between individuals, and psychological status and social support affect its course. RA develops from early adulthood onwards, usually starting between 25 and 55 years (Turner, Foster and Johnson, 2002). Exercise and rest strategies, and splinting to allow proper alignment of deformed joints, can be part of physical and occupational therapy interventions to help manage RA symptoms (Hansen and Atchison, 2000). 'Muscle strength and joint ROM should be maintained through exercise, full ranging of joints during daily activities and therapeutic activity programmes' (Turner, Foster and Johnson, 2002, p558). 
Splints 'support the joint, reduce stress to the joint capsule, reduce pain during motion and help decrease inflammation' (Turner, Foster and Johnson, 2002, p558). Randomised controlled trials (RCTs) are a simple, clear design in which a variable is manipulated and change observed (Abbott and Sapsford, 1997). They are seen to produce the most rigorous evidence, which will result in objective and statistical analysis of the intervention outcomes (Taylor, 2000). This randomised controlled trial will encompass the following key elements: Random selection: a computerised selection from a group of people with RA, so the selection is not biased. Random allocation to treatment groups: a computerised selection of half the selected participants to one treatment group, and half to the other. Single blinding of the researcher. To estimate how large a sample will be required, a statistician will be consulted to perform a power analysis (Buckeldee and McMahon, 1994). This will be done to ensure that an adequate number of participants will be included in the study to test the hypotheses and to have the power to determine whether the interventions made a difference in the study outcomes (Depoy and Gitlin, 1998). There are a number of considerations in determining a sample size: the data analytical procedures that will be used, statistical levels of significance, the strength of difference in sample values expected, and statistical power (an 80% power level is considered the minimum level of acceptability from the power analysis) (Depoy and Gitlin, 1998). These are all to be considered before a sample size is chosen. For the purpose of this proposal, 'X' will be used to represent the number of", "label": 1 }, { "main_document": "Free indirect discourse or This is achieved by combining grammatical and other features of the character's 'direct speech' with features of the narrator's 'indirect' report. 
It is a technique that allows the reader to interpret the protagonist's thoughts while the author enjoys a seemingly objective stance. Gustave Flaubert was a 'pioneer' One advantage he enjoyed was that it enabled him to find a style 'suitable to each object, each place, each circumstance [and] each being' This Brombert, V., Faguet, Starkie, E., The reader is often told how things are seen rather than merely what is being seen. One example of this is Charles' description of Emma as he frequents Rouault's farm; she offers him a drink. Flaubert uses The description is, ergo, extremely detailed and it is clear that it portrays the thoughts of Monsieur Bovary. He sees Emma: '[ Flaubert manages to describe Emma in a highly suggestive way and yet manages to keep the semblance of objectivity through the ' Flaubert, G., Ibid. Brombert, V., op. cit., p. 46 This Flaubert, however, ensures his rigid objectivity as, later on in the novel, the reader will be able to judge Emma's character for himself when the The reader will see her dissatisfaction with her life and especially with Charles but 'Flaubert's ' It is evident that it is not the author who is making this comment but it is in Emma's thoughts. By adopting This technique also enables Flaubert to ensure that the text is impersonal while simultaneously conveying judgments of characters and events; as 'impersonality depends not on what is said but on the fact that no identifiable narrator speaks.' Flaubert, G., op. cit., quoted in Culler, J., Culler, J., op. cit., p. 110 It is often impossible to tell who actually is speaking as the 'speaker may be the narrator, or one of the characters, or both, or neither'. We can see this in incidents where the distinction between character and narrator appears to disappear completely; ' It is impossible to tell here whether the narrator is quoting Emma's thoughts or feelings or whether he is, in fact, expressing his own opinions. 
Secondly, this intermittent use of 'absent' narration creates an illusion of objectivity and detachment by pushing the character into the foreground as the narrator recedes into the background. Due to the fact that the source of the narration is indistinguishable and that there are not multiple narrators, nor does the narrator have 'a distinct set of characteristics' The sequential effect is one which underlines the authenticity and objectivity of the text. Ginsburg, M. P., Flaubert, G., op. cit., p. 40 Ginsburg, M. P., op. cit. Ginsburg, M. P., op. cit., p. 103 Ibid. This Through We only see the characters through what they notice and perceive of each other. Another example of this is where Emma's beauty as she lies in the hotel bed is appreciated by both L ' Emma's eyes are such incongruities; 'on one occasion he gives Emma brown eyes; on another deep black eyes; and, on another, blue", "label": 1 }, { "main_document": "Ferrocene, otherwise known as bis(cyclopentadienyl)iron (II), was discovered in 1951 and since then there has been a vast increase in the number of complexes containing aromatic ligands such as the cyclopentadienyl anion that are bound to a metal. The term aromaticity is best described using Hückel's rules for aromaticity, which are that a molecule is aromatic if it is flat (planar), cyclic, and has a continuous overlap of p-orbitals around the ring. This system is stabilised (aromatic) if it contains 4n + 2 electrons, but is destabilised (anti-aromatic) if it contains 4n electrons. The common feature of bonding in these complexes is the donation of electrons from ligand π-orbitals to the metal and, to a lesser extent, back-donation from metal d-orbitals into anti-bonding orbitals of the ligand. Because of its structure ferrocene reacts in a way that resembles benzene, and because of this it is sometimes referred to as inorganic benzene. 
During this experiment ferrocene will be acetylated, with the products being separated by two types of chromatography (TLC and column), and will finally be reduced to a ferrocenyl alcohol. Ferrocene does obey the 18 electron rule as both of the C This could be regarded as a relatively high-risk experiment, as caution should be taken when using a variety of the chemicals involved; the risks associated with them are as follows: - A conical flask (50 mL) was fitted with a calcium chloride guard tube, which was temporarily removed for the addition of the following materials: 1.5 g (1.5011 g) of ferrocene, and 5 mL / 5.25 g of acetic anhydride. 1 mL of conc. phosphoric acid (85%) was then added dropwise by use of a pipette, whilst the flask was being swirled. The guard tube was refitted and the flask was left to stand in a boiling water bath for 20 minutes. During this period, every 5 minutes the flask was removed and swirled gently, after which it was placed back inside the bath. 20 g of crushed ice was placed inside a tall beaker, over which the reaction mixture was poured; when the ice had melted, the mixture was neutralised using solid sodium hydrogen carbonate until CO The mixture was cooled inside an ice bath for 30 minutes, after which time the solid was collected by filtration under suction and washed with water until the filtrate appeared a very pale orange colour. The solid was then dried for a further 15 minutes. Thin-layer chromatography was then carried out on the sample, which I have described in more detail later on. This was followed by column chromatography, which is also described in more detail later on, after which the solvent was removed completely from the solutions separated during the column chromatography using the rotary evaporator; the solutions were then weighed. The melting point of each of the samples was then measured and its infrared spectrum recorded. 
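As a quick check on the quantities above, the limiting reagent and theoretical yield can be estimated from the masses used. This is a sketch under stated assumptions: the molar masses are standard values, and clean 1:1 mono-acetylation to acetylferrocene is assumed, which simplifies the real product mixture.

```python
# Molar masses in g/mol (standard values)
M_FERROCENE = 186.03          # Fe(C5H5)2
M_ACETIC_ANHYDRIDE = 102.09   # (CH3CO)2O
M_ACETYLFERROCENE = 228.07    # C12H12FeO

m_ferrocene = 1.5011          # g, as weighed out
m_anhydride = 5.25            # g (the 5 mL added)

n_ferrocene = m_ferrocene / M_FERROCENE          # ~0.0081 mol
n_anhydride = m_anhydride / M_ACETIC_ANHYDRIDE   # ~0.0514 mol, a large excess

# Ferrocene is the limiting reagent, so the theoretical yield is its
# mole count multiplied by the molar mass of the mono-acetylated product.
theoretical_yield_g = min(n_ferrocene, n_anhydride) * M_ACETYLFERROCENE
print('theoretical yield ~ %.2f g' % theoretical_yield_g)   # ~1.84 g
```

Comparing the masses recovered after column chromatography against this figure gives the percentage yield of the acetylation.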
Firstly, a bottle of acetylferrocene was collected from the stores. Next, inside a 25 mL conical flask, 0.32 g (0.3237 g, plus an extra 0.1557 g as the reaction mixture had not turned yellow", "label": 1 }, { "main_document": "sequences by Huber and Hollstein. This successful amalgamation of their two sequences was possible because both used oak samples, a species highly suitable for radiocarbon dating. Oak has long been a commonly used building material across Europe, with a strong presence in the archaeological record. Oak trees are sizeable, enabling the large samples required to be taken, and even allowing for radiocarbon dates to be taken as well. However, a substantial number of samples are needed to build a chronology in the first place, as oaks are short-lived. The relatively low sensitivity of oak makes it 'less prone to abnormalities' (Aitken, M.J. 1990. p45), abnormalities that can limit the success of dendrochronology as a dating method. Aitken makes special mention of sensitive conifers such as pine and spruce as being problematic to analyse (Aitken, M.J. 1990. p37-8). In years of especially harsh conditions for growth, conifers are prone to have missing rings, with other instances of double rings occurring. He highlights the paradox presented to archaeologists: that in sensitive trees that react extremely to climate, growth patterns are very clear to see, yet 'there is highest risk of these abnormalities' (Aitken, M.J. 1990. p38). However, one species of conifer, the bristlecone pine, has been of great importance to dendrochronology in North America. The unusual environmental conditions in the Californian White Mountains have left the ancient wood there near-perfect for use in dendrochronology. The high altitude and low temperature created an inhospitable environment for the micro-organisms that would otherwise have speeded decay in the trunks, which were also extremely well preserved by their high resin content. (Bowman, S. 1990. 
p19) The pines were also highly suitable because of the great number of narrow rings in each trunk; samples with 100+ rings are preferred in dendrochronology to guarantee a unique sequence. Dead and living pines, some up to 4000 years old, were used to create a master chronology stretching back 6700 years. Interestingly, this was validated by another from the area, and calibrated also by radiocarbon dates of the pines, as well as those of the 'lowland oak chronology in Europe' (Aitken, M.J. 1990. p44). Suess used the bristlecone pine chronology in constructing the first calibration curve, which shall be discussed later. Aside from irregularities present in the wood, more difficulties can arise just through sampling it. As with all archaeology, sampling is destructive of the material. Dendrochronology has especial potential for this destruction, given the importance of getting 'samples which run out to the bark surface' that indicate the all-important date of when it was felled. Baillie cites the MC18 chronology that was constructed using late medieval painting boards, valuable objects in themselves (Baillie, M.G.L. 1995. p45); to avoid unnecessary damage, the technique of coring was used. Even less destructive methods still have been utilised in dendrochronology. Taking a photograph or mould of a tree ring sequence ('contact lifting') can be just as effective. However, even delicate sampling methods are of no use with samples that have not been preserved well. Wet environments have the best survival rates for ring", "label": 1 }, { "main_document": "The domain structures of the various regions of the Immunoglobulin G molecule, as well as post-translational modifications such as glycosylation, are depicted with respect to their biological functions. Furthermore, one of the most characteristic structural elements, the immunoglobulin fold, which can be found in a multitude of other proteins, as well as the formation of an antigen-antibody complex, are discussed. 
Immunoglobulin proteins are an indispensable part of the humoral immune system. They are synthesised by B lymphocytes in response to the presence of an antigen and have two main functions: recognising and binding to antigens, for example a bacterial toxin, in order to inhibit its action, and recruiting other components of the immune system to eliminate the pathogen. The term immunoglobulin is used for all antibodies that are constructed in a similar fashion from two identical heavy and two identical light polypeptide chains, which contain constant and variable regions. But differences in the constant regions and functionality have given rise to five different classes of immunoglobulins in mammals, namely IgM, IgD, IgA, IgE and IgG, the latter being by far the most abundant and constituting about 75% of serum immunoglobulins. IgG is mainly involved in binding antigen during the secondary immune response, activation of the complement cascade as well as recruitment of macrophages and neutrophils. In addition, it is the only immunoglobulin to cross the placenta and become secreted in breast milk, since the neonatal immune system lacks fully functional antibodies. IgG can be further divided into four subclasses on the basis of distinctive conformations found in the constant regions of the heavy chains and the hinge region, which also influence the antigenic properties of these subclasses. Approximately 70% of serum IgG is IgG1, which is mainly concerned with neutralising protein toxins. IgG2 constitutes about 20% and recognises mostly antigenic polysaccharides, while IgG3 is mainly concerned with viruses, making up about 8% of the IgG serum concentration. Each B cell secretes antibodies that specifically recognise one particular epitope, and the massive variation needed to ensure coverage of all sorts of antigens results from immunoglobulin gene rearrangements, i.e. the recombination of the genes encoding constant and variable regions of both types of polypeptide chains. 
The immune system is capable of generating millions of different molecules from a large pool of variable domain alleles. Two sets of V genes encode the variable domains of immunoglobulin heavy and light chains. The two chains are produced separately, but the mechanisms by which their diversity is achieved are similar in principle. The constant domains, on the other hand, do not undergo a great deal of variation, as their name implies. IgG is a monomeric and fairly large molecule with a molecular weight of about 150 kDa resulting from the presence of two However, individual immunoglobulins are symmetrical and always contain identical light chains and identical heavy chains. As a result, an antibody's antigen-binding sites are always identical, which is crucial for the cross-linking function needed to form antigen-antibody clusters. The structure of immunoglobulin G has been extensively studied since IgG is the simplest of all immunoglobulins, thus", "label": 0 }, { "main_document": "in recent years. In relation to the expansion and intensification of rice cultivation, increased cropping intensity meant that nutrient uptake by rice plants in a season has increased, resulting in the rapid decline of soil organic matter in rice fields in the short term. Together with the climatic constraint that the temperature in tropical Sri Lanka remains high throughout the year, which accelerates decomposition of organic matter, organic matter content in the soil of Sri Lanka is often poor, at less than 1% in the Dry or Intermediate Zones (Amarasinghe and Liyanage, 2001, Dhanapala, 2000). Several problems in relation to inferior farming practices are identified as major constraints that decrease productivity. Misuse of fertiliser and pesticide, often excessive use, has in the long term caused environmental pollution downstream and contamination in water, and risk to human health. Seed quality is often poor, contaminated with weed seeds, hence magnifying the weed problem (Gunatilleke, 1994). 
Non-seasonal cultivation with short-duration rice cultivars (3-3.5 months) has reduced productivity, as it neglects the optimum use of climatic parameters such as temperature and rainfall. In non-seasonal cultivation, the growth stages of rice plants between fields differ depending on the timing of cultivation. This practice encourages pest and disease outbreaks due to the overlapping of crop cycles within a small area. Inappropriate or inferior implements for ploughing used by local farmers further reduce the yield. The plough layer is shallow, which inhibits the growth of roots; hence the insufficient nutrient uptake results in low yield (Ahikarinayake, 2005, Amarasinghe and Liyanage, 2001). Lack of storage facilities in local villages in close proximity to individual farmers is a very important constraint on productivity. Lack of money to invest in constructing storage facilities for rice is a common problem in Sri Lanka. Post-harvest loss is very high; there is a 15% loss of the grain yield through improper harvest timing, incorrect processing and storage. The fact that a few oligopolistic groups have control over milling, processing, or marketing of rice further inhibits the farmers' ability and incentive to improve or establish facilities for processing and storage (Ahikarinayake, 2005, Wijavaratna, Poor health and poverty suppress labour efficiency. The rural poor account for the majority of people in poverty in Sri Lanka, and inherent poverty in those areas induces low labour inputs in farming. Illness or sickness is not only caused by poverty but also by health problems associated with exposure to toxic chemicals due to inappropriate use of agrochemicals and storage. This is often primarily due to the lack of knowledge among local farmers of how often or when to apply the required amount of agrochemicals and how to store them under safe conditions (Gunatilleke, 1994, Jones, 2002). 
Increase in the intensive use of insecticides, and often their misuse, has resulted in health hazards. Sickness or deaths due to the toxic effects of agrochemicals on humans and poor sanitation have been widespread. Health hazards from the misuse of agrochemicals further reduce the farmers' labour input. Inferior quality inputs and lack of extension services to provide information on technologies for improvement and", "label": 0 }, { "main_document": "and most important source of the white man's power in native America was his superior technology.\" It was not uncommon for Natives to literally worship guns and knives that came into their possession; such was the feeling of admiration towards these objects. Native Americans were often in awe of European technology. The most noticeable and immediate effect of guns was a psychological advantage due to the roar created when firing as well as the physical destruction they could cause - 'time would soon show how lethal lead balls were and how horribly they shattered bones and tore tissue.' Axtell, Ibid, p. 194 Whenever the English sought to impose their will in Virginia they used firearms. James H. Merrell's study of the Catawba tribe of Carolina shows their reluctance to engage in warfare with the Europeans in order to defend themselves - 'Speeches were poor weapons for mounting a defence of one's homeland.' Evidence of English strength in battle can be seen 'in a two-day battle in July 1624, sixty Englishmen cut down eight hundred Pamunkeys on their home ground' Direct military confrontation with European powers was considered suicidal. Native Americans were always likely to come off worst in a traditional European battlefield sense. James H. Merrell, Axtell, However, far too great an emphasis has been placed on European weaponry and technology in their conquest of North America. Despite great admiration for European technology, Native Americans 'had a terrific superiority complex, not only at first contact but long after'. 
Natives 'very soon realized that Europeans were... seriously flawed in their character and culture... The idea that the native stood in rapt, prolonged awe of Europeans has been exaggerated'. Native American bows were far more accurate and were able to operate in all weather, unlike guns, which were rendered useless in rain. Arrows could also be fired at a much faster rate than guns, which were often slow and cumbersome to reload. The English were also forced to learn many battlefield tactics from the Native Americans, indicating Native military superiority. Ibid, p. 142 Martin, The Native Americans were notably stronger in this sense, particularly during the early years of the European settlements, which were often at the mercy of the Natives. Gallay, This can be seen when the Apalachee chiefdom inflicted heavy defeats on Spanish exploring parties in the early sixteenth century, forcing them to reconsider further exploration and abandon notions of settlement. If the Europeans wanted to survive and prosper they needed to maintain positive relations with Native American tribes. They were forced to seek Native assistance when at war with other Native American tribes and even other European powers. The Spanish, English and French all fought for control over the South and united with Native tribes in alliances. However, intertribal warfare led to the downfall of the Native American in the long term, the causes of which will be discussed later. 'The Creek, Cherokee, and Chickasaw did not come to fear the English as much as they feared one another. No English army could come against them.' Ibid, p. 333 As stated", "label": 1 }, { "main_document": "This is an important question when attempting to address the viability of an island archaeology, whereby it is possible to invoke ethnography, models of interaction and the active role of material culture. There is scope to talk about island societies rather than islands because an island is not always a social unit. 
Besides, by examining a number of islands in the Neolithic we can address the very nature of Neolithic societies and contrast them with the village-settlements in Thessaly and the salience of agriculture on land that was frequently flooded. We can agree with Broodbank that the seas united rather than divided Neolithic communities in the Cyclades. Firstly, the extremely barren environments of the Cycladic islands created an imperative for external links and contacts. Secondly, one would expect from ethnography a nexus of contacts between islands as part of survival strategies (exchange and exogamy). Thirdly, the archaeological similarities in material culture between different islands appear to reflect the social ties between island communities. The procurement of obsidian from Melos and the Final Neolithic procurement of metals are also indications of intensive sea-faring. Since the Cycladic islands are inter-visible, one is inclined to want to cross to a nearby island. Given that the arable land on the Cycladic islands is very restricted, it is important for the inhabitants of these islands to invest in social storage so as to secure the help of neighbouring communities in times of crop failure. In archaeological terms, the cultural horizons that cover the Neolithic period are the following: The Saliagos culture is characterised by villages of substantial size and village communities in touch with one another. The concentration of Saliagos sites is in the Southern and Eastern Cyclades. These villages were definitely not self-sufficient but sent out special task groups to procure foods and resources at certain points in time. There is evidence of foreign flints and Melian obsidian at Saliagos villages. All these island communities appeared to have been sharing the same pottery because there is similarity in terms of decoration motifs (light-on-dark pottery). 
This is a wider pottery horizon stretching from the East Aegean to Attica and Euboia and shows that there were far-reaching links between different sites during the Neolithic period. Thus, extensive interaction is not a distinct Cycladic feature. The Kephala sites are mainly concentrated on the island of Kea. It is interesting that they are poorly located for access to the best arable land. It is likely that factors such as visibility from Euboia were important in the location of these sites. There are two pottery spheres of interaction: On Kea there are similarities with Attica and Euboia. On Zas, on the island of Naxos, there are links with Samos. It is unclear whether the travel routes for prestige items corresponded to directional travel. The Grotta-Pelos phase is characterised by expansion and fission. There are now smaller communities inhabiting these islands and the links between sites increase. The networks of contact now become dense and there is a marked rise in the number of objects and exotic items being exchanged. As the South Eastern Cyclades", "label": 0 }, { "main_document": "as being indicative of a rift between Brown and Milburn (46-54) White merely dismisses Milburn as 'careless' (49) and points out that 'for the second day running Mr. Milburn endorsed Mr. Brown's stress on the centrality of the economic message' (67-70). In examining The Independent's headline article it seems evident that the first half of the model is also erroneous and is in need of further refinement. There is no doubt that this article is the most controversial of the four, taking what Brown said out of context and to the extreme (column right of the picture) so as to justify the main thrust of its argument that Brown's speech was a 'coded attack' or 'assault by stealth' (title and subtitle) creating a dramatic opening of divisions (67-82). 
That the apparent split between Brown and Blair is probably sensationalised to a greater degree by the Independent than by the considerably more conservative Daily Telegraph reveals that the political orientation of a newspaper is not necessarily the overriding variable that dictates how favourable a newspaper's report will be. An obvious contributory factor that was unaccounted for in the model was the newspaper's desire to be sensationalist for the sake of being eye-catching. While this may be a surprisingly crude strategy for a widely well-regarded newspaper like The Independent to employ, it could be attributed to the fact that, as noticed by many critics, since it adopted a tabloid format in 2002 its drive to become unique, innovative and engaging has resulted in the erosion of its intellectual and critical substance. Moreover, with the change of format there was no doubt a concomitant slight change in readership, with customers more likely to be attracted to digestible rather than nuanced reporting. In studying The Times headline article it is very noticeable that another variable was not considered. A newspaper's reports may be coloured not only by the implicit sympathies it has for a party, but also by those it has for a political figure. This perhaps explains why the paper inaccurately depicts Blair as the towering figure of the Labour party who is trying to mollify his squabbling underlings viz. Brownites and Blairites (Title and 70-75). In reality Blair is only one of two pre-eminent figures within the upper echelons of the government: Brown is one of the most powerful Chancellors of the Exchequer in British history, having a huge amount of control over the direction of domestic policy and enjoying huge amounts of support from the backbenchers and party members. Through examining this article it may be fair to consider it a Blairite newspaper, or at least one in which the political editor has sympathies for Blair. 
This partiality to Blair may be due to the fact that ever since he became leader of the Labour Party in 1994 he has made great efforts in courting the Murdoch press. He has frequently visited the tycoon, trying to impress him with his market-friendly and pro-America 'War on Terror' speeches, and has presided over the repealing of anti-monopoly legislation, allowing him to spread his media empire further. In return", "label": 1 }, { "main_document": "The modern definition of a skyscraper as a tall building was entering common parlance towards the end of the nineteenth century; previous meanings ranged between 'high-standing horses... a hat or bonnet... a very tall man... [and] fly balls in baseball and cricket.' The current description refers to 'a building of great height constructed on a steel skeleton and provided with high speed electric elevators.' The provision of lighting, ventilation, heating, communication systems, fire and alarm systems, and other utilities necessary for worker productivity, can be added to this contemporary definition. Sarah Bradford Landau and Carl W. Condit, Francis Mujica, In the historiography of the skyscraper, a number of reasons have been postulated as to why Americans built them, integrating commercial, financial, and architectural motivations. There have also been numerous debates concerning the origin of skyscrapers. Although 'the question of primacy in skyscraper design - Chicago versus New York - was set aside long ago as unresolvable' Following Chicago's famous fire, in 1871, which decimated many buildings, the restoration plan focused upon taller buildings; 'skyscraper design offered...speed, efficiency, and economy in rebuilding.' Chicago was held up as a model for New York, as it became a focal point for American commerce in the late nineteenth century. However, the skyscraper became an integral part of both cities' topography, and 'leadership after the turn of the century passed to New York.' 
Landau and Condit, Donald Martin Reynolds, Paul Goldberger, The late nineteenth century witnessed a period when the American economy recovered from the downturn it experienced in the 1880s and embarked upon a course of upward economic mobility. During this period major cities, particularly New York, flourished as a result of rapidly expanding foreign trade that dramatically impacted the urban economy. Increased shipping and improved railroad transportation led New York to emerge as America's pre-eminent city and economic capital during this era. Spiralling growth in key sectors, such as finance, transformed New York from a port city into a leading industrial and commercial metropolis. Coupled with escalating real estate values, this transformation was decisive in enabling ideas of taller buildings to become a feasible reality. Sophisticated technological advances of the era prompted the rapid and bewildering development of skyscrapers in America. The history of the skyscraper is frequently equated with the development of the elevator, which made tall buildings practical. There exist two contradictory arguments regarding the connection of skyscrapers and elevators; 'on one hand, that the invention...of the elevator followed from the high-rise building, on the other, that the skyscraper was entirely a consequence of the elevator.' Although many factors contributed to the rise of the skyscraper, the elevator was unquestionably essential in its evolution and thus skyscraper history must pay homage to the pioneers of this technological masterpiece. Landau and Condit, In 1850 Henry Waterman invented the first platform freight elevator. His idea developed rapidly, and within a decade the first passenger elevator, designed by Elisha Graves Otis, was being utilised in hotels and residences. 
In the era of the staircase 'five stories had been found to be the maximum beyond which no tenant", "label": 1 }, { "main_document": "As already sketched by the title, Joy Hendry's work Due to the limited scope of this review, the following paragraphs will briefly summarize the main issues and arguments about the element of wrapping in material as well as linguistic contexts and further comment on the value of this ethnography, as well as its contribution to the study of Japanese culture. Where possible, further comparisons with the Western model will illustrate the similarities and differences between Japanese and occidental societies. The ethnography is organized into eight sections, each focusing on a different level of the concept of wrapping. Starting from the more material perspective, the analysis gradually moves towards symbolic and abstract 'layers' of discourse. It is therefore important at this point to take notice of several facts: A gift expresses different things about the relationship between people. The attitude towards wrapping may have a great significance because it reveals a major difference between Westerners and the Japanese. (Hendry, 1993, p. 13) While the focus of the former is on the object inside, for the Japanese the wrapping itself may be of far greater significance than the actual object being wrapped. Martinez, D.P. (1998) As Martinez points out, the true nature of anthropology, whether from a structuralist or postmodernist perspective, is to observe and study 'the interaction between the Hendry, Joy (1993). Westerners regard wrapping as a means to obscure the object, while the Japanese perceive it as a means of refinement - adding layers of meaning, which it could not carry in its unwrapped form. p. 27. 
In Japanese gift-giving, unlike Western gift-giving, the element of surprise hardly figures; it is often more important to choose an object of an appropriate value, and the main thing is to recognize the fact that the gift has been presented in a certain way, appropriate for the status (i.e. elaborately wrapped for a person of formal relationship, unwrapped as for offerings to the deceased, or minimally wrapped in cellophane for an intimate friend), appropriate to the occasion (i.e. welcoming of the New Year, when presents represent the continuation of the relationship rather than true affection), with the appropriate words (i.e. It is easy to notice that the significance of status and reciprocity, discussed in the lectures (Sadgwick, 2005), plays a key role in this context. First of all, it is unthinkable for a Japanese person to stop the habit of exchanging presents once it has been established, as this would send a clearly negative signal about the relationship. Hendry, Joy (1993). As Hendry points out, although gifts are in theory voluntary, they are often subject to strict bonds of obligation, p. 23 The status, age, and gender of the parties involved, as discussed by Hendry in chapters 2, 3 and 8, would certainly be other important determinants of the style in which the presentation would be made. Moving to a more abstract 'layer' of analysis, Hendry recognizes the significance of Thus one can see that, in this respect, Japanese society is marked by clear boundaries based on these and many other aspects. It is true to say, however, that even in some western societies, like the French,", "label": 0 }, { "main_document": "Methods of absolute (or chronometric) dating have developed greatly over recent years, with dendrochronology and radiocarbon dating in particular being used extensively by archaeologists. 
While relative dating techniques (such as typology and frequency seriation) can be used to put objects and sites in a chronology relative to each other, we must look to other methods to provide specific calendar dates. One such method is dendrochronology, the study of tree ring growth, which was first put to notable use by A.E. Douglass in the 1930s. Soon after, Willard Libby made another pivotal discovery while examining unstable carbon isotopes: radiocarbon dating. In this essay I hope to outline the actual processes involved in both methods, compare their merits and give examples of their use in dating sites. I shall also examine the use of dendrochronology as 'a successful means of calibrating or correcting radiocarbon dates' (Renfrew and Bahn, 2000, p. 135). Renfrew and Bahn name radiocarbon dating the 'single most useful method of dating for the archaeologist' (Renfrew and Bahn, 2000, p. 138). By measuring the number of carbon 14 atoms present in a sample of organic material, such as wood, bone or leather, its age can be determined. The element carbon is absorbed constantly by living organisms as carbon dioxide, which plants take in during photosynthesis and animals acquire through the food chain. There are three isotopes of carbon, that is, atoms that have 'the same atomic number, but different atomic weights' (Bowman, 1990, p. 10): carbon 12, 13 and 14. Of the three, carbon 14 is the rarest in the atmosphere, and the only carbon isotope that is radioactively unstable. Like other unstable isotopes, carbon 14 has a half-life and so decays at a rate known to archaeologists. An element's half-life is the period of time it takes for half of the atoms present to decay away, which for carbon 14 Libby calculated as 5568 years, although more recently this was adjusted to 5730 years. So, the amount of carbon 14 left in a sample indicates how long ago it ceased to live. However, it is not the case that a sample can be analysed and a specific calendar date easily ascertained. 
The presentation of radiocarbon results requires some explaining. The level of carbon 14 in the atmosphere is not the same as that of the past, having been changed by carbon emissions since the Industrial Revolution and by the use of nuclear weapons. For this reason all results are standardised to the archaeological 'present', set at 1950, and presented as however many years before the present, or BP. There is also always a margin of error attached to these dates, a standard deviation calculated by the archaeologist. Factors such as contamination and decay of the sample, or inaccurate testing, make it highly likely that the date provided is not exact. The real figure can only be shown as lying between two dates, e.g. 2600 ± 400 BP. This result would indicate that the age of the sample lay within a period from 400 years before 2600 to 400 years after 2600, i.e. 2200 to 3000 years before the
But in the main, many health inequalities are a direct consequence of socio-economic circumstance and demand both a greater understanding of their aetiology and effective action to reduce and, eventually, eradicate them. Figure 1 illustrates the prevalence of health inequalities in some of the major causes of chronic illness. It is with these inequalities in mind that in July 2003 the UK Government launched a comprehensive initiative entitled By 2010 it aims to reduce inequalities in health outcomes by 10% as measured by infant mortality and life expectancy at birth. A pertinent example of a specific health inequality is in the rate of infectious diseases. In one study, hospital admissions as a result of gastrointestinal infection were observed to be 2.4 times higher in the poorest fifth of the population compared to the richest. These data suggest that health inequalities are in some way related to financial circumstances; they are in essence an income inequality. The key question here is why should this be the case? Many commentators have examined the role of behavioural or cultural, materialist or neo-materialist, or psychosocial factors as the underlying causes of the emergence and reinforcement of health inequalities. The behavioural or cultural explanation posits that inequalities are a result of individual choice, specifically the choice to engage in behaviours deleterious to health, such as smoking, poor diet and lack of exercise. It has been suggested that between 10 and 30% of health inequalities are directly attributable to differences in lifestyle choices that affect health. If one looks at smoking, a major cause of mortality and morbidity, numerous studies have shown the variation in smoking rates across the socio-economic spectrum. 
In one such study from 1994, 16% of men and 12% of women in social class (SC) I identified themselves as smokers, compared to 40% of men and 34% of women in SC V. The materialist/neo-materialist perspective holds that it is the social differences in material circumstances that lead to health inequalities. For example, poor housing tends to be damp, cold and mouldy and is linked to a greater incidence of asthma and respiratory diseases, particularly in children. Neo-materialism goes further and argues that in addition to individual lack of resources, there exists a systematic underinvestment in public infrastructure such as health services, education and transport. As", "label": 1 }, { "main_document": "indicates a subclass of the Thing that the noun referent is either a member of or not. Thus, if the word in question is a classifier it cannot be modified by adverbs as epithets can. Determiners are a category that hosts very different semantic notions. This fact explains the wide variation of different classification systems that have been introduced. Downing and Locke distinguish between two main types of determination: deictic and quantifying (Downing and Locke 2002:436). These two groups are divided into various subtypes, and deictic determiners can also be divided into specific and non-specific. Huddleston (1984:354-399), on the other hand, distinguishes the articles from 14 types of \"other\" determinatives, with exact, but rather small categories like \"alternative-additive determinative\" ( Much effort has been made to categorise determiners according to their semantics. The grammatical behaviour of such categories has mainly been linked to the position of their members within the noun group. Berk (1999:58) and Quirk et al. (1985:256) have published an account of which type of noun head can be specified by which types of determiners. 
However, it only gives information on which determiners occur with singular count, plural count and/or noncount nouns, and these co-occurrences are not linked with a coherent semantic parallel. The only semantic classes that display a distinct grammatical behaviour seem to be the types outlined in section 2.1. The position of articles, genitives, demonstratives and interrogatives in the noun group is fixed: they occupy the central position amongst the elements before any epithets, classifiers and the head noun. Quantifiers, however, can be found in all three determiner positions, although the central position can only host exact quantifiers like The quantifiers Thus, taking into account a few exceptions and bearing in mind the difficult case of quantifying determiners, it seems possible to assign a distinct grammatical behaviour to certain semantic classes of determiners. The answer to the first essay question, \"How do we distinguish determiners from other elements that premodify nouns in English?\", has shown that there are relatively reliable checks to distinguish between determiners and other prenominal modifiers. Section 2, however, dedicated to the second essay question, has shown that distinguishing between different semantic classes within the determiner category is not a question of clear-cut boundaries and obvious perspectives. But it is possible to point out some \"semantic classes of determiners that behave in grammatically distinct ways\", at least as to their typical position in the noun phrase.", "label": 0 }, { "main_document": "home-country staff or practices when expanding abroad (Roper and Guerrier, 2001). Local people are understood to have a greater understanding and knowledge of their market (Wind, 1997). Using a polycentric approach also means that the strategic choices are formed to fit the different countries where the company is operating. 
Such an approach seems to be most suited to larger firms with enough resources to investigate local markets, and appears to be very market oriented (Wind Using a regiocentric approach, managers are recruited from the specific region to fill top management positions (Go and Pine, 1995). The whole region is seen as one potential market, ignoring national boundaries, and strategies are developed on a regional basis (Wind, 1973). Go and Pine (1995) argue that a regiocentric policy has the advantage of considering the connections in politics and economy between nations within a region, as well as providing more flexibility than a polycentric policy. The geocentric approach is quite similar to the regiocentric, assuming that cultures and markets around the globe have certain similarities (Roper The aim is to create a truly international management who can move from country to country without facing any difficulties (Go and Pine, 1995). A company adopts its best practices, either from the home or the host country, and uses them on a global basis (Roper, 1997), and the products and services tend to be standardised (Wind There is no single superior or dominant approach to managing a hotel business internationally, and the approach is likely to vary within a company depending on the different situations and decisions to be made (Roper and Guerrier, 2001). However, for the expansion into the Canadian market, Perfection Hotels should use a regiocentric approach. They have been very successful in the domestic business market in the UK, and the environment and culture in Canada are quite similar. Therefore, Perfection Hotels are likely to succeed in the domestic business market in Canada too. The advantages of taking political and economic connections into consideration also back up this approach. In addition, Wind (1973) argues that both regiocentric and geocentric approaches provide some significant advantages over ethnocentric and polycentric ones. 
The main advantage is gained through the identification of regional/global market segments, and the development of standard policies within each segment provides a higher degree of control and coordination. Between regiocentric and geocentric approaches, they argue that the regiocentric is generally viewed as more economical and manageable. Given that Perfection Hotels is a small player with limited resources, a regiocentric approach to managing their international expansion will be most appropriate. Davis (2000) argues that a brand is an intangible but critical component of what a company stands for, conveying what the company sells, what it does, and what it is. The brand image exists in the consumers' mind, made up of the sum of all the information received about the brand from past experience, word of mouth, advertising, and service, modified by their perceptions, beliefs, and social norms (Randall, 2000). The value added to the product by the brand is called brand equity (Farquhar, 1989), which
I believe that this search was futile and would not have given any results. Searching for factors using this method is not effective, and it would probably take a few million years for the fastest computer to solve it. The next idea which I thought of was extracting square roots from the divided number, and searching for the factors around them. The value that I got was 2.697392774 * 10 This could be more effective if I were sure that this number has only two factors left, but it could as well have more of them. I also tried to search using some random values, but without any success. This is the Maple program which I wrote. It was very helpful but still my computer was too slow to find any result:", "label": 0 }, { "main_document": "understand the actions required for a successful outcome. Not only are IJVs caught in the competition, but they also deal with parents' intervention and conflicts. To increase the likelihood of success, parent firms need to work out the following areas linked to the IJV lifecycle. Trust and time are positively correlated: collaboration length between partners leads to superior overall performance and in particular to risk reduction, export sales and profitability (Luo 1997). Parkhe (1998a) suggests different levels at which trust must be built: organizational, functional and individual. In successful alliances, trust is often touted as a prerequisite, a necessity, an absolute must (Byrne, 1993). In Russia, trust proved to be the Key Success Factor that had the greatest ability to differentiate between IJVs with good or poor performance (Fey 1996). The reverse is also true: a major contributor to failed alliances is lack of trust (Parkhe 1998b). Developing trust reduces risks (Das et al. 2001) and facilitates conflict management. Together, these points are called the \"liability of foreignness\" (Vermeulen 2001). Due diligence must be done to find evidence for all important assumptions. 
These skills can be learnt even though prior experience between partners is not necessarily a predictor of an IJV's success (Inkpen 1999). However, for some IJVs, experience is key; in the Samsung venture, BP made sure it was responsible for all the marketing from the plant, where its experience elsewhere in the world has proved crucial (Young 1994). Another example is Peugeot, which, not used to sharing control over its manufacturing plants, was forced to wind down its IJVs in both China and India (Vermeulen 2001). Alternatively, a local partner's market and international experience are both found to have a favourable influence on the IJV's risk reduction, market development and accounting return (Luo 1997). Partners strategic A foreign investor cannot, nor will it need to, find a local partner possessing superior attributes in all of the above. The importance of specific attributes within a category is dependent on what the foreign company wants to pursue from the venture. To paraphrase with an example, a new fish in the pond can starve to death because it does not know how to locate the food. As a late entrant in China, Toyota dealt with E.g. marketing competence, relationship building, market position, industrial experience, image etc Leadership, rank, ownership type, learning ability, foreign experience, and human resource skills profitability, liquidity, leverage, and asset efficiency In Russia, a common disagreement arises because the Russian [local] partner wanted to conduct some research and development, but the foreign firm was interested in the venture only as a sales representative (Fey 1995). The primary motivations for the Chinese in entering a JV are to obtain technology, capital, management expertise, and short-term success. The aim of foreign investors, however, is to gain market access to China with a potential for long-term growth. Partners particularly need to ensure that the IJV processes will allow them to reach their compatible goals: e.g.
technology knowledge or gain market share. It is the firm that moves quickest that will gain access to better partners and will in turn provide", "label": 0 }, { "main_document": "Nothing but a mine below it on a busy day in term time, with all its records, rules and precedents collected in it...\" and also at Chapter 39, \"Chancery, which knows no wisdom but in Precedent, is very rich in such Precedents; and why should one be different from ten thousand?\" But this was not a necessarily disagreeable thing, given the criticisms which plagued the early modern Court of Chancery that justice, or rather equity, varied according to the length of the Chancellor's foot. There could not be any allegations of unrestricted subjectivity, which was paradoxical considering that it was subjectivity that lent equity a sense of authority over the law. Subjectivity did not mean unfettered discretion but a sense of doing what was right according to the conscience and morality of the individual Lord Chancellor or Master of the Rolls. The actual practice of the old Court of Chancery, which was slow and protracted, invited the criticism of authors such as Dickens. This opinion is equally shared by Jeffrey Slusher that, \"[p]erhaps the best-known condemnation of the unintelligible character of 19th-century English law is found in Charles Dickens' In The sense of prolonged litigation is clearly highlighted in the above statement. But to what extent was the logjam in the Court of Chancery attributable to the system itself? Were the deficiencies in the Chancery exaggerated by Dickens and his contemporaries? Although there were some accurate criticisms regarding the practice of the Court of Chancery, due recognition has to be given to the law reformers who sought to remedy the defects. It was not always clear what the root cause of a particular problem was. This was because of the fluctuating statistical information and methods used to calculate the turnover of cases in Chancery. 
For instance, Michael Lobban acknowledges in his article that \"[i]n the era of Lord Eldon's chancellorship, the figures for the number of bills filed in Chancery could thus be used as a weapon to defend or attack Eldon, depending on which years were chosen by the speaker for comparison.\" The fact that the underlying problem was not aptly detected meant that reform was slow-going. Perhaps a single main problem was never to be discovered, perhaps because the weaknesses that plagued the Court of Chancery were a combination of different factors. Factors such as a lack of personnel, indecisiveness, a flourishing industrial economy, a separate jurisdiction of law and equity, and mounting administration suits could all have played a part. This is a clear thread running across Michael Lobban's articles in relation to the reforming of the nineteenth-century Court of Chancery. Dickens might have been biased against the Court of Chancery as he had been involved in a copyright case, which could have motivated him to formulate certain normative opinions. It is apparent when reading When the solicitors Mr. Kenge and Mr. Vholes express their reverence for the Court of Chancery, a hint of sarcasm by Dickens can be detected through these characters, e.g. when Mr. Kenge opines that the established law of equity in England is a great system, \"O really,
Kwicksave is at position 1 on the strategy clock, which represents the 'no frills' concept that Tesco started off with - 'pile it high, sell it cheap.' Tesco has changed position over time, moving around the strategy clock, although even during its early years it did show some signs of differentiation. Tesco established the first loyalty scheme, which involved giving out green stamps that customers could collect and exchange for gifts or vouchers. The multi-dimensional perceptual map above represents where the focus of each supermarket's strategy lies. It shows how each supermarket differentiates itself from the competition by focusing on specific areas. Tesco is in the middle of these four areas of diversification, showing how it diversifies in several ways. Tesco offers a low-price 'value range' while differentiating itself from other supermarkets, such as Kwicksave, by focusing on providing superior customer service. This included such things as making staff available to help customers pack bags and take them to the car, and having a policy of opening checkouts if there was more than one person in a queue. Other areas of customer service improvement also enhance loyalty amongst customers. These include linking in with the Airmiles group in relation to its Clubcard and the provision of facilities such as baby changing units, restaurants and coffee bars. This focus on customer service was also developed through successfully introducing shopping and home delivery via the Internet. Tesco also offers a 'finest' product range as well as a brand called 'Free from' for customers with special dietary needs, in order to be in close proximity to the area of high-quality products (and appeal to the same segment as Waitrose). Sainsbury's have a similar position on the perceptual map to Tesco; however, one attribute which differentiates Tesco from this competition is the location and range of stores. Tesco have set up small inner-city store locations to move into the convenience store market. 
This has also set Tesco apart from Asda, who have focused on big out-of-town hypermarkets. Tesco put emphasis on having a range and variety of products which other supermarkets do not provide. Sainsbury's seem to be following a similar strategy to Tesco but are one step behind them in innovation. When Tesco brought out the Clubcard, a scheme to encourage loyalty from its customers, Sainsbury's at first dismissed it as a gimmick. The diagram above shows some of the possible development directions which can be adopted. Tesco occupy the whole of this grid as they have pursued development in four areas. Increasing market share of the UK grocery", "label": 1 }, { "main_document": "the 19 During the Industrial Revolution, several key features of \"modern industry\" appeared in Britain, namely the use of machinery, fossil fuels and synthetic materials, and the emergence of large-scale enterprise. These \"macro inventions\" greatly contributed to overcoming the law of diminishing returns, which was precisely what the Dutch Republic was lacking. The steam engine, a more reliable power source, was also widely used in many sectors, including mining, cotton, iron and railways. Although Britain had a higher percentage of employment in agriculture during the 1700s, this figure quickly fell to 37% in 1820. Like the Dutch economy, agriculture in Britain was highly commercialised, with the use of enclosure methods to improve techniques such as selective breeding. The state also played an important role in establishing suitable institutional frameworks to protect property rights. One of the most significant institutional developments in Britain was the Glorious Revolution of 1688, which resulted in a constitutional settlement allowing the government to commit to protecting property rights (patents), which encouraged market-led growth through innovation and technological progress. The Glorious Revolution also contributed to the development of a strong public finance system in Britain. 
Also, institutional changes such as parliamentary supremacy gave the parliament a certain degree of power over important issues such as finances, effectively reducing the 'divine rights' of the King. The dethronement of Charles I and James II acted as a warning for Kings, but it was also crucial that the parliament did not have so much power that it could become the new autocrat. The parliament also played key roles in significant developments in the 18 These new institutional frameworks assisted London in becoming one of the biggest commercial centres in Europe, challenging Amsterdam's position as the leading financial centre. Some people argue that imperialism played an important role in Britain's economic growth. However, although trade does help commercialisation, it alone cannot have been the only factor in stimulating economic growth. Furthermore, colonies are expensive to maintain, so free trade is a more desirable policy. Again using Kuznets' six characteristics of modern economic growth, we can see that Britain did make the transition into modern economic growth, mainly through the Industrial Revolution. Like the Dutch, it had a rise in population as well as per capita income. However, unlike the Dutch, the British were able to sustain this growth, thus breaking out of the Malthusian Trap. Furthermore, Britain's growth was based on industrial progress, and not just specialisation as in the Dutch case. Unlike the Dutch, Britain also showed a high degree of structural change from personal to large enterprise, and not just structural changes in the economy from agriculture to industry. As in the case of the Dutch economy, there was a high degree of urbanisation and religious toleration, and Britain was also highly integrated into the world economy. In terms of standards of living, Britain also enjoyed higher living standards than other countries, and it overtook the Netherlands as the leading economy with the highest per capita income by 1790. 
To conclude, we can say that Britain succeeded in making the transition into \"modern economic growth\"", "label": 0 }, { "main_document": "travel agents who are able to provide tailor-made packages designed for special requirements and individual interests. Examples of electronic integrators are Expedia, Travelocity and Ebookers. However, they can also act as aggregators at the same time, depending on the offer they propose. Principals are the companies themselves, who offer the product directly to the customer. There is no middleman between the supplier and the consumer. These are hotels, airlines and different visitor attractions that are emerging with their own Web sites. Nevertheless, the boundaries between all these suppliers - aggregators, integrators and principals - are starting to blur. In order to stay competitive they might have to be very versatile and try to satisfy different needs in the near future rather than concentrate on average demand. This has led to a channel conflict about 'who owns the customer'. The distribution channels used to depend on each other and thus work in cooperation with each other; nowadays, however, they are competing with each other. Traditional bookings were made through two channels: one provided the necessary information and the other carried out the transaction (O'Connor and Frew, 2002). Electronic-based systems play both roles, making it more convenient and less expensive to put together a booking and purchase a holiday, in a shorter time as well. Online buyer behaviour needs to be taken into account when creating an internet marketing strategy. In order to satisfy specific needs and to design appealing products, marketers need to gain as much information about their target markets as possible. There are different models of online consumer behaviour created by various authors. 
Lewis and Lewis (1997, cited in Chaffey, 2003) identified five categories of people using the Internet: Another approach was developed by Styler (2001, cited in Chaffey, 2003), who believes there are four types of customers: In the same year Kothari (cited in Chaffey, 2003) proposed a segmentation based on the level of brand knowledge and on whether one is searching for specific information or only 'surfing' the World Wide Web. They identified: From a supplier's point of view, these authors suggest that one must consider these different behaviours and decide on a target market in order to design a Web site and provide appealing information and navigation assistance according to the level of the specific audience (Chaffey For example, business-to-business sites should consider that most of their consumers will mainly be information seekers and buyers. Many chief executives of leading e-commerce companies agree that loyalty is an economically and competitively essential attribute (Reichheld and Schefter, 2000), as acquiring new customers is extremely costly and long-term profits depend highly on customer loyalty. Trust is stressed most when talking about electronic commerce. Because customers cannot see and touch the products they buy, they need to trust the Web site in order to share their personal details with suppliers. Many people believe that price is the only reason that drives customers to buy products and services online; on the contrary, research indicates the opposite. Reichheld and Schefter carried out a survey in their companies and", "label": 0 }, { "main_document": "In this essay I will attempt to address the question of how institutions affect economic performance. The discussion will also try to explain why institutions are so hard to change, and whether or not economic development is possible without institutional change. 
I will do this by looking at the historical evolution of institutions and markets, and use this to see how economic development through history has occurred. First of all, it is important to know what institutions are. As defined by North, 'Institutions are the humanly devised constraints that structure political, economic and social interaction. They consist of both informal constraints (sanctions, taboos, customs, traditions, and codes of conduct), and formal rules (constitutions, laws, property rights).' Institutions exist primarily to reduce uncertainty and transaction costs and create some kind of order in exchange. Exchange has been interpreted in many ways throughout history, from the early beginnings of the simple hunting and gathering society, through longer-distance trade, until finally exchange has evolved into our modern, well-developed markets connected worldwide (North, 1991, p97). There has been a massive development in how exchange takes place. In the early hunting and gathering societies exchange had low transaction costs and occurred only within the small villages. There were no formal constraints, but 'the threat of violence is a continuous force for preserving order because of its implications for other members of society.' This was a largely uncomplicated society, where exchange was limited to a few commodities and actors. But as trade developed and expanded into long-distance trade, exchange and trading suddenly became more complicated and the possibilities for conflict grew. The exchange of commodities took place at temporary gathering places or more permanent towns, and more resources had to be dedicated to measurement and enforcement. This implied that the transaction costs grew significantly. 
Another setback in this form of trade is the Fundamental Problem of Exchange (FPOE), which Greif explains using game theory. In short, the FPOE describes the uncertainty in exchange: how can the actors be assured that the other side of an exchange will fulfil their contractual obligations? This is mainly due to the lack of an institutional framework that can ensure that all sides of the trade are properly executed. But such a favourable institutional framework had not been developed at this early stage (North, 1991, p99; Greif, 2000, p254-255). In neo-classical theory economic performance is, to a large extent, explained by factors such as voluntary exchange and technological innovation. While the importance of institutions is acknowledged, they are usually assumed to be perfect or at least fixed. For instance, the transaction costs of an exchange are often assumed to be zero (perfect institutions enable the transaction to take place at no cost). Opportunities to exchange contribute to economic efficiency in a number of ways, for instance through the gains from natural comparative advantage and the division of labour, which brings about specialization, learning by doing, and technological innovation. Institutions are therefore important in explaining long-term economic change. Neo-classical determinants of economic development depend on institutions. For example, in", "label": 0 }, { "main_document": "to conduct a ballot complying with the cumbersome and time-consuming procedure before staging the strike so as to gain protection? Since unions are required to give notice to the employer at least seven days before the action, why does the employer not need to give any notice of decisions which are closely related to employees and are likely to provoke conflicts? In this regard, the TUC and TGWU have urged the government to greatly simplify the regulations on balloting, and to repeal the requirement of giving notice to employers. 
At present, under Britain's labour law, any employee who takes part in a strike or other industrial action may be in breach of his or her employment contract, in which case the employer is entitled to dismiss him or her. It is true that there is protection against dismissal for employees taking part in a strike. But this only applies 'in the case of a lawful strike, a term which is narrowly defined in a way which is in breach of UN, ILO and Council of Europe standards' (Prentis, 2004, cited in IER, 2004: 17). Besides, even if the action is lawful and protected, the protection only lasts for a period of 12 weeks, which was lengthened from 8 weeks by the Employment Relations Act 2004. This small improvement is far from enough, because employers are still permitted, after 12 weeks of industrial action, to dismiss strikers for breach of contract even when the statutory requirements of balloting and notice periods are met. In most European countries the constitution provides the right to strike, and a lawful strike does not break an employment contract but merely suspends it. It is therefore unlawful to sack a worker on a lawful strike, and the courts will prevent it. However, the UK has never satisfactorily complied with these international legal obligations. The Committee on Economic, Social and Cultural Rights in its 1997 Report stated that the common law approach of the UK only recognized the freedom to strike, but that 'the concept that strike action constitutes a fundamental breach of contract justifying dismissal, is not consistent with protection of the right to strike' (IER, 2004: 3). Woodley (2004, cited in IER, 2004: 7), General Secretary of the TGWU, also commented that, 'If this were the law here there would be no need for complex unfair dismissal rules to protect strikers'. In the second place, an employee who is dismissed by his employer while taking industrial action may lose his right to claim unfair dismissal. 
The law simply denies protection for the right of employees in such cases. In addition, the DTI (2005: 39) has drawn attention to the fact that 'the courts have interpreted this legislation as applying to any industrial action - whether or not it involves breach, or interference with, the performance of the employee's contract of employment'. This may provide a ready pretext for employers to dispense with their workforce during the course of industrial action. Further, there is no protection against dismissal for workers who take part in unofficial industrial action. An employee who is dismissed for taking 'unofficial' industrial action will not generally be able", "label": 0 }, { "main_document": "is the last step in the entire synthesis it is quite poor. The synthesis of This final step then loses over half of this expensive starting material. It is not written whether the starting material that does not form product can be recovered, but even so the step can be considered inefficient. This paper describes the asymmetric synthesis of the diastereomeric core of salicylihalamide. The total synthesis of this compound is clearly of medicinal and academic interest, given the number of papers published on it. However, the reason for this paper is unclear. Although it achieves its target, the synthesis is long and convoluted, with the final step achieving a poor yield. The asymmetric steps used seem to have worked well, but the essential data needed to enable external validation is missing from the paper. Also, as the synthesis of this molecule has already been achieved in a number of ways, the origin of this research is unclear, as no attempt at comparison with other methods is made to show an improvement in methodology. 
Although this paper contains some interesting chemistry, overall it is fundamentally flawed.", "label": 1 }, { "main_document": "Economic stability is the absence of inefficient aggregate fluctuations in the inflation rate, economic growth, the level of employment and the balance of payments. If we believe a market economy is itself unstable, then stabilisation policy is introduced as a set of monetary and fiscal policies designed to attenuate the fluctuations. In this essay, I am going to discuss thoroughly how policy should be used to stabilise the economy, with examples from the UK. I will discuss the policies in four areas. Economic growth is the percentage increase in real national output in a given time period. Given the importance of economic growth in reducing poverty and improving living standards, ensuring sustained growth is a key priority for every country. In the classical model, total output is supply determined. The strong assumption of perfect price and wage flexibility implies that total output always equals potential output. Classical economists believe \"free markets are inherently self-stabilising\", thus any attempt at intervention will only \"mess things up\". In contrast, another more realistic \"fixed price model\" suggests that output is demand determined and can deviate from potential output. When the economy is in recession, an expansion in fiscal policy can shift the aggregate demand curve to the right (AD1) by increasing government spending (see dig.1), and the multiplier effect will generate higher output in the short term by In December, the chancellor announced the PBR, forecasting GDP growth of 3.25% this year (see dig.2) and said in his pre-budget speech: \"Britain will extend the longest period of uninterrupted growth in the industrial history of our country\". 
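The multiplier invoked above (the essay's own expression for it is elided) can be written out explicitly; the following is the standard fixed-price textbook relation, assumed to be what the essay intends, with c denoting the marginal propensity to consume:

```latex
% Simple Keynesian spending multiplier (standard textbook form; the essay's
% own elided expression is assumed to be equivalent).
% A rise in government spending \Delta G raises equilibrium output by:
\Delta Y = \frac{1}{1 - c}\,\Delta G
% For example, c = 0.8 gives a multiplier of 1/(1 - 0.8) = 5, so a rise in G
% raises short-run output fivefold, before any crowding-out effect.
```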
However, the 2004 annual economic summary from National Statistics points out that \"UK economic growth has continued to be primarily driven by government expenditure and household consumer spending\". The UK budget deficit has increased from 1.53% of GDP (2002) to 2.94% (2004), and it may exceed the euro zone limit of 3% by 2005. In the fixed price model, the increased output might be less than the full multiplier effect in the long run due to the crowding out effect. Governments often borrow money (by issuing bonds) to fund the additional spending. Therefore in dig.1, the MD curve shifts to the right and forces the interest rate up, which \"crowds out\" private investment because of the higher cost of borrowing. Correspondingly the AD1 curve shifts back to AD2, corresponding to a lower output. An alternative way to stimulate a depressed economy is loose monetary policy. In the money market, if the central bank lowers its lending interest rate to R1 by raising the money supply (dig.3), borrowing for private investment becomes more attractive, shifting the LM1 curve to the right (LM2). In the goods market, the AD1 curve will correspondingly shift to AD2 and thus total output increases. In the long term, the expanding aggregate demand may lead to higher demand for money, shifting the MD1 curve to the right (MD2), and the interest rate will rise to R2, which is still lower than", "label": 0 }, { "main_document": "Since Neolithic times, Man has dramatically altered the landscape of Britain from an almost blanket forest cover more than 12,000 years ago to a predominantly open landscape today. The present landscape is made up of a variety of habitats. As well as woodland, there are also those created by Man, including moorlands (especially on the uplands), heathlands and grasslands. 
These are 'plagioclimax communities', habitats that have been arrested at a particular stage of succession, and are maintained under various forms of management pressure such as burning and grazing. Grazing comes in many forms. The major grazing animals of Britain today are cattle, ponies, sheep, deer, and rabbits. The main effects of herbivores are grazing, trampling, elimination and nutrient input. Grazers vary in what vegetation they eat and how. Sheep are able to crop the grass to a very small height (mm) due to their jaw and teeth structure, whereas cattle tear grass with their tongues. Ponies also crop the grass to a short height but are important in the nutrient cycling of an area (Putman In temperate systems, heath, acid grasslands and bogs experience lower grazing pressure during a year than woodlands (especially deciduous). The latter are used constantly throughout the year for shelter and feeding. Various improved grasslands also have a high level of use by herbivores. Ponies spend half their time on grasslands over a year (Putman, 1986). Grazing animals have an immediate influence on the functioning and development of a community by altering the relative abundance of plant species. Often, palatable plants are eaten at the expense of less-favoured coarse species, and in time the latter come to dominate a habitat. This report aims to give a brief overview of the effects of grazing pressure on the upland moorland, lowland heathland, lowland deciduous forest and grassland habitats of Britain. The role of grazing in conservation is discussed. 'Browsing' is included in the meaning of 'grazing' in this review. Particularly since the Second World War, Britain's uplands have been subjected to intense grazing pressure by sheep, causing great loss of heather moor. As a consequence, there has been a general decline in biodiversity in the uplands, including Snowdonia National Park, a designated Special Area of Conservation. 
Sheep grazing in upland Wales is not beneficial to wildlife, as sheep prefer more palatable species and avoid siliceous plants e.g. For example, the Their ranges have extended greatly across the upland regions due to grazing. Good (1990) studied vegetation type in relation to sheep densities in north Wales. Under increased grazing pressure, bushes have decreased and recruitment has been prevented. In order to allow saplings to establish and mature, sheep density must be reduced for a prolonged period of time (e.g. 15 years). If grazing continues at high levels, conservation problems are likely to occur in the future. At Hafod y Llan farm, in the Snowdon National Nature Reserve, Wales, sheep density has increased over the past few decades. The pressure has led to a severe decline in valuable habitats for wildlife such as mires and wet heaths, which support breeding curlews and lapwings. This is a typical case", "label": 1 }, { "main_document": "and a family history, we are looking into all of these causes whilst you are in hospital so that we can aim to modify them and reduce your risk of having further strokes. An increase in your blood pressure and cholesterol can cause narrowing of your arteries, which reduces the blood and oxygen supply to your brain. Your weakness may remain indefinitely, but with therapy and help at home we hope that you will regain the use of your weakened limbs\". To describe the management plan: \"Whilst you are in hospital we will start you on some medication that will thin your blood and some medication that will help to reduce your cholesterol levels. We will arrange for the physiotherapists, occupational therapists, dieticians and speech and language therapists to assess your abilities. We will arrange for you to have a CT scan of your head so that we can see any damage that has been caused by your stroke, although sometimes the scans do not pick up all brain injury. 
Once we are medically happy with you on the acute stroke unit we will arrange a transfer for you to the rehabilitation unit, where you will receive further input from the therapists. When you are ready to go home we will devise a package of care for you that will help you and your wife manage once you are discharged\". On admission Mr Mr Mr Following admission Mr Mr In the elderly, confusion is often due to a change in surroundings and unfamiliarity and with Mr It is important to rule out any systemic infection, as this can manifest as confusion. A UTI is a common cause of confusion in the elderly. Observations need to be continual to assess Mr Assessments of his self-care ability are also paramount to helping him get back his independence. Swallowing assessments are important in order to make sure the Mr Mr The medical team and the therapists all work together in order to assess Mr Liaisons are also made between himself, his wife, staff and social workers in order to arrange a care package that gives them enough help at home. It is important that all of these agencies discuss Mr Target setting in the MDT also allows goals to be set for subsequent meetings and is a good way of assessing progress. Mr It is important to advise Mr Mr Mr Mr Stroke represents the third most common cause of death in the UK and developed countries, accounting for 12% of all UK deaths (1). The incidence increases exponentially with age, rising from about 3 per 10,000 in the third and fourth decades to about 300 per 10,000 in the eighth and ninth. Total prevalence is estimated at 5-8 per 1,000 over the age of 25 years. About 16% of all women and about 8% of all men are likely to die of a stroke (2). The cumulative risk of recurrence of a stroke is high; between a third and half of survivors are likely to experience a", "label": 1 }, { "main_document": "this study's success. The species identification process is of interest for many different individuals and organisations. 
This is especially true within the scientific community and within conservation agencies. When a new taxon is discovered, the general knowledge about its behaviour and geographic distribution is bound, by virtue of its novelty, to be limited. However, certain traits of Galagidae taxa are relatively easy to collect (e.g. vocalisations) and their diagnostic value for species description is comparatively high. Further ascertaining the importance of vocal repertoires in the species denomination process will speed up the process of identification and description for new Galagidae taxa. Positive identification will enable and encourage other researchers to confirm the new findings in independent studies. More information is collected and the actual process of naming the new species is made possible. A common way for conservation agencies to argue for the preservation of pristine habitats is to emphasise their high biodiversity. Biodiversity can be measured in several ways (Purvis & Hector, 2000): When lobbying governmental officials for the conservation of habitats, the last measure (Number) is likely to be the most effective and is the most commonly used feature for describing species richness. Hence, the addition of new species names will have a positive effect on any conservation effort taken. Since the process of naming a new species includes several obligatory predetermined steps, the evaluation of project progress is more or less integrated into the work process. The steps involved are: literature review, morphological analysis and description, and intra- and interspecific character analysis and comparisons. The literature review of each species will sort out what type of research has been done and enable a ranking of the probability of successful species-specific description, e.g. 
if the species has only been seen once and ten minutes of vocal recordings are available, one is most likely to fail in any attempt to validate its taxonomic status at species level. Once the first cut is made, progress of the project will be monitored and evaluated by the completion of each predetermined analysis. Following the completion of the various intra- and interspecific analyses, the picture of the project's overall success will gradually emerge. The project will be considered a success if a new species can, unambiguously, be labelled with a Latin binomial name. The process of writing up a paper for publication on the project outcome is then merely a matter of labor. However, if the interspecific comparisons reveal that we are not dealing with a new species, any data that has been extracted on the species from the project in terms of biogeography, morphology, behaviour or genetics is considered a partial project success. Any character description derived will be incorporated into a database and will most likely be used for future taxonomic surveys. An integral part of the process of describing new species includes publicly presenting your results. This means that the results should be readily available in the scientific literature according to the rules of The International Code of Zoological Nomenclature (ICZN, 1999) to be valid. The information about the new species", "label": 0 }, { "main_document": "In this assignment, the brake disc of a motorcar is chosen as a component to design. A disc brake is a device for slowing or stopping the rotation of a wheel; the brake disc is connected to the wheel or the axle. To stop the wheel, the braking pads are squeezed mechanically or hydraulically against the disc on both sides. Friction causes the disc and attached wheel to slow or stop. In order to design an appropriate brake disc, many factors have to be considered: the thermal and mechanical stresses the disc will endure during different operations have to be analysed. 
However, the main disc damage modes are warping, cracking and scarring, and warping is the most important one that engineers have to take into account when determining what kind of material to use and how thick the disc should be. So the design parameters for the brake disc are disc velocity, brake pedal force and structural loading, such as the surface moment from the hub and the surface pressure from the pads. The design objective is to prevent the brake disc being damaged by the three main damage modes, especially warping. First of all, the dimensions of the brake disc are decided. In this design, the radius of the disc is 150mm; although a bigger disc would provide better braking performance, this dimension is suitable for road cars. The thickness of the disc is 22mm. It is hollowed out with fins joining together its two contact surfaces; this kind of design helps to dissipate the generated heat, because the temperature on the disc is very high when the braking pads press against the brake disc, and it can reach 1200 Secondly, plain carbon steel is selected as the material of the disc, since carbon steel can either be cast to shape or wrought into various mill forms, and it can be machined easily. This is very important for a brake disc: if scarring on the disc is not excessive, it can usually be repaired by machining off a layer of the disc's surface. In addition, the mechanical properties of carbon steel meet the distinct mechanical requirements of structural applications. Thirdly, structural loading is considered. The surface moment on the hub is assumed to be Assuming the mass of the brake disc is 9kg, according to the centrifugal force formula And the brake pedal force on the disc is 800N, the area of the disc that the force applies to is Therefore, the surface pressure on each side of the disc is Fourthly, structural supports are considered. There are two supports, which are a cylindrical support and a fixed support. 
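The loading arithmetic in this step can be sketched numerically. The disc mass (9 kg), radius (150 mm) and pedal force (800 N) are taken from the text; the angular velocity and pad contact area are assumed placeholder values, since the essay's own figures for them are elided.

```python
# Hedged sketch of the structural-loading arithmetic described above.
# Values marked ASSUMED are placeholders, not the essay's elided figures.

m = 9.0          # disc mass, kg (from the text)
r = 0.150        # disc radius, m (from the text)
F_pedal = 800.0  # brake pedal force on the disc, N (from the text)

omega = 100.0    # angular velocity, rad/s (ASSUMED, roughly 955 rpm)
A_pad = 0.004    # pad contact area per side, m^2 (ASSUMED)

# Centrifugal force on the disc: F = m * omega^2 * r
F_centrifugal = m * omega**2 * r

# Surface pressure on each side of the disc: p = F_pedal / A_pad
p_surface = F_pedal / A_pad

print(f"centrifugal force: {F_centrifugal:.0f} N")  # 13500 N
print(f"surface pressure: {p_surface:.0f} Pa")      # 200000 Pa
```

With the assumed pad area, the 800 N pedal force corresponds to 0.2 MPa of surface pressure; the real value scales inversely with the actual contact area.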
Empirically, the reaction force from the fixed cylinder is about 2500N and the reaction force from the fixed surface is about 40000N. With the aid of the SolidWorks and COSMOS tools, the geometry of the disc is shown in Figure 1. One restraint and four loadings are applied to the brake disc, as shown in Figure 2. The central hole in the hub is fixed, a 30 Newton force is applied to each of the five bolt holes, and an 800 Newton force is", "label": 0 }, { "main_document": "This essay will first introduce why language acquisition by apes is interesting to psycholinguistic research and to me personally. It will then look at two specific questions from the topic area. The first question is whether apes are capable of acquiring and using language. It will be examined through a discussion of experiments with two chimps, Washoe and Kanzi. The second question is why humans are capable of acquiring and using language. It will be considered in relation to the two main views of Piaget and Chomsky. Finally this essay will conclude by evaluating how my reading has provided an answer to these questions. A further question also arises at the end. Language acquisition and use by non-human primates is arguably one of the most interesting topics in psycholinguistic research. According to Harley (2001) the study of psycholinguistics is concerned with understanding, producing and remembering language. The topic is particularly related to understanding language, since it requires analysing the nature of language. Moreover, looking at language use by apes leads to a further consideration of language acquisition by human beings. In addition, this topic is interesting for me personally, to find out whether humans have any chance of communicating with other primates through a human language system. As was discussed in class, animal communication is different from human communication. 
However, communication with other primates will be possible only once they have acquired human language. In order to investigate this possibility, the topic needs to be examined. There are two questions in particular which are interesting and most relevant to psycholinguistic research in this topic area. The first question is whether apes are capable of acquiring and using language at a level equivalent to humans. It is said that apes such as chimpanzees are cognitively intelligent enough to have a rich communication system in the wild (Harley, 2001). Furthermore, a chimp called Sarah achieved cognitive abilities as high as those of humans at a similar age. She recognised that the amount of water does not change when it is poured from a tall, thin glass into a short, fat glass (Harley, 2001). However, as was discussed in class, it has been claimed that the vocal tracts of apes are not structured to produce speech (Harley, 2001; Steinberg et al, 2001). Therefore language here does not necessarily mean spoken language. It does, however, involve both semantic associations among words and syntactic rules in sentences, which Harley (2001) regards as essential features of human language. While a number of experiments have been attempted with various apes such as chimpanzees, gorillas and orangutans, the most famous and successful representatives are the cases of Washoe and Kanzi. Washoe, a chimpanzee, started to learn American Sign Language (ASL) when she was approximately one year old (Harley, 2001). ASL is a standard sign language widely used by people with hearing impairment in North America. It has words and syntax as all spoken languages do (Harley, 2001). While she was taught eating habits, toilet", "label": 0 }, { "main_document": "in the wake of renewed vigour on the part of the D.P.R.K to provoke and altogether bully its way to secure guarantees from the United States. 
By 2002, the North's efforts saw it violate the Nuclear Non-Proliferation Treaty, which it signed in 1985, the accord it signed with Seoul in 1991, and the 1994 Framework Agreement. However, the U.S. reaction to this age-old D.P.R.K tactic was to yet again toughen its stance. This inevitably caused Pyongyang to remove 'monitors and seals from its stock and waste fuel...and [expel] UN monitoring personnel', which 'put it in a position to add to its probable stock of one or two bombs by...perhaps half a dozen'. Ibid., p.14, and Cumings, O'Hanlon and Mochizuki, True to previous policy, 'the Bush administration eventually relaxed its terms...agreeing to a session in Beijing in April of 2003'. The Beijing negotiations heralded little, 'except the possibility of more talks', and in fact set the stage for the North Koreans to up the ante, supposedly admitting to possessing the nuclear arsenal that U.S. analysts had 'long suspected'. Without meaningful American input, the Six-Party Talks have thus yielded little since the 2002 crisis. Ibid., p.16 Wit, Poneman, and Gallucci, James Kelly, 'Ensuring a Korean Peninsula Free of Nuclear Weapons', O'Hanlon and Mochizuki, Chung In Moon and Jong Yun Bae essay in Gurtov The 1994 Framework Agreement 'showed that North Korea was willing to trade away a substantial nuclear capability for a package of benefits that included alternative sources of energy and the hope of gradual diplomatic engagement and economic recovery'. These days, the situation in North Korea is even bleaker, with economic collapse seemingly just over the horizon. 'Accordingly, Kim Jong Il has demonstrated at least a limited interest in trying economic reform, as attested by his numerous visits to China's special enterprise zones and temperate but frequent efforts to test new economic ideas'. 'North Korean leaders seem to want change; they just cannot figure out how to do it successfully while also holding onto power'. 
O'Hanlon and Mochizuki, Cumings, Containment and engagement strategies are often debated in heated, circus-like rhetoric that is entirely unhelpful. Victor Cha and David Kang's eminent study takes the scholarly debate beyond what is often a farcical intellectual abyss and presents an excellent critique of current policy hypotheses. Containment need not be an empty vessel fuelled by blustering, reactionary discourse. Sophisticated containment premises assert that the Pyongyang regime is fundamentally averse to reform and that any deviations from its conspicuously secretive, introverted and aggressive carapace are merely a 'change in diplomatic tactics' designed to reap maximum rewards from the prevailing political climate. The regimes ' Victor Cha, 'Weak but Still Threatening', in Victor Cha and David Kang, Cha, 'Weak but Still Threatening', p.15 Ibid., p.16 Ibid., p.17, and Gary Samore, 'The North Korean Nuclear Crisis', 1, 2003, pp.7-24, pp.19-23 Although the D.P.R.K is desperately weak, it does not lack the potential to go to extreme measures to survive, 'despite the many premature eulogies'. 'While fears of an imminent South Korean attack are not a salient pre-emptive/preventative motivation for Pyongyang today,", "label": 1 }, { "main_document": "possible and research should be carried out to understand the technicalities. The risk of a recession and a decrease in revenue demands a combination of allowing (conducting sensitivity analysis to see how profit is affected by particular drops in sales) and ignoring (since there is little that CFS can do to prevent a recession). Finally, the risk of increased PC / software prices, and therefore of increased cost to the project, should be managed through allowing. Sensitivity analysis should be conducted to see how the project cost is affected by a certain increase in PC / software prices. A small contingency fund should also be set aside. 
With a higher pace of change the project has a better chance of being completed on time, but there is a greater risk and less chance for consultation with key stakeholders during the realisation phase. Ultimately, at a higher pace, the project may encounter resistance. However, at a slower pace there is a higher probability of acceptance, as cultural shifts are often slow. As a shift in thinking is required of operations department staff, and thorough training is needed, a medium pace of change is suitable for the CFS project. This should enable it to complete on time, while still capturing "buy in" from key stakeholders. At a medium pace, the introduction of a "pilot" phase may be beneficial to "allow the project, the project introduction process and the operational benefits to be thoroughly evaluated" Although a pilot mitigates some of the risk of failure for the main implementation, there is an added cost to the project. As there is no implementation of cutting edge technology or a mass shift in culture, the project cannot be considered high risk. It would seem prudent to conduct a pilot phase on one of the ten multitasking teams only. This should be done over a one-month period with fortnightly reviews and lessons learned taken forward to the main implementation. Wheeler (2005), page 50 The pattern of activities defines how the project will be conducted, more specifically how activities are implemented. The two options available are a serial or a parallel approach. A serial approach, where activities are carried out one after another, would enable easier management of the project, with clear points between activities for management to exercise control. However, this approach often lengthens the time for completion. By contrast, a parallel approach sees activities carried out at the same time, and hence the time taken for completion is shortened, and there is opportunity for interaction between activities. 
Although the project may be more difficult to manage with a parallel approach, this is the recommended approach for the CFS project as the reduced time taken is a clear benefit to CFS. This step is relevant to CFS as the project will implement change to a large part of the organisation, the Operations Department. A simultaneous pattern offers blanket coverage and thus requires greater resources to adopt. The next choice is between a horizontal (implementation at a limited number of levels across the organisation) and vertical (implementation spanning all levels but to a", "label": 1 }, { "main_document": "dedicated to the project and bring the team together. e.g. if the hot shot programmers are more productive working from 6pm-5a.m wearing jeans and T-shirts with explicit text, give them the opportunity and block any attempt by the company's HR to intervene. If team members are wasting time on company meetings, employee evaluations and even disciplinary action, ask that they be excused for the duration (for the sake of the project). Risks: If full team commitment cannot be gained, it is of no use and may escalate into internal battles which may hinder the project, e.g. continuous harassment and annoyance that will shift the team's concentration away from the project. The HR department may not cooperate. Risk mitigation: Apply HR techniques such as the above to get the maximum output from the team. "But these organizations miss the fact that Microsoft and other successful commitment-oriented companies don't require overtime. They hire people who love to create software. They team these people with other people who love to create software just as much as they do. They provide lavish organizational support and rewards for creating software. And then they turn them loose. The natural outcome is that software developers and managers choose to work long hours voluntarily. 
Imposter organizations confuse the effect (long hours) with the cause (high motivation)" Use of CASE tools can significantly improve the efficiency and productivity of the project, e.g. usage of automated testing software for stress testing will cut down a lot of time and effort in developing in-house stress testing programs. The packages may be expensive, especially specialised ones; however, they are well worthwhile if they cut down effort and time significantly. Risks: Off-the-shelf packages may not meet the exact requirements and actually take more time in customising, or worse, provide false data. Risk mitigation: Carefully evaluate requirements and the supplier before buying in packages. Involve the affected team members in the evaluation, since they will be the ones who know what's best for them. The newly installed system is functioning as expected without any technical glitches (wow!). However, once the new system has survived its post-implementation review, there is much work to be done. There's hardly any system I have heard of that has not required changes once installed, due to the dynamic changes in business operations. This is where continued maintenance (and business) comes in. Two major areas can be identified in the maintenance stage: All required changes to the installed system must be subjected to the full procedures of the system development life cycle. Therefore, a change-control mechanism must be implemented to ensure that the responsible management insist that each change be installed in a manner that minimizes disruption and is essentially invisible to the user of the system. The deliverables and QA activities in the maintenance are: If none of the recommended activities can be done to reduce the timescale then, simply, the job cannot be done. It is my responsibility as a team member to present the truth (however much the project manager doesn't want to hear it) so project goals can be achieved. 
It", "label": 0 }, { "main_document": "of interest, most European newspapers do proceed onto a domestic level of analysis in their evaluation of Germany's two candidates. Particularly, the Spanish press concentrates on how the SPD plans to rally the support of different interest groups. Schroeder's party satisfies the working class by guaranteeing that the German economy will "continue being productive without destroying . . . workers' rights"; it appeals to ecologists with the argument that economic development will not sacrifice ecological sensitivity, and finally vows to "peacefully resolve world conflicts", thus gaining the backing of pacifists as well. Similarly, the UK's 'Expatica' writes that the SPD's economic plans are "beloved by unions and hated by most business leaders". In France, 'Nord Eclair' examines how the election of Merkel as Germany's first female Chancellor would impact female voters. By evaluating each party's assets and shortcomings in the eyes of Germany's interest groups, these publications prove slightly more informative for the non-German European reader. EFE, "Un combativo Gerhard Schr El mundo. Internet. 16 / 09 / 2005. Accessed on 14 / 10 / 2005. Accessed at: DPA, "Schroeder fires up party for last stretch of campaign". Expatica. Internet. 31 / 08 / 2005. Accessed on 14 / 10 / 2005. Accessed at: Clauwaert, Jules, "Nord Eclair", "Revue de presse: les elections allemandes". Le Nouvel Observateur. Internet. 17 / 09 / 2005. Accessed on 14 / 10 / 2005. Accessed at: Certain newspapers widen the domestic level of analysis by comparing the CDU and SPD in terms of domestic German policy. 
Thus, both 'The Guardian' (UK) and 'El mundo' (Spain) give an overview of Schroeder's plans to cut down on unemployment benefits, and outline Merkel's goal of a "complete political change". Readers learn that, while the SPD would ensure free university education and raise the top rate of tax, for example, the CDU instead proposes to "increase VAT from 16 to 18 per cent" and to introduce tuition fees for students. In France, 'Le Nouvel Observateur' calls attention to the "tactical uncertainty" of Merkel's aims, which it sees as too vague. While carried out by a restricted sample of publications, the investigation of domestic policy proves to be a conclusive and empirical manner of contrasting the two rival parties. Indeed, a somewhat similar evaluation may be gleaned from considering the political expert's viewpoint. The Observer, "Germans buy Merkel's miracle". Internet. 04 / 09 / 2005. Guardian Unlimited. Accessed on 08 / 09 / 2005. Accessed at: The Observer, "Germans buy Merkel's miracle". Internet. 04 / 09 / 2005. Guardian Unlimited. Accessed on 08 / 09 / 2005. Accessed at: EFE, "Un combativo Gerhard Schr El mundo. Internet. 16 / 09 / 2005. Accessed on 14 / 10 / 2005. Accessed at: "Legislatives Allemandes: Angela Merkel, Chretienne-Democrate". Le Nouvel Observateur. Internet. 16 / 09 / 2005. Accessed on 14 / 10 / 2005. Accessed at: As with the individual level of analysis, the domestic scrutiny of the CDU and SPD is rooted in political theory. By considering the formal institutions and ideologies that Merkel and Schroeder represent, the media could be seen to", "label": 0 }, { "main_document": "increase of eggshell index since 1905, levelling off in 1930 at 1.5. This was followed by a sharp decrease from 1930, reaching its nadir, 1.2, in 1965. The eggshell index started to increase after the series of bans on DDT from 1965 onwards. The index finally returned to its pre-war level, 1.4, in 1990. There was a drop in 1995 and the index became 1.3 in 2000. 
This may again be because the eggs were collected in a more polluted area, and I suggest that this is only a fluctuation. The relationship between the degree of DDT contamination and eggshell thinning is clear. The correlation is negative: the higher the degree of DDT contamination, the greater the degree of eggshell thinning, and thus the smaller the eggshell index.", "label": 0 }, { "main_document": "Tisch and Wallace The next section will focus on aid as an instrument for reducing poverty, its effectiveness and its capacity to alleviate the pernicious impact of economic globalisation on world poverty. The chapter will emphasise the difficulty of measuring the impact of aid, the purpose of aid giving and the history of aid as overlapping to a certain extent with the history of globalisation and neo-liberalism. The following chapter considers foreign aid as an essential redistribution mechanism in establishing a new global order. Aid, as defined by the Organisation for Economic Co-operation and Development (OECD), represents the flows of resources provided by official agencies or governments with the purpose of promoting economic development and welfare in the recipient countries. Aid must be concessional in nature, with an interest rate below that of the market, and it must contain a grant element of at least 25% of the total package. How effective is aid in alleviating the damaging impact of economic globalisation on world poverty described in the above section? The chapter shows that foreign aid is a complement of globalisation rather than a beneficial supplement to it. Generally, most foreign aid has been bilateral aid, and its origin can be traced back to the Cold War times of ideological confrontation between the United States of America and the Soviet Union. 
Consequently, its starting point reveals its nature as a foreign policy instrument with clear political objectives for the donor country, namely, 'buying' loyalty by pushing a recipient to embrace a certain ideological stance and enter a certain camp. However, there are authors who believe that humanitarian and moral concerns have actually led to aid giving. Lumsdaine, for instance, points to the fact that generally the supporters of aid have been leftists concerned with reducing inequalities, social welfare advocates, and NGO activists concerned with helping the Third World. It was precisely those who had strategic, commercial or selfish considerations in mind who were actually against aid. The right generally perceives aid as a kind of welfare state policy, which created dependency and bureaucracy. Lumsdaine's conclusion is that "national interest justifies a general commitment to making the world a better place" World equity and welfare is also in the interests of the Western world. David Halloran Lumsdaine, Olav Stokke, "Foreign Aid: What Now" in Olav Stokke (ed.) A brief look at the history of foreign aid may offer good evidence for the basic argument of the present paper, namely that foreign aid's contribution to poverty reduction is in fact perpetuating globalisation's harmful impact, since they both operate with the same standard neo-liberal methods. First, during the 1950s - 1960s foreign aid had macro-economic objectives: economic growth was paramount because of the assumption that its benefits would certainly 'trickle down' to combat poverty. Moreover, it was believed that inequality should not be a policy concern, since economic growth would reduce the numbers of persons living in poverty. The need for social services, the transfer of resources and knowledge, and education and training became urgent. 
Rolph van der Hoeven, "Poverty and Structural Adjustment: Some", "label": 0 }, { "main_document": "Multinational corporations (MNCs), sometimes called transnational corporations (TNCs), are a very important feature of the modern, globalised economy. MNCs are corporations that have direct ownership of operations overseas in terms of Foreign Direct Investment (FDI) In a broader perspective, these are companies that have the power to manage operations in more than one country, even if they do not own them. In fact, MNCs generally do not own such assets, but co-ordinate and control operations through licensing, joint-ventures, franchising, sub-contracting and strategic alliances. Foreign direct investment - "the ownership of control of 10 percent or more of an enterprise's voting securities." (US government's definition, see also investopedia.com/terms) The emergence of MNCs is often regarded as the main driving force in the internationalisation of the world economy. This means that their significance in the global economy is particularly evident in the substantial expansion in the number of MNCs, especially through the FDI mechanism. Worldwide, FDI has been growing three times as fast as total investment (see Appendix 1.0) for the past 20 years. Moreover, some MNCs' turnovers are almost as large as those of some countries. To illustrate further, the turnover of Exxon Mobil accounts for $63 billion of net worth whereas Pakistan accounts for only $62 billion of net worth (Year 2000 Figures). As such, it can be concluded that, on the basis of these figures, Exxon Mobil is more productive and more efficiently run than Pakistan. MNCs set up subsidiaries abroad for various reasons and in many different ways. The motivations for going multinational can be narrowed down to three main aspects: resource seeking, market seeking and strategic advantages. 
However, before going through it in greater detail, it is important to note that the transnationalisation of a firm is motivated by the aim of increasing profit, achieved either by raising revenues or by reducing costs. As mentioned above, one of the motives that drove companies to invest abroad was resource seeking. This pursuit includes both expanding resource deposits and gaining access to plentiful labour at low cost. Generally, very few assets needed by a firm to produce goods and services are available everywhere. This is particularly true of the natural resource industries such as oil reserves, hard minerals and cocoa. Bearing this in mind, it is unsurprising that companies like Anaconda Copper, International Nickel and Standard Oil emerged as the first few multinationals to establish production facilities overseas. However, technological changes in production and in transportation have tempered the importance of geography, e.g. the location of natural resources, as an explanation of why firms turn multinational. It is arguable that labour would be a more important factor in companies becoming multinational, simply because of the potentially advantageous variations that exist in labour knowledge and skills, in wage costs and in labour productivity. For instance, multinationals are often accused of exporting jobs to low-wage countries. This may be true in some industries, such as textiles and electronics. To illustrate further, Nike has resorted to locating its production in countries like Indonesia and China to take advantage of their cheap labour.", "label": 0 }, { "main_document": "anomalous, or more likely to be suspect than others, so that they can then be investigated in more detail. One can consider the objective of this statistical analysis as being to allocate a suspicion score. Suspicion scores can be given for each record held in the database and these scores can be constantly reassessed. 
These scores can then be ranked, and investigation can focus on those with the highest scores or on those that show a sudden increase. It is too expensive to undertake a detailed investigation of all records, so for cost-effectiveness investigation is concentrated on those transactions considered most likely to be fraudulent. The diversity of fraudulent activity, as indicated by the numerous types of fraud, makes the detection of fraudulent behaviour an important task. At most banks, to identify potential fraud at source, part of the review process of applications for new credit cards involves routine information checks. (A number of fraudulent applications submitted by a Nigerian fraud cartel have been identified in this way.) Following issue, however, most banks depend upon periodic scrutiny of account behaviour to assess if there is suspicion of fraud. In particular, banks have devised a series of rule-based checks against which all portfolio activity is reviewed. Such checks could specify a ceiling on the number of transactions that should reasonably be expected to occur in a single day. Thus, an excessive transaction report might also be utilised to count the number of transactions above some threshold on purchase amount. These guidelines have been developed as a result of analyses of historical fraudulent behaviour in the portfolio. However, most banks tend to use only the most basic of statistical analyses to develop these guidelines, meaning that in most cases the rule sets consist of a set of simple threshold conditions on account variables. It follows that the use of more sophisticated technologies for fraud detection can result in dramatically improved results and detection. In particular, when pattern recognition is utilised, the problem of fraud detection is an obvious application for an appropriately chosen neural network solution. 
Some of the techniques used for fraud detection are as follows. An outlier is an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism. An unsupervised learning approach is employed in this model. Usually, the result of unsupervised learning is a new explanation or representation of the observation data, which will then lead to improved future responses or decisions. Unsupervised methods do not utilise prior knowledge of fraudulent and non-fraudulent transactions in an historical database, but rather identify changes in behaviour or unusual transactions. These detection methods utilise a baseline distribution that indicates normal behaviour and then identify observed cases that show the highest variance from this norm. Outliers are a basic form of non-standard observation that can be used for fraud identification. In supervised detection methods, a system is in place to discriminate between fraudulent and non-fraudulent behaviour so that new observations can be classified. Supervised methods require accurate identification of fraudulent transactions in historical databases and are", "label": 0 }, { "main_document": "This practical used the sweep netting technique to collect species in four different habitats, and then the Shannon diversity index was used to calculate the diversity of each site. The results showed that all habitats were slightly different in diversity levels due to abiotic and biotic factors, and other tests should be performed on various other habitats. The aim of this practical was to measure the species diversity of four different habitats, and see if there is any difference in species diversity between the sites due to their different habitats. I believe there will be more diversity in site four because there appeared to be more edible vegetation and less inedible vegetation, for example gorse. It also seemed to be the site with the greatest mixture of plants. 
It also had long grass for cover and yet allowed some sun through for warmth. The field containing the four sites was situated in the Rennes field centre in Brittany, on the edge of the foret de Paimpont. The reason for choosing a field here is that it is undisturbed; also, there is more diversity in France than in Britain because, owing to continental drift many years ago, many species did not make it to Britain. This particular field was chosen because it had many different kinds of habitat, yet they were not too far apart, so other factors, for example the weather, would not affect the species diversity. To begin with, we chose four sites in our field by selecting four habitats remarkably different from one another, so that we got as much variation as possible. Then, in order to catch the insects, we used sweep nets to sweep up the insects, and then used pooters to suck the trapped insects out of the sweep nets and into test tubes, for examination and identification back in the labs. For each site, notes and pictures of the vegetation were taken in order to help understand why particular species are found in those particular vegetation types. To make all collections of insects fair, each site was sweep netted four times for the same amount of time. After this, back in the labs, we identified the insects to order and then to family, separately for each site. We then used this to calculate the diversity indices using the Shannon-Wiener index (H = -Σ pi ln pi). Equitability is calculated using the equation Hmax = ln F (F = the number of families), and then this value is put into the equation H / Hmax to work out the equitability index, which has a maximum of one. Refer to printed excel spreadsheets. Our results show that sites one and two are fairly similar in equitability (both 0.87), meaning they are of similar diversity. 
Site three, however, is considerably lower (0.67), meaning that the species on this site are less evenly distributed and that it is therefore less diverse than sites one, two and four. Site four, on the other hand, has a higher equitability than all the other sites (0.90) and therefore has a", "label": 1 }, { "main_document": "in this model is the Microsoft SQL Server, which is a comprehensive database platform providing enterprise-class data management with integrated business intelligence (BI) tools. The SQL Server database engine provides more secure, reliable storage for both relational and structured data, enabling users to build and manage highly available, high-performance data applications. The database design is discussed in detail in Section 4. The new transactional model has been designed to overcome the weaknesses present in the current model. A lot of effort has been put into selecting one of the best possible solutions to the problem. The new model aims to offer a transactional model with strong authentication and fraud detection capability, alongside the implementation of security features that make it least vulnerable to any sort of attack. The new authorisation process includes more steps compared to the one already present. It was essential to introduce more stages in the authorisation phase to achieve all the objectives.", "label": 0 }, { "main_document": "to PEmax based on weight, BMP, FEV1 and RV using multiple linear regression. The suggested relationship is This suggests that PEmax increases with weight, a patient's forced expiratory volume as a percentage of Forced Vital Capacity, and a patient's residual volume, but decreases with their body mass as a percentage of the age-specific median. The model appears to be a reasonable one, explaining a fair portion of the data and not obviously breaking any assumptions of the multiple linear regression process. 
While it will be useful as a guide to the general level of a patient's PEmax, it will not be that useful for predicting PEmax precisely. Since the R-squared value of the regression is still relatively low, a large amount of the variation in the data is not explained by the model. There are probably several other variables important to predicting a patient's maximal static expiratory pressure that were not included in the data and are therefore not part of the model. There are also concerns over whether the variables used in the regression are normally distributed. Transformations helped somewhat in this regard, but the model's trustworthiness is limited while the distribution of these variables remains in doubt. Additionally, the relatively small sample size and concerns over the reliability of some of the measurements mean that the model may not be a particularly good one. The model could be substantially improved by the inclusion of additional data. A larger sample size would allow more in-depth analysis and make the results of the regression more reliable. In particular, inclusion of some older subjects would be helpful, as it would allow the model to be applied to a wider range of cystic fibrosis patients, among whom the relationships between the variables are likely to be different. Age was not included in the model (mainly due to the high correlation between age and height/weight in the age range considered), but if a wider age group were used it would be likely to have an effect, as the lungs become damaged over time by continual infections. Similarly, repeated measurements of the lung volumes would also be useful to determine exactly how reliable these variables are.
In terms of additional variables, a measure of the number of lung infections the subject has suffered from would be quite useful to have but is likely to be difficult to quantify.", "label": 1 }, { "main_document": "advantage that everybody is following the same fundamental principles. This helps to create a universal moral attitude towards situations, which in turn benefits the public, and directs engineers on how they should conduct research. Although these codes help, they do not necessarily denote the exact way in which a situation should be handled. Arthur E Schwartz from the NSPE General Counsel remarks that, \"Except in the most basic circumstances, codes of ethics do not provide 'answers' or 'solutions' as such to ethical dilemmas faced by engineers, but they do provide guideposts.\" This quote is useful to summarise the relevance of the codes with reference to the statement, as it describes how the codes are important, but to do not always dictate the exact way in which a circumstance should be approached.", "label": 1 }, { "main_document": "The whole process of presenting Mulan, from choosing a story to tell to finalizing our product on stage, is like a journey of treasure hunt to me. The more I look back on the journey, the more I grow fond of being a storyteller. To be a better storyteller, each review right after the performance is indispensable and invaluable as well. Therefore, the aims of the following reflection are, first, to deepen my understanding of storytelling and second, clarify some important points of presenting Mulan on stage. This reflection will be divided into three parts: why Mulan was chosen as a story to tell, how she was characterised and what can be done to make the presentation better. Women consist of half of the world's population. To glorify the importance of his opposite sex in China, Chairman Mao also said, \"Women hold up half of the sky\". 
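The kind of multiple linear regression and R-squared calculation discussed above can be sketched minimally as below. The predictor values and PEmax figures are invented placeholders, not the study's measurements, and the four columns merely stand in for weight, BMP, FEV1 and RV.

```python
import numpy as np

# hypothetical predictor matrix (weight, BMP, FEV1, RV) and PEmax values;
# illustrative numbers only, not the study's data
X = np.array([[45.0,  95.0, 30.0, 180.0],
              [50.0, 100.0, 28.0, 200.0],
              [60.0,  90.0, 35.0, 160.0],
              [55.0, 105.0, 25.0, 220.0],
              [70.0,  85.0, 40.0, 150.0],
              [65.0,  98.0, 33.0, 190.0]])
y = np.array([95.0, 100.0, 120.0, 90.0, 140.0, 115.0])

# add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R-squared: the proportion of the variance in y explained by the model;
# a low value means much variation remains unexplained, as noted above
resid = y - A @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"R-squared: {r2:.3f}")
```

With a real dataset of this kind, one would also examine residual plots and the distribution of each variable before trusting the fit, echoing the normality concerns raised in the text.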
Ironically, beyond the praise lies the reality that it is men who have dominated in most places and for most of history. The proof is easily found in the written records, be it in the East or the West. There has always been a tendency to marginalize or belittle women in history. As Hourihan points out in her book, for instance, women are few in the hero myth and "most of those few function only in the domestic sphere" (1997, p. 156). She also argues that the records of the hero story can be seen as a "conscious campaign to marginalize women" (1997, p. 159). Only when a woman engages in public affairs, usually men's domain, can her words or deeds leave a mark in history. In many stories, women are depicted with negative character traits. Most of them are wicked stepmothers, evil witches, undutiful daughters, or bad-tempered princesses. Tatar refers to these female characters as "disagreeable heroines" (1992, p. 98) in whom seven typical sins can be found: disobedience, stubbornness, infidelity, arrogance, curiosity, laziness and gluttony. These prevailing stereotypes of women in fairy tales and folklore serve a vital function of social conditioning. This is by no means uncommon in traditional Chinese stories. Most female characters in Chinese folktales and legends are either evil spirits leading men astray or pathetic victims merely succumbing to their destiny. Disproportionately, the stories of Chinese heroines can be counted on one's fingers. Thus, after reading such sexually biased stories, boys may reinforce their stereotypes about the opposite sex to a further degree. What is worse, girl readers may unconsciously or subconsciously internalize the fixed image that women are secondary or inferior to men. One of our purposes in doing Mulan is to counter that negative effect on both male and female readers. Concerning the meaning of a story, this should be the core issue worthy of a storyteller's attentive consideration.
According to Cassady, a story \"should help the listeners in some way to appreciate life, to understand a particular facet of living, and to rejoice in life's richness\" (1990, p. 46). Lavender also claims that myth, legend and lore can", "label": 0 }, { "main_document": "Relations between Britain and America deteriorated so badly in the eighteenth century that the American Declaration of Independence was made in 1776 to cut their country off completely from the island that had ruled them since its birth. The main reason for this turn of events can be attributed to Britain's growing attempt to stifle America's growth through a series of trade acts and economic regulations which highly favoured Britain. This type of control, without any consultation with the American officials, ended up creating an anti-British sentiment which was added to by a variety of factors which appeared during that century. Even as early as the 1730's Britain had passed a series of laws which put American traders at a disadvantage, ranging from prohibiting the exportation of colonial made felt or hats to a law decreeing in 1736 that all sails made in the colonies had to be made of British cloth. In addition to this Britain ruled in 1732 that the colonies were forbidden to levy import taxes on British goods thus stripping the Americans of that avenue to raise income. Virginia and Maryland were hit hard by restrictions placed on tobacco exportation and it has been estimated that this cost tobacco planters about However it was not actually until later on in the century that this type of action by Britain really angered the colonies and provoked any sense of organised resistance. Bernhard Knollenberg, Britain waited until the 1760's to begin policies which were intended to be extremely beneficial to the English but at the same time often highly detrimental to the Americans. 
The main reason for this was that Britain had been involved in the Seven Years War and had successfully gained Florida from the Spanish and land in the north and west from France. However, Britain had amassed a huge debt in the process and looked to its colonies to collect extra revenue, as taxes in Britain were already fairly high. It was at this time that the British Crown and ministry sought to "centralize the administration of the colonies and to intensify the efficiency with which they were exploited" (Herbert Aptheker). In fact it was not until the Proclamation of 1763 that the Acts which started to be passed were having a strong enough effect on the people for them to voice criticism publicly. The Crown decided to defer opening up the land west of the Appalachian Mountains for settlers, and this decision was one of the first to become "a source of acute discontent in several of the North American colonies" (Knollenberg; Anthony McFarlane). This was followed only the year after by a law known as the Sugar Act, which vigorously enforced British rules of trade, renewed duties on molasses and sugar and placed new duties on wine, silk, indigo, coffee and non-British textiles. This Act angered large sections of colonial society, in particular rum distillers who feared that it would "destroy their industry" and coastal shippers "whose trade was impeded by new customs regulations associated with the Act". The Act caused such
However, there are problems with using slope to measure responsiveness. We cannot compare two demand curves simply by their slopes, because the slope of a demand curve depends on the units in which we measure price and quantity. We also need to compare the demand curves for various goods, but different goods have different units. On the other hand, prices vary greatly. As a result, we have to use another way to measure responsiveness, and elasticity is that measure. The price elasticity of demand is the percentage change in quantity demanded divided by the percentage change in price; it tells us the percentage change in quantity that takes place when price changes by one percent. The price elasticity of demand tells us how responsive, or elastic, the demand is. The price elasticity is a negative number, because an increase in price results in a decrease in quantity demanded. To compare elasticities, we can use the magnitude of the price elasticity of demand and ignore the minus sign. We have to collect information on the amount of different types of food sold at different prices (e.g. before and after a sale) before we calculate the elasticity. To calculate the elasticity of demand, we express the changes in price and quantity demanded as percentages of the average price and the average quantity. By using the average price and average quantity, we calculate the elasticity at a point on the demand curve midway between the original point and the new point. This method is called arc elasticity. It is an average over a range of prices and gives an estimate of the price elasticity of demand between two points on the demand curve. Another, more usual, way to calculate elasticity is called point elasticity. Instead of using a mean value, we use the value at a point. These two methods are used to calculate elasticity with a linear demand curve. There are different types of elasticity along a linear demand curve: if the quantity demanded remains constant when the price changes, then the elasticity of demand is zero and demand is said to be perfectly inelastic.
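The arc (midpoint) method described above can be sketched as a short function; the prices and quantities used are invented figures, not data from the text.

```python
def arc_elasticity(p1, q1, p2, q2):
    """Midpoint (arc) price elasticity of demand: the percentage
    change in quantity divided by the percentage change in price,
    each taken relative to the average of the two points."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

# hypothetical sale: price falls from 10 to 8, quantity rises from 40 to 60
e = arc_elasticity(10, 40, 8, 60)
print(round(e, 2))  # prints -1.8
```

The result is negative, as quantity and price move in opposite directions; its magnitude (1.8, greater than one) would mark this demand as elastic over that range.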
If the percentage change in the quantity demanded is less than the percentage change in price, then the magnitude of the elasticity of demand is between zero and one and demand is said to be inelastic. If the percentage change in the quantity demanded equals the percentage change in price, then the elasticity of demand equals one and demand is said to be unit elastic. If the percentage change in the quantity demanded exceeds the percentage change in price, then the magnitude of the elasticity of demand is greater than
For instance, they have to be able to delegate and relinquish control to others. Signifo is going through a phase that is typical for companies of its size, namely making the transition from a family/small-team business to a serious enterprise. Some entrepreneurs defer this step for too long, as can be seen from the PCD Malton case. The founders of Signifo, however, seem motivated and capable enough to pursue a more innovative strategy. The only question is how to go about it. First of all, they should try to defend the small-business market as a steady source of income. There is a danger of neglecting loyal customers and thus alienating them. They should not even notice that they are becoming less significant. At this stage of the firm's development, some structural changes are necessary. My recommendation would be a staged approach in order to ease the transition and to minimize the risk. It would be a good idea to work with distributors on a commission basis to begin with. This way, the company has more time to train a sales team of its own, which is unavoidable in the medium term. This concept works well in conjunction with the product licensing scheme proposed by management, because companies experienced in the potential markets complement Signifo's strengths. They also have more reliable marketing and distribution channels. Consequently, finding a good partner is one of the first steps to undertake. Depending on the financial capabilities of the company, the imminent steps could be financed entirely by the company's house bank and its own reserves. It is not necessary to look for venture capital at this point. That would only be necessary for the riskier alternative, as investments in sales and support can be expected to
The result expected in this instance would have been activation by interaction of the p53 protein. P-elements are of high biological interest due to their horizontal transfer from various species to They also have a large number of technical applications that make them a very useful tool in manipulating the genome of Drosophila. Drosophila has been used as a model organism for almost a century because it is easy and cheap to handle, well understood, has a short life cycle of about two weeks, and its entire genome is now sequenced. It has four pairs of chromosomes: one XY pair and three autosome pairs (2, 3 and 4). We will be looking at pairs 1, 2 and 3, as the fourth is so tiny. Transposition occurs when a donor element (P-element) is excised from the donor site and reinserted into a recipient site, creating an exact replica. Transposase is encoded by P-elements and binds to subterminal regions at both ends of the element and represses transcription. Transposition is thought to be an important factor in evolution, owing to the mutations it causes and the selective benefits these may have conferred on the species. In this experiment P-elements and Drosophila are used to study different genetic techniques. All materials and processes were used as explained in the manual, with the following details. The following flies were crossed: They both carry the white mutation, and the males also carry a transposon marker with a stubble mutation. The females have a P-element. The P-element used is as shown in Figure 1. The following flies were crossed: Here both the males and females have a Y chromosome, demonstrating that the presence of a Y chromosome does not determine a male phenotype. Instead, in Drosophila, sex is determined by the ratio of X chromosomes to autosomes, and in this example the female has almost two complete X chromosomes.
The males here are wild type, and the females carry 8 insertions of a P-element marked by a mini-white+ gene mutation, and carrying a The females also have an attached-X chromosome, which is nearly two complete copies of the X chromosome. The P-element used is as shown in Figure 2. The following flies were crossed: The P-element used is that shown in Figure 1. The following flies were crossed: The males have UAS- The females have a strong ubiquitous eye-specific GAL4 driver, with the mini-white mutation. The P-element used on the male is shown in Figure 3. Male offspring with both the transposon and the P-element will have the genotype: Males with curly wings and red eyes, and the following genotype, are saved for experiment 3. From cross 1, four red-eyed, straight-winged males with stubble were obtained. Some of the flies were observed to have mosaic eyes,
Ben Hur and Quo Vadis contrast the brutal Romans with the saintly Christians, but Satyricon avoids such matters. Even though the film is set around the period of Nero, the Christian question is never addressed. It could be argued that Eumolpo in the gallery scene is a saintly figure. He wears a simple toga that resembles that of a priest and he preaches on how to lead a better life. He guides others to be humble, to ignore the temptations of wealth and sex, and to develop their skills in the arts. He does not, however, advise us on faith but on philosophical and artistic matters. He argues against those who have ignored poetic thought and discussion in favour of less intellectual pursuits. In this sense, he appears to resemble a Greek philosopher rather than a Christian priest. Male relationships are a clear focus within the film. In Spartacus a homosexual relationship was hinted at in the 'oysters and snails' scene. In contrast to this, Satyricon embraces homosexual relationships fully and does not try to hide them behind the dialogue. Unlike in Spartacus, it is not just the bad characters who are involved in sexually deviant relationships, but the majority of the cast. Encolpio and Ascilto may not be viewed as typical males, but they are not cruel or nasty. They engage in the same sort of homosexual relationships as the undesirable Lichas. Homosexuality is not used to characterise or define a character; it is merely a preference. This is a better depiction of the past than that shown by previous films. It can be argued that the Greeks and Romans had no definition for homosexuality, and so did not feel that it existed. It is likely that they viewed bisexuality as the norm, and I feel that this can be shown by Suetonius, who commented on how Claudius
Afterward, 2cm Meanwhile, a portion of fresh fish meat and a portion of older fish meat were placed in the oven overnight for determination of the percentage of water. Whole fresh fish and older fish were used to determine the proportion of protein comparatively. The raw meat from fresh fish and from two-week-old fish was sampled, and the Kjeldahl digestion method was employed. An electric balance was used to determine the weight of each sample, and the value was recorded to 0.001 mg. Equations used in calculating the TVN and the TMA are shown below: In these equations, V W (g) means the water content in fish samples. For sensory assessment of fish quality, a scale based on the Torry Advisory scheme was used. The fish were assessed by smelling, touching and observing carefully, then compared with the descriptions in the table and marked accordingly. Finally, the results for fresh fish and older fish were compared and a conclusion drawn. In the sensory assessment of fish quality, four grades were used to describe the different characteristics: E, A, B and not graded. These grades describe fish that is excellent, acceptable, not fresh, and unfit for consumption, respectively. Results for In this study, however, the colour of the gills in the fresh fish was slightly bleached. This was probably because excess water in the fish came out and diluted the blood in the gills. In contrast, for All these significant changes can basically be explained by reducing water content and protein breaking down during spoilage. Water loss resulted in many observable changes: from convex cornea to sunken cornea; from shining skin to shrinkage; from thin to thick outer slime. Examples of protein breakdown were the changes in the cornea from translucent to opaque and discoloured; pupils from black to grey; outer slime from transparent to yellow-brown; and a very unpleasant smell from volatile bases such as ammonia.
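The report's own TVN and TMA equations are not reproduced in the text above; as a hedged illustration only, a commonly used textbook form of the total volatile nitrogen calculation from a titration is sketched below. The function name, variable names and figures are assumptions, not the practical's actual equation or values.

```python
def tvn_mg_per_100g(titre_ml, acid_molarity, sample_g):
    """Total volatile nitrogen (mg N per 100 g of fish), assuming each
    mole of acid neutralised corresponds to one mole of nitrogen
    (atomic mass ~14 g/mol). A common textbook form, not necessarily
    the exact equation used in this practical."""
    moles_acid = titre_ml / 1000 * acid_molarity     # L x mol/L
    mg_nitrogen = moles_acid * 14.0 * 1000           # g -> mg
    return mg_nitrogen * 100 / sample_g              # scale to 100 g

# illustrative figures only: 5.0 ml of 0.01 M acid, 10 g sample
print(round(tvn_mg_per_100g(titre_ml=5.0, acid_molarity=0.01, sample_g=10.0), 2))
```

Spoiled fish gives a larger titre and hence a larger TVN figure, which is why the chemical method tracks the sensory grades.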
When examining whether or not the gills and peritoneum were easily torn from the flesh, those of the older fish tore much more easily than those of the fresh fish. This outcome can be explained by the breakdown of the myofibrillar proteins, which allowed the tissue to be separated easily. This result agrees well with published reports: in fish, rigor takes place faster than in land animals, and also terminates faster. This can probably be explained by the fact that fish have unusually watery and fibrous meat. Moreover, softening of the meat hastens penetration, which leads to faster decomposition (Taneko, 1981). In other words, once the fish starts to spoil, under the same conditions the process accelerates inside the tissue. In the second practical, chemical methods were used for evaluation of fish spoilage. As mentioned before, the results for fresh fish and older fish are shown in Table 1 below: From the figures for These respective results can be explained by the process of spoilage, bacterial and enzymic action, which results in the production of
Thus although the government made new policies of production and had the ideas of creating new factories, it was the Russian people who fulfilled these policies, which was the key to winning the war. The motivation of the Russian people is important to consider. Many historians believe that the cult of Stalin's personality was vital in this. Some historians consider that the Soviet Union won against Germany because of Stalin. 'Some credit for the Soviet Union's ultimate victory undoubtedly belongs to Stalin. He learnt from his mistakes....he demoted incompetent cronies from the Civil War period....he gradually mastered complex areas of strategy and logic." Not only was he a great patriotic leader, but his industrialisation and rearmament policies were also seen as successful. However, others suggest that 'perhaps, if a different policy had been followed, the Germans would not have got as far as Stalingrad." Nevertheless, Stalin's ability to raise morale and instil nationalism should not be underestimated. He always appeared in propaganda in an officer's uniform and was seen as the protector of Russia, a symbol of ultimate power. He was vital to winning the war because he raised patriotism and was seen as a father figure to the nation. In the Great Patriotic War, socialism and communism were forgotten. The state emphasised that it was a fight for Russia as a nation. 'It is necessary to give the party organs who carried out their ideological work their due. They reacted quickly to the mood of the people." Stalin appealed back to the great Princes, such as Alexander Nevsky, and great leaders, such as Lenin, to foster a sense of patriotism in Russia. Also, people genuinely wanted to believe that there was hope of victory; Stalin gave them this hope.
'During the war years, as the Soviet people were battered by unbelievable miseries, the name of Stalin and faith in him to some degree pulled the Soviet people together, giving them hope of victory." The cult of Stalin's personality may help to explain how the population survived the horrific invasion of Operation Barbarossa, then endured the hardships of war and came out victorious. Popular belief in Stalin would have made people work harder, fight more bravely and obey orders. Support for Stalin is believed to have been greater amongst the armed forces over
The entity contains the attributes room type and price. The room type attribute is considered to be the primary key of the entity, and the price attribute is a foreign key from the bill entity, its parent entity. After the implementation of the new system, it was time for the group to put it to the test. The report now aims to demonstrate the effectiveness of the new database system for the reception desk area of the hotel with the help of a few screenshots of the new-look menu. To test the system to its full capabilities, the group decided to enter data through a form that has child or dependent entities. The form chosen was for the customer information entity. The screenshot of the customer information screen is shown below in Figure 13, which is followed by a multiple-table query. In the above figure, a screenshot of the customer information entity, the data corresponding to the attributes are filled in. The customer information entity was chosen as an example because it had the largest number of dependencies on parent entities: both the room type and room number attributes were drawn from the room type and room entities respectively. As shown in the figure, the room type attribute had a drop-down combo box to ease input for the employees. Figure 14 shows a multiple-table query wherein the group decided
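The customer / room / room-type relationship described above can be sketched as DDL plus a multiple-table join. The table and column names below are assumptions reconstructed from the description, not the group's actual schema, and the sketch is run through Python's sqlite3 module for self-containment.

```python
import sqlite3

# in-memory sketch of the design described above; names and types
# are assumptions, not the group's actual schema
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE room_type (
    room_type TEXT PRIMARY KEY,   -- e.g. 'single', 'double'
    price     REAL NOT NULL
);
CREATE TABLE room (
    room_number     INTEGER PRIMARY KEY,
    room_type       TEXT NOT NULL REFERENCES room_type(room_type),
    cleaning_status TEXT NOT NULL
);
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    contact     TEXT,
    -- the foreign key lets a customer's room change without
    -- touching the rest of their details, as the text explains
    room_number INTEGER REFERENCES room(room_number)
);
""")
conn.execute("INSERT INTO room_type VALUES ('double', 80.0)")
conn.execute("INSERT INTO room VALUES (101, 'double', 'clean')")
conn.execute("INSERT INTO customer VALUES (1, 'A. Guest', 'x@example.com', 101)")

# a multiple-table query of the kind Figure 14 describes
row = conn.execute("""
    SELECT c.name, r.room_number, t.price
    FROM customer c
    JOIN room r ON c.room_number = r.room_number
    JOIN room_type t ON r.room_type = t.room_type
""").fetchone()
print(row)
```

Separating room and room type in this way means a room's price or status can be changed in one place without editing any customer record.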
VisitBritain (2003) found that the reduction in overseas visits has become one of the biggest problems affecting visits to attractions, particularly in the US and European markets, followed by competition. So the Treasure Houses must find themselves a unique proposition and target a niche market: the VFR market. The marketing campaign has two target audiences: hosts, residents of England, including local residents who live within about half an hour's drive of a site, and regional residents, who live two hours' drive or more away and make day visits from home; and overseas VFRs from North America and Europe, aged 25-34, VF/VR/VFVR, for whom VFR is not the sole purpose of travel but a travel activity. The objectives are: to build an image of the Houses as treasures of England and make local residents proud of them; to increase the frequency of local visits, by 10% in tourist numbers within one year; to increase visits by hosts and VFRs to different Treasure Houses through the Treasure Houses of England joint promotion programme; to improve communication with overseas visitors via the Internet and motivate their VFR trips; and to build a database of local residents and reach more overseas VFR visitors via hosts. For VFR purposes, overseas visits to England (excluding London, 42%) account for 49% of the total number in the UK (VisitBritain, 2003). Thus, the strategic objectives of the campaign are primarily to encourage visits from hosts and VFRs in England to the Treasure Houses, and at the same time to help the Treasure Houses capture this niche VFR market. Beioley (1997) recommended active promotion to local residents as a way to gain VFR visitors to an attraction, and Poria et al. (2003) pointed out that people are more likely to visit a heritage site again if their emotional experiences were involved. Thus, in this campaign, the key is to create emotional ties between attractions and visitors.
The strategy is twofold: to enhance the emotional bond between hosts and the Treasure Houses, and to create emotional ties between VFR visitors and the Treasure Houses through the hosts. "From Home to Home" has two meanings: to hosts, it means being advocates of their local houses and seeing them as their own home; to VFRs, it means leaving home for a home in England, and then visiting the 'home of the great families' - the Treasure Houses of England. It lends a warm perception to the VFR trip and visit. With the theme 'from home to home', marketing activities will be conducted across product, pricing, distribution and the communication mix. Product is the visitor experience, and it starts from the impression formed before the visit (Johns and
Finally, regiocentrism falls between ethnocentrism and polycentrism, adopting a more global approach than the former whilst recognizing that culture and consumer needs may require a regionally-focused perspective. Considering all four perspectives, and that no company totally embodies one centricity profile (Bowie & Buttle, 2004), Saint Fusion's approach will manifest regiocentric tendencies. The company recognizes that the Korean market is becoming increasingly westernized (see Business Environment, Analysis of Demand and Analysis of Supply), a tendency with no signs of slowing down; however, Korea still has a very homogenized population, with strong emphasis on the national language and Confucian culture (predominating in the region) - appendix 2, therefore suggesting an adaptation of marketing strategies and procedures to better meet customer needs. Nonetheless, once the company has successfully established itself in the region, it will consider the incorporation of increasingly geocentric attitudes (Usunier & Lee, 2005). Regiocentricity will also be reflected in the product, where fusion bakery will be produced with Asian ingredients. It is important to note that whereas in the UK basic ingredients are locally produced (appendix 1), in Korea these are imported (appendix 4); 'local produce' therefore refers to all the secondary ingredients. Contrasting with its competitors' positioning and targeting (appendix 4), Saint Fusion places itself in the niche business market; thus, customized marketing (Kotler et al, 2003) will be meshed with subjective positioning (Bowie & Buttle, 2004), where innovation, excellence, style, fusion and aspiration will be reflected in the brand's image (e.g. colours, lay-out and presentation of bakery - displayed at knee-high level with "alternative" furniture).
Therefore, the products' features and attributes (objective positioning), price/quality relation (premium price for premium quality) - appendix 1, and usage and class of user (see below) will be part of the positioning strategy - appendix 7 (Bowie & Buttle, 2004). Saint Fusion's dedication to the local economy will be a major differentiator, as VMSs require large-scale supply; this contrast will make the most of Confucian culture, where the whole is worth more than the individual and collectivist attitudes are still predominant (Usunier & Lee, 2005). Whereas Paris Baguette, Tous les Jours and Crown Bakery focus on providing
However, as mentioned, they were quoted on the back of the re-release, and most were archived in the on-line versions of the newspapers, still accessible to those who are interested ( It seems that Penguin's promotional efforts were more focused on author public relations and establishment, given that the title had already been received and had made an impression. The author himself is clearly the hub of Penguin's promotion campaign. He has for a long time been involved in various sectors of the media, and recognises that he has cultivated a brand with his publisher ( The promotion activities have taken place steadily since the book's publication. In February, Self toured chain bookshops in the South East, usually Borders stores, during which he signed copies of the paperback for fans. Details of these signings were broadcast and printed in local media. His reputation gets him invitations to other events, too, many outside of the publishing world. He can take these opportunities to raise awareness of his new books, and did so in February at a Curzon Cinemas film fans series ( As part of its '70 Years' celebration this May, Penguin is also launching seventy 'Pocket' paperbacks. These are an eclectic selection of small-format books, showcasing what Penguin regards as its most iconic and important authors.
Number forty in the list gives a sample of Self's work - two short stories to demonstrate his gift for 'innovative, experimental work' ( Of course, this spin-off promotion is intended to boost sales of all titles on the Penguin backlist. Penguin's website displayed the title cover in the 'new releases' feature of its homepage throughout February, and also contains a brief blurb about the product and author ( A navigable and simply designed site, I feel it worked well as a marketing tool, allowing consumers to inform themselves about the book, and also acting as Penguin's distribution channel by giving them the ability to buy online via
Along with the inability to book a transfer in Phuket, another difficulty appeared when looking for the hotel in Bangkok, the Tawana Ramada. Despite the great search facilities offered on the Web site, they might in fact disappoint the customer. When the name Tawana Ramada was entered into the advanced hotel search, the hotel was not found due to a misspelling: Tawana Ramada is spelled as Tawanda Ramada (see Web site 13 in the appendices). The hotel was finally found with the aid of the A-Z listing. There was also a problem with the phone number when completing the booking (see Web site 10 in the appendix). Etihad Airways - return international flights. This site is very pleasant, customer-friendly and easy to navigate. It confirms the theory that design should be clear, accurate, detailed and relevant; from a customer's point of view, most of these aspects are met. Thai Airways - return internal flights. Similarly to the previously discussed site, this one is very customer-friendly, and the experience can be described as enjoyable. Rich and relevant information, a quick registration process and easy navigation are all good features for a customer. Amari Coral Beach Resort - ten nights' accommodation. There was slight confusion when searching for the Web site of the Amari Coral Beach Resort, as two different Web sites were found, neither of which seems to be the hotel's own site, since each is reached from a 'phuket' or an 'amari' site. The first one (above) has been evaluated; however, the existence of the second is acknowledged. The prices offered on the two sites match; however, the layout and design are completely different. The question was raised whether this is a case of 'cyber-squatting'; nonetheless, there was no mistyping or misspelling, and research confirmed that both sites are valid.
The first Web site is more appealing in terms of design, and the logo in the background gives a safer feeling to the customer, while the amari.com/coral
The President confirmed Gold rumours as fact on 5 Those first to arrive had travelled via the Isthmus of Panama, reaching San Francisco on 28 Such was the volume of travellers toward California in 1849 that historians now refer to those making the trip as 49ers. Such was the tide of migrants from the Eastern states that Rodman W. Paul terms it 'the stampede of 1849 and the subsequent years." As Malcolm Rohrbough puts it, 'Marshall's Gold discoveries launched a thousand ships and hitched a thousand prairie schooners." By 1850, census administrators at San Francisco Bay estimated that 40,000 emigrants had travelled to the region, most of them Northeasterners. Travellers showed great diversity, as the working class mixed with the wealthy and those from Northern states mixed with those from the South. The relationships within this latter group will be assessed shortly, highlighting just how polarised the two regions had become in the years just before the Civil War. Before any talk of travel, it is important to note briefly why exactly people chose to leave the Northern states of America. Many were tired of the growth in urban life since the revolution; the demand for labour, strict timekeeping, intemperance and other such facets of an ever-growing, city-centralised society. Those in rural regions who failed to maintain their farms may have seen a new life in the West as a better alternative to urban society. Since independence, the building of a new nation rested on the idea of wealth
Lastly, emotional deviance describes the situation in which employees show the feelings they actually have, even though these are not in accordance with the display rules. Mann (2004) explained that emotional labour mainly arises with emotional dissonance. According to Hochschild (1983) there are two ways of expressing the desired emotions: through surface acting and through deep acting. When employees perform surface acting, they outwardly show the feelings that are expected of them while at the same time they might be feeling something completely different (Hochschild, 1983; Ashforth and Humphrey, 1993; Sharpe, 2005). As Ashforth and Humphrey (1993) explained, this may involve the tone of voice or the expression on the employee's face. Deep acting, in comparison, is when the employees are actually trying to feel the emotions which are expected of them (Hochschild, 1983; Ashforth and Humphrey, 1993; Sharpe, 2005). According to Grandey (2003) surface acting provides a bigger challenge for the employee, as feelings that are not felt have to be displayed. Hochschild (1983) and Pugh (2001) argued that employees can suffer from stress through extended periods of emotional labour. According to Grandey (2003) this can result in emotional exhaustion. She explained that this exhaustion is more likely to occur through extensive surface acting. Ashforth and Humphrey (1993), on the other hand, reasoned that this does not have to be the case. If employees have higher job autonomy and are able to manage their own emotions, emotional labour can even have positive consequences (Pugliesi, 1999; Wharton 1996, cited in Pugliesi, 1999). If the employees enjoy the work and identify with the role that is expected of them and with the company, accomplishing the required work will result in a positive state of mind and increase job satisfaction (Ashforth and Humphrey, 1993; Pugliesi, 1999). The service industry is a very competitive industry.
As Bryman (2004) explained, often the only element which distinguishes one company from another is the quality of service the customers experience. This quality is achieved through good customer service, which highlights the importance of the front-line employees who are the representatives of the organisation (Ashforth and Humphrey, 1993). Therefore it is essential that these employees manage their emotions well in order to offer a good service (Constanti and Gibbs, 2005; Lovelock, 1992; Pugh, 2001). According to Grandey (2003) "positive affective displays seem to be vital to quality service" (2003; p: 94). For this reason many organisations now implement display rules to guide their employees through customer contact. Disney is one company that is known to enforce particularly strict rules to ensure a uniform quality of experience for its customers. The idea of the Disney theme parks was born when Walt Disney wanted "an amusement park that grown-ups as well
Initially, most writers found the intrusion of technology into the rural landscape extremely hard to accept. Nathaniel Hawthorne's observations of his surroundings, written whilst in the woods known as the 'Sleepy Hollow' near Concord, Massachusetts, indicate a completely different point of view to Coxe's. He is clearly enjoying the natural environment - 'Indian corn, now in its most perfect growth... It is like the lap of bounteous Nature." He seems to spend hours recording the sounds and movements of the wildlife. However, a passing train suddenly disturbs the peace - 'the startling shriek of the train whistle bearing down upon him, [forces] him to acknowledge the existence of a reality to the pastoral dream." At the end of the passage, the train can no longer be seen or heard, but Hawthorne cannot completely regain the sense of calm and serenity that he had felt before the interruption of the locomotive. The reader gets a sense that this has never happened before, as if the virginity of the land has been taken by the train and can never be regained. Writers usually associated machines and technology in general with masculine aggression and nature with feminine submission. Nathaniel Hawthorne, 'Sleepy Hollow' (Massachusetts, 1844) Marx, This abrupt, unwanted interruption of the machine into nature is a common theme among many American literary texts of the eighteenth and nineteenth centuries. Marx even claims that 'it is difficult to think of a major American writer upon whom the image of the machine's sudden appearance in the landscape has not exercised its fascination'. Henry David Thoreau, the famous American author and philosopher, advocated the idea of the pastoral life and believed that it was being wiped away by the American obsession with technology. He thought that the abundance of machinery transformed men into mere objects. Ibid, p.
16. Annette Kolodny argues that women in general favoured the pastoral ideal more than men did - 'What women were apparently less willing to accept was the single-minded transformation of nature into wealth without any regard for the inherent beauty of the place." Many women felt uncomfortable watching the destruction of the Edenesque landscape. She argues that had women been in control of westward migration,
This illustration could, as Godwin argues in his commentary, be taken as simply describing that Catullus still feels ashamed of the delayed delivery of his poem to Hortalus, about which he no longer needs to feel guilty because the promised poem is already being sent. From another angle, however, it could be seen as Catullus' implicit criticism of the unreliability of women: in fact, one Roman proverb says 'One should entrust nothing to a woman or a lap', meaning that just as what has been placed on someone's lap often falls out when they have forgotten about it and carelessly rise, so a woman's mind, changeable and flighty, can keep nothing. In other words, Catullus is emphasising that friendship is not at all like an apple on a girl's lap; that is, there is a stability and eternity in masculine friendship which Love could never obtain. Godwin, p.180 Fitzgerald, p.193 Now, let us examine how Catullus illustrates that friendship in reality can take widely divergent forms and thus is not always as masculine and reliable as he describes in Poem LXV. First of all, friendship can be, just as Love can when both parties involved are faithful and loving to each other, very sweet and tender. For example, in Poem IX, Catullus seems to be overjoyed that he will soon see Veranius, a friend of his who is 'more dear than any 300,000 of his many dear friends' and has been away for a while. The list of welcomes which Catullus says he would give Veranius when he first sees him, such as embraces and kisses, could be seen as almost an overreaction to a simple friendly greeting, and would probably be more appropriate at the reunion of a long-separated couple. Similarly in the poem
As New and David suggest, 'family failures can often be put down to the inadequacy of particular individual parents', this mainly being the mother (Phoenix et al, 1991, p15). It is mothers, rather than fathers who have the responsibility to support, care for and socialise children in ways deemed acceptable by the state (Phoenix et al, 1991, p16). If this role is not met to satisfaction, the state will intervene. If mothers are seen as inadequate mothers, the state will place the children elsewhere. Both mothers and motherhood as an entity are under constant scrutiny and surveillance. In this final section, possible alternatives are explored that could challenge the dominant division of childcare. Many myths, folklore, social institutions, media, customs and religions all exalt the role of parenthood and encourage reproduction (Bartlett, 1994, p2). To question these beliefs would be to challenge years of tradition. However, this challenge is seen as necessary to transform the meaning of motherhood. One proposal is that of shared childcare. The social organisation of mothering today produces a huge amount of sexual inequality. The reorganisation of parenting thus needs to be a prime starting place to overthrow the dominant division of labour. As Chodorow illustrates, 'equal parenting would leave people of both genders with the positive capacities each has, but without the destructive extremes that these currently tend toward' (Chodorow, 1999, p218). Equal parenting would therefore remove the subjection from motherhood as the workload would be shared. Welldon (1992) however illustrates that the problem of motherhood as a source of subordination lies at the heart of society; it is society's expectations that need to be challenged. 'Our whole culture supports the idea that mothers have complete dominion over their babies [...] we neither help her nor her children, nor society in general' (Welldon, 1992, p 83). 
Glorifying motherhood in this way obscures the fact that mothers need help. It is this assumption towards motherhood that requires change. Alternatives that challenge the oppressive nature of the family and childcare are vital. Childcare assistance, career breaks and other schemes such as flexi-time are strongly recommended as strategies to enable women to pursue a career and to make childcare a privilege as opposed to a chore. These initiatives, however, fail to challenge the assumption that parenting is exclusively a woman's issue. Whether these changes can bring about real choices of identity for mothers remains an empirical question. In conclusion, it is evident that the pivotal debate in this topic is the issue of choice. Motherhood may be a destiny, or it may be a source of subordination; the conclusion depends on the stereotypes that society presents. The pressure for women to become mothers centres on the ideal of conformity. This dominant ideal subordinates women as it enables society to label those who do not conform as deviant. Men, on the other hand, have escaped such pressures and the mass of generalisations that are heaped upon women from birth. 'Every woman has the whole weight of formulated
The results obtained were almost as required, but some varied due to time delays. Care had to be taken over the type of air used in the circuits: compressed air should be dried and filtered so that it contains no impurities such as oil, dust or dirt. Pneumatic actuators can perform linear and rotary motion very easily and hence have more applications than electrical actuators. Applications of compressed air are limitless, from the optician's gentle use of low pressure to measure fluid pressure in the human eyeball, through the multiplicity of linear and rotary motions in robotic mechanisms, to the high forces required by concrete-breaking pneumatic drills. Beyond the exercises carried out, pneumatics can also be used for the operation of automatic doors, to press parts into a die, or in a stamper with a parts feeder and ejection mechanism. The objectives were to:
- Understand the basic principles of pneumatics, its limitations and its potential for practical applications
- Create, test and understand pneumatic circuits and their working
- Compare the advantages and disadvantages of pneumatic actuation and control with similar mechanical and electrical systems
- Apply the theory studied in the course to carry out fluid calculations predicting the speed and force of pneumatic actuators
- Use engineering drafting packages (such as Microsoft Visio) to create pneumatic circuit diagrams
In order to understand the experiment better, we need to know the basics of pneumatics. In pneumatics, the working medium is air. This of course comes from the atmosphere and is reduced in volume by compression, thus increasing its pressure. This gas is then made to expand rapidly by being released into a separate chamber. The chamber is connected to a rod or cylinder that pushes outward when the gas expands, thereby providing the energy required to perform the specified task. The correct use of pneumatic control requires an adequate knowledge of pneumatic components and their function to ensure their integration into an effective working system.
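The fluid calculation mentioned in the objectives (predicting actuator force) can be sketched as follows. This is a minimal illustration of F = P × A under assumed values: the 4 bar supply pressure and 6 mm bore are not stated in this section and were chosen purely because they reproduce a theoretical force of about 11.31 N; the actual lab values should be substituted.

```python
import math

def cylinder_force(pressure_pa: float, bore_m: float) -> float:
    """Theoretical extend force of a pneumatic cylinder, F = P * A."""
    piston_area = math.pi * (bore_m / 2) ** 2  # piston face area, m^2
    return pressure_pa * piston_area

# Assumed example values (not taken from the lab sheet):
pressure = 4e5   # 4 bar gauge pressure, in pascals
bore = 6e-3     # 6 mm piston bore, in metres

force = cylinder_force(pressure, bore)
print(f"Predicted force: {force:.2f} N")  # Predicted force: 11.31 N
```

In practice the delivered force is lower than this theoretical figure because of seal friction and back-pressure, and the retract stroke of a double-acting cylinder produces less force still, since the rod reduces the effective piston area.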
For this experiment we need to know the working of pneumatic actuators (single-acting and double-acting), directional control and check valves, and some basic notations of pneumatic components and their interpretations. The pneumatic cylinder converts the potential energy stored in the compressed air into linear displacement. Cylinders are typically hermetically sealed and house a piston that divides the interior into two chambers. By applying compressed air to one or both chambers, the piston moves; you can see this happen as the cylinder's rod extends out of or retracts into the cylinder. Cylinders
The alternative approach can also be problematic: to hold that a constructive trust exists on the basis that the Claimant made substantial indirect contributions enabling the Defendant to repay the mortgage instalments seems to blur the line between constructive and resulting trusts. Indeed, the underlying basis of resulting trusts is that contribution to the acquisition of the property carries with it an inferred intention to acquire a corresponding share of the beneficial ownership. M P. Thompson, Modern Land Law, 2nd edition, 2003, at pg 258. Thus this case, which is indeed a very recent one, does not cast further light on the difficulties encountered in this area of the law, which is dominated by the law of trusts. Indeed, as the Law Commission mentioned in its Sixth Programme of Law Reform: " Everything is just a matter of making inferences as to common intention and detrimental reliance, and such inferences will be made according to what the judges think. This is highly undesirable in the sense that it gives too much discretionary power to the court. This case also fails to take into consideration the scope for the doctrine of proprietary estoppel, which belongs to the sphere of equity. The concept of this doctrine as enunciated in " Under such a principle, the decision in our case would have been more clear-cut: Mr Keondjian did not have any right in the property because Mrs Kay did not in any way encourage him to hold this belief, since for the past eighteen months she had been constantly warning him that she wished to repay him in full. Therefore, perhaps there is a need to reform the law
Chung-Yan, \"China & Japan - superficial, biased reporting hurting ties between nations\" \"The People's Daily\", 3rd September 2005 \"The People's Daily\", 18th April 2005 \"The People's Daily\", 16th August 2005 Chung-Yan, \"China & Japan - superficial, biased reporting hurting ties between nations\" Unsurprisingly, the Japanese press is equally subjective in its presentation of the textbook issue; rather than exploring the controversy at the individual level, however, journalists more frequently blame it on the Chinese government and education system. Reporting on the anti-Japan protests in China, A.4 attributes Chinese hostility to \"anti-Japanese instruction that Chinese schools have delivered since the 1980s\" Hu Jintao's government comes under further attack in A.5: as in many Chinese articles, the author's objective tone here is offset by his exclusive citing of claims that the Chinese protesters were \"controlled\" A.6 likewise states that, \"police watched the protesters but didn't stop them\" and that, \"Beijing should have prevented the violence\" Contrarily to Chinese articles, which frequently analyse the textbook controversy on what political scientists would call the individual level (adopting an 'extensive view' of politics in which daily individual interactions influence a country's political stance), Japanese articles thus apply the domestic level of analysis to the issue, focusing primarily on government actions through a 'limited view'. Despite such differences, Sino-Japanese press are nevertheless both highly subjective in their coverage. \"The Japan Times\", 18th April 2005 \"The Japan Times\", 18th April 2005 \"The Japan Times\", 17th April 2005 Such bias is not present in all Chinese and Japanese articles, however: some journalists in fact fully recognize their own country's share of responsibility in the ongoing friction. 
In reporting intervening political figures' speeches, A.7 differs from previously cited articles - it includes quotations that are both favourable towards China's government and critical of it: the article refers both to the Chinese Foreign Minister's claim that "the Chinese government has never done anything that wronged the Japanese people" and to his Japanese counterpart's request for the Chinese government to "sincerely handle [the] matter under international regulations". Correspondingly, A.8 looks not towards Japan as a cause of April's violent protests, but towards Chinese education - while the author argues that "Chinese students learn in their history lessons to love their country but not to hate Japan", he nonetheless acknowledges that "58 million Chinese teens study the Japanese occupation of China . . . as part of regular Chinese history courses". Clearly, not all articles dealing with the textbook issue in the Chinese press are overly biased; by adopting a stance more akin to that of political scientists and analysing the conflict based on its cultural and sociological antecedents, some authors offer readers a more comprehensive investigation of today's tensions. "CBS News", 17 April 2005; "The People's Daily", 22 April 2005. Similarly, Japanese
However, the sense of connection that touches her is not one where the self is immersed in a state of un-egoistic benevolence, but is rather the sense of everything being connected to her. It is almost as though she becomes the dictator of the mystical realm. These moments are when the ineffable is [...]: "Whereas Ulrich and Agathe are, so to speak, the apprentices of their love, learning from it and seeking to register its subtlest promptings, Clarisse does violence to love when it touches her, reshaping it to her ends." (Payne, Philip.) Her conviction that she is to deliver to the world the great Redeemer becomes the filter through which her experiences are altered; the arbitrary is seen by her as destiny, as signs sent by God to tell her that it is time for her to act. Where Moosbrugger suffers from the community that imposes itself inside him, Clarisse holds up the same community as her kingdom. For Ulrich, the mystical experiences he shares with Agathe prompt him to abandon what he calls "the attitude of personal greed towards one's experiences". While helping Agathe into her dress, [...] The arrogant attachment to the importance of self is lost once the ultimate state of interconnection is revealed; one is as much absent as present, as one's identity [...] It is this character of participation in the territory of love, rather than possession, that is made manifest to Ulrich in his relationship with Agathe. The final point regarding the Other Condition made clear through the counterexample of Moosbrugger and Clarisse is the withdrawal from activity. What makes Ulrich a man without qualities is the one trait he possesses that overwhelms all chances for a stable personality - his sense-of-possibility. Finding no reason to grant what [...] "A character, a profession, a definite mode of existence - for him these are notions through which the skeleton is already peering, the skeleton that is all that will be left of him in the end".
This also means that this state is essentially antithetical to action. Ulrich laments "that of all the systems we have set up there is not one that possessed the secret of stillness": "And one can't do anything mean in that condition [...] but nothing bad can happen while the condition lasts; the very moment it does happen, the stillness and clarity tear to shreds and the miraculous condition ceases". Moosbrugger, as we have seen, is a victim of his pathology. The fears and visions which seize him so powerfully make resistance to act, and to act
However, I didn't correct the cover to give her a cheese knife. The customer had to ask me for the knife when she was able to attract my attention. I hadn't considered that the customer would need a knife. However, if I had looked at the book by Brown, I would have known this. I have learnt that I must consider all the possible requirements of the customer, especially when it involves something that will help them eat their meal. Another situation where I misread the customer's needs was in week 6. The customers were taking a long time to decide over sweets, and it became quite embarrassing when I asked them for the 4th time. In Brown I should have noted that the customers were still occasionally looking at the menu. I have learnt that I must be very alert to the customer's needs and their actions, such as still looking at a menu. Another instance of not being fully alert to my surroundings was in week 11, when I nearly spilt coffee on a customer. The cafetiere had been overfilled, but as I didn't notice, it poured out before I expected. I was lucky it didn't land in the customer's lap. I must make sure that I watch what I am doing all the time. As I have never worked in a kitchen before, I was unsure of what to expect. I was expecting a lot of pressure, particularly during service, and not as much support from the Chef Trainers as we received. One of the main traits that I needed in the kitchen was good teamwork. As previously mentioned, this is
Coinciding with the post-war rise in suburban housing and population growth, women came to suffer great pressure to aspire to a housewife ideal. This racial paradigm left no room for those deviating from its stringent criteria, ostracising among others the migrating African-American families excluded by segregation. In light of the post-war expansion of female employment, however, it would be inaccurate to speak of women's complete containment; rather, a heightened number felt confused by the tension between career ambition and domestic obligations. Equally, though women's activism would garner great support in succeeding years, there were those who achieved workable, happy marriages as housewives. The continuing expansion in female employment, albeit largely restricted to traditionally 'female' work, ensured women's active contribution to the family income. Matthews, Glenna, (New York, 1987) P.212; Mead, Margaret, (Delivered October 11, 1963), in Firor Scott (ed.), Anne, (Boston, 1970) P.172; Breines, Wini, (Boston, 1992) P.11; Meyerowitz, Joanne, 'Beyond the Feminine Mystique: A Reassessment of Postwar Mass Culture, 1946-1958', P.1474; Chafe, William H., (New York, 1991) P.193. In the wake of its regeneration during the Second World War, America presided over a period of sustained prosperity that served to affirm its self-image as a world leader and beacon of democracy. After the austerity of the Depression, the contrasting prosperity of the 1950s fuelled society's sense of the "good life", leading many to feel deserving of consumerism's new opulence. People were having more children at a younger age; by the decade's end the average age of marriage for women had dropped to twenty. Breines, Wini, (Boston, 1992) P.2; Mead, Margaret, (Delivered October 11, 1963).
In Firor Scott (ed.), Anne, (Boston, 1970) P.172; Friedan, Betty, in Firor Scott (ed.), Anne, (Boston, 1970) P.172; Breines, Wini, (Boston, 1992) P.3; Matthews, Glenna, (New York, 1987) P.212. After the war ended, this vision of prosperity brought a renewed emphasis on the sanctity of marriage, rooted in the archetypal white family unit comprising an enterprising husband and motherly wife. In an often-uneasy climate mixing self-assurance with fear of insidious Communist infiltration, both consumerism and domesticity were regarded as patriotic in maintaining American values. Much of the social commentary of the late 1940s and 1950s focused on promoting legitimate marital sex and championing the family as society's bedrock, apportioning essentially conventional gender roles. Baxandall, Rosalyn and Ewen, Elizabeth, (New York, 2000) P.149; Costello, John, (London, 1985) P.357; Baxandall, Rosalyn and Ewen, Elizabeth, (New York, 2000) P.143; Matthews, Glenna, (New York, 1987) P.210. In this way, domesticity and consumerism came to be very much entwined and mutually reinforcing. Innovative products such as the Bendix washing machine possessed an evident appeal, and made the kitchen the centrepiece
They specify dimensions of These diagrams may show y and yhat as exactly the same due to restrictions in the printing process, but rest assured that they differ when viewed in MATLAB, either zoomed in or by comparing corresponding elements of the vectors. Experimental run on the x axis, where the proportional gain was set to 30 and the derivative gain was set to 7. (A model was fitted for each model order n; the corresponding plots and coefficient values are omitted here.) It is concluded that: i) All models are stable, some more than others (although commenting on their degrees of stability is beyond the scope of this report). ii) The best model to predict this particular mechanical system is the one having the least deviation between the measured and the predicted value of the output; in other words, y - yhat should be a minimum, which in this experiment is the model Experimental run on the x axis, where the proportional gain was set to 40 and the derivative gain was set to 3. (Again, a model was fitted for each model order n.) It is concluded that: i) All models are stable, some more than others (although commenting on their degrees of stability is beyond the scope of this report). ii) The best model to predict this particular mechanical system is the one having the least deviation between the measured and the predicted value of the output; in other words, y - yhat should be a minimum, which in this experiment is the model Although Upgrade the basic RLS algorithm to include either an instrumental variable or a moving-average noise model based on Pseudo Linear Regression, or both. Illustrate these by comparing results with those obtained in B. From equation (4), in other words, the disturbance is described as a moving average of a white noise sequence e(t).
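The stability check applied to each fitted model above (all roots, or poles, of the estimated 'a' polynomial lying strictly inside the unit circle) can be sketched as follows. The report's own analysis was done in MATLAB; this Python sketch uses illustrative placeholder coefficients, not the experimental estimates.

```python
import cmath

# Illustrative second-order 'a' coefficients (a1, a2); placeholders,
# NOT the values estimated in the experiments.
a1, a2 = 0.9, 0.2

# Characteristic polynomial A(z) = z^2 + a1*z + a2.
# The model is stable if both roots (poles) lie strictly inside the unit circle.
disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
poles = [(-a1 + disc) / 2.0, (-a1 - disc) / 2.0]
stable = all(abs(p) < 1.0 for p in poles)

print("poles:", poles)     # approximately -0.4 and -0.5
print("stable:", stable)   # True for these coefficients
```

For higher model orders the same test applies to all roots of A(z), which is what plotting the 'a' coefficient roots against the unit circle achieves graphically.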
Then the resultant model is also known as an ARMAX model (AutoRegressive Moving Average with eXogenous variable). Please note that all the equations discussed in chapter 1 change with a C quantity added to them. Hence the model which I am going to demonstrate here (for the sake of simplicity and to keep the length of this report in check) is the one where n The reasons for this choice are that it gives the best way to compare the two different approaches we have covered. Hence our model can be summarized as The last observed value
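As a rough illustration of the recursive least squares idea underlying these identification runs, the following Python sketch estimates a first-order ARX model from simulated data. The system coefficients (0.7 and 0.5), the input signal and the noise level are invented for the example; the report's actual models, data and MATLAB implementation differ.

```python
import random

random.seed(0)

# Illustrative first-order system (made-up coefficients, not the report's data):
#   y(t) = 0.7*y(t-1) + 0.5*u(t-1) + e(t),  e(t) white noise
true_a, true_b = 0.7, 0.5

# RLS state: parameter estimate theta = [a, b] and 2x2 covariance matrix P.
th = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]   # large initial P = low initial confidence

y_prev, u_prev = 0.0, 0.0
for t in range(2000):
    u = random.uniform(-1.0, 1.0)          # persistently exciting input
    e = random.gauss(0.0, 0.01)            # small white measurement noise
    y = true_a * y_prev + true_b * u_prev + e

    phi = [y_prev, u_prev]                 # regression vector
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]             # gain vector
    err = y - (phi[0] * th[0] + phi[1] * th[1])        # prediction error y - yhat
    th = [th[0] + K[0] * err, th[1] + K[1] * err]      # parameter update
    # Covariance update: P <- P - K * (P*phi)^T  (P is symmetric)
    P = [[P[0][0] - K[0] * Pphi[0], P[0][1] - K[0] * Pphi[1]],
         [P[1][0] - K[1] * Pphi[0], P[1][1] - K[1] * Pphi[1]]]

    y_prev, u_prev = y, u

print("estimated [a, b]:", [round(v, 3) for v in th])  # close to [0.7, 0.5]
```

The pseudo-linear regression (ARMAX) upgrade extends the regression vector phi with past prediction errors so that the C polynomial of the noise model can be estimated alongside a and b.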
Lukes attracts fierce criticism for his formulation of what can be seen as a deeply condescending and patronising notion, as it implies the enlightened and privileged position of A. If people's stated interests are not to be relied upon, how are we able to judge what their real interests might be? Similarly, if only rational and autonomous individuals can make objective judgements, how are we to decide who is rational and autonomous? Lukes's analysis is indeed innovative, but it involves complex and potentially misplaced judgements upon individuals. Quotations taken from Heywood A. There has been much debate following Dahl's analysis, suggesting that the essence of political power lies beyond the relatively narrow arena of decision making. Bachrach and Baratz and Lukes have offered valuable insight, alerting political scientists to other related areas of enquiry - agenda setting and preference shaping respectively. It is important to acknowledge, however, that throughout the development of the debate concerning the conceptualization of political power, Dahl's contribution has remained appreciated - his ideas have been built upon as a benchmark. Interestingly, all such approaches assume that man is endowed with an instinct which drives him to try to impose his will on others. One must recognise that such assumptions are challenged by the ideologies of liberalism and anarchism. There remains debate about political power unexplored in this modest analysis, perhaps most notably discussion within the European tradition of political science focusing upon the significant distinctions Weber made between what he termed. Thus it seems the essence of political power lies beyond Dahl's phrase. Dahl's conclusions were pioneering but in turn prompted a debate whose contributions, combined, offer a more coherent approach to the essence of political power. Goodwin B.; quotation taken from Hay C.; Goodwin B.
The responsibility for maintenance of modifications lies with the vendor. There is no information on the vendor's quality control standards. (The client's quality control standards are assumed not to be applied... which itself is a major risk. This is the 3rd risk factor: changes to specification requirements.) If Opus is invited to develop this project (with the package estimation), Opus would have to meet their deadlines. If the sub-systems 2 package did not fit the requirements of the new project, what would Opus do? It would be a disaster. The risk can be eliminated by evaluating the customer requirements first, but this is not possible during the tender process. Without at least an initial study by the systems analysts it is difficult to evaluate the sub-system 2 package. Opus should give the 2 and include buying the sub-system 2 package as a bonus if the package meets the customer requirements. The management shouldn't be quick to take decisions if the project is not to fail. The project should adhere to project management standards and guidelines. The management should respect and take the advice of the technical teams and the project manager very seriously. After all, it's them who
As Brown and Webster (2002) note, modernity is premised on a 'cultivated distance' between humans and animals, a distance that would be broken down by xenotransplantation, agitating questions of species boundaries and the 'natural' order. Indeed, if allotransplantation (human to human) raises questions of personal identity for organ recipients, having a pig's heart or other organ inside one's body may heighten this discomfort further, especially given the cultural meanings assigned to pigs as 'unclean' animals (Williams, 2003). The possibility of trans-species diseases, in the wake of BSE and its human variant CJD, is furthermore a very real concern with regard to xenotransplantation. Some pig viruses, for example, can be transmitted across species, and the prospect of hitherto unknown diseases emerging has to be taken seriously as a major complication of xenotransplantation (Pilnick, 2002). The moral implications of using animals in this way have further ramifications for ethical debates: whilst many argue that animals have long been used in medical research and in food production, the use of animals' organs in this way may contravene certain religious or moral beliefs (Buddhist or vegan, for example), and therefore prove unacceptable to certain social groups on this basis. Indeed, perhaps one response to these ethical dilemmas would be the placement of more resources into stem cell research, so that organs can be manufactured rather than moved around; as recent developments in the engineering of bladders from stem cells suggest, such advances are on the horizon. Further to the creation and transplantation of human and animal organs, the implantation of artificial devices into the body has revived the notion of 'cyborgs' through machine-human couplings.
There is a broad range of artificial devices and components which can be surgically placed in the body, from inert joint replacements to devices which actually take over the innate functions of the body; pacemakers, for example, electronically regulate the rhythms of the heart, whilst cochlear implants generate tiny oscillating vibrations to stimulate auditory nerves (Brown and Webster, 2004). The future these technologies suggest is one in which bodies are increasingly 'inhabited' by technology, and where natural ageing is a thing of the past, with our body parts being replaced by technological devices, marking the end of 'pure humans' (Deitch, 1992). Indeed, Rucker et al (1993) suggest that, due to advances in biotechnology, this 'postbiological humanity' is achievable in the next fifty years, heralding
These men are all great heroes who fight and die in the wars of others, but die bravely, defending what they love and hold dear.", "label": 1 }, { "main_document": "Computers are today communication devices which hold promises of shaping all our day-to-day tasks; the Internet opening up huge potential. It is possible, over standard telephone lines, to simultaneously chat and surf at the same time. How? My main sources used were \"How the Internet works\" by P. Gralla, published by Que in the USA in 2001 and \"How DSL Works\" from First, data is divided into packets. Earlier technology used the modem to convert that from digital to analogue and vice-versa, to be transmitted over POTS. Today, technologies such as DSL and ISDN both send and receive information in digital signals via another modem found in the telephone company. Standard, analogue, voice calls use only a small portion of the potential bandwidth of a copper wire. DSL 'divides' the line into 3 channels. This is not of course a physical division, but simply the use of modulation techniques to separate the signals: voice communication, sending, and reception of data; thus allowing a phone call and surfing to take place simultaneously, 24/7. DSL technology, however, requires the computer to be in a specified radius (dependent on the speed and type of DSL implemented) from the telephone company and its DSL modem. This is not usually a problem in urban areas but could become a barrier to wider access to internet for rural areas. ISDN uses dedicated lines and specialist equipment, but the same splitting method as DSL, to provide simultaneous telephone and surfing services. This could be useful for places where DSL technology would be too costly or not available. However, ISDN relies on an outside power outlet making the network vulnerable in case of power failure. As better alternatives are developed, customers will benefit from a wider range of choices and get a better run for their money. 
Computer viruses are programs that exploit weaknesses of computer systems and harm them - filling up memory space, deleting files, disrupting the sequence of execution, popping up annoying windows, etc. Email, being popular and vulnerable, is a star target for virus writers. The sources used were "How the Internet works" by P. Gralla, published by Que in the USA in 2001, and "How email viruses work" from The most popular form of email virus spreads through file attachments. By downloading an attachment, a virus can actually be downloaded to your computer. The virus runs itself incognito, copies itself to your hard disk, and sends itself to the contacts in your address book, appearing to be from you, with a randomly generated subject line and body text. But viruses can also be downloaded when HTML email is being displayed. A virus can stop you from updating your anti-virus software and replicate itself. A new virus named 'Fizzer' can even randomly generate addresses. Outlook Express is popular, though it can be easily infected because virus writers target it more than others on the market. Outlook Express has many inbuilt functions that make 'email browsing' easy - mass-mailing facilities, a spell checker, automatic time-set mail, etc. Being in the Microsoft family, coming with most Windows operating systems, it
Needs analyses should also look at the local context, such as culture, politics and the institution. As teachers, for example, we have all probably found some mismatch occurring between a communicative teaching approach and teaching in a culture where attitudes about communication are widely divergent from our own. We need to recognise and adapt; otherwise our insistence on a particular approach could be ineffective or harmful, and possibly viewed as "cultural imperialism" (Richards and Rodgers, 2001). Describing one's approach to teaching is a potentially exhaustive task. An approach must take into account contextual issues, language learning and learning theories, and one's principles and rationale. These combined should direct the teaching process to facilitate and enhance students' learning. All approaches and theories agree that motivation matters; a basic premise I work from is that motivation is the key. I believe I am responsible for To aid this, I might ask them to confidentially write a list of their preferred partners. This means that prior to the lesson I can plan various combinations to accommodate all participants; student pairing or grouping is crucial for a successful class. From my own reflections as a student this came as a significant realisation: if my partner is someone I find difficult to work with, my capacity to learn and enjoy is radically compromised. Therefore, to further minimize the adverse impact a single student may have on another student, I now group students in threes rather than twos. If I involve students in making decisions, I can help students motivate themselves (Errey and Schollaert, 2003). For example, at any stage in a lesson I can ask students to make a choice between two activities, how long they want to spend on an activity, or which role in a role play they would prefer. Also, it is possible to involve students in negotiating topics or outcomes, a major feature of the TABASCO project (ibid) and of Community Language Learning (CLL).
Following CLL, or humanistic techniques (Richards and Rodgers, 2001), I believe it is extremely difficult to learn when you feel resistance or anxiety in the environment (Larsen-Freeman, 2000). As Brown (2000, p61) observes: \"all second language learners need to be treated with affective loving care.\" This is the foundation from which I work; it is impossible to separate the desire to motivate from any aspect of my teaching. Class atmosphere must be such that each student feels able to get what they want from the learning process. After the initial hard work of establishing the", "label": 1 }, { "main_document": "AIM: Information security means protecting data against unauthorized access or modification. The term applies not only to electronically stored data but to all aspects of safeguarding information, in whatever form or media (Wikipedia 2006). Computer and network security is nowadays one of the most important fields of computer science. A main objective for computer scientists is to ensure that their systems are secure. We do not want unauthorised people to access our private information, such as bank statements, so there is a strong need to deal with information security issues. Some people might think that to provide information security we need only control access to the data; access control, however, is just one of the goals of information security. Information security is an essential issue especially where electronic transactions are concerned. We want to ensure that the information a user supplies to a web server (e.g. username, password, financial information) cannot be read, modified or destroyed by any third party, and we want similar protection for the data that flows back from the web server to the user. To understand the relevance of information security issues, we first need to understand the meanings of the terms related to the topic.
To achieve confidentiality, cryptosystems need to be developed and deployed. Cryptosystems use modern cryptographic techniques to achieve a higher level of security. Information is protected by transforming it into an unreadable format; this is the process of encryption. To read the data we need to decrypt it, and only an authorized person holds the key needed for this process. We must remember that not all data are confidential: a company's special offers, for example, should be available to as many people as possible. Authenticity means knowing that somebody or something is not pretending to be someone else. We must remember that authenticity says nothing about the right to access the data; it only checks identity. To achieve authenticity we use: digital signatures - a digital signature works similarly to a written one. It is attached to the message and guarantees that the individual is who he claims to be. digital certificates - a digital certificate is also attached to the message. Its aim is to verify the identity of the sender, who applies for it from a Certificate Authority (CA). Integrity requires that any unauthorized changes be detectable by authorized users. It is difficult to prevent data from being changed, but much easier to detect the change; therefore data should be backed up regularly, and the back-ups used when an error is detected. To detect an attack on integrity we use: Cyclic Redundancy Check (CRC) - a type of hash function that produces a checksum, which the recipient verifies to check data integrity. Message Authentication Code (MAC) - a MAC value protects both integrity and authenticity. It is generated and verified using the same secret key, and therefore does not assure non-repudiation: anybody who can verify the MAC value can also produce it for another message.
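The difference between a CRC (integrity only) and a MAC (integrity plus authenticity) can be illustrated with Python's standard library. This is a minimal sketch: HMAC-SHA256 is used here as one common MAC construction, and the key and message are made up for illustration.

```python
import hashlib
import hmac
import zlib

message = b"transfer 100 GBP to account 12345"  # illustrative message

# Integrity only: a CRC32 checksum detects accidental corruption,
# but an attacker can simply recompute it, so it proves nothing
# about who produced the message.
checksum = zlib.crc32(message)

# Integrity + authenticity: a MAC is keyed, so only holders of the
# shared secret key can generate or verify it.
secret_key = b"shared-secret-key"  # illustrative key
mac = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# The recipient recomputes both values over the received message.
assert zlib.crc32(message) == checksum
assert hmac.compare_digest(
    hmac.new(secret_key, message, hashlib.sha256).hexdigest(), mac
)

# Tampering is detected: a modified message yields a different MAC.
tampered = b"transfer 900 GBP to account 12345"
assert hmac.new(secret_key, tampered, hashlib.sha256).hexdigest() != mac
```

Note that, exactly as the text says, the same key both generates and verifies the MAC, which is why a MAC cannot provide non-repudiation.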
The", "label": 0 }, { "main_document": "Aid Scholte p231 If International Organisations can be monitored and controlled by other transnational actors, then, rather than endangering global stability, they may become \"an important supplement to the decentralised cooperation of the international system.\" Decentralised cooperation theory holds that IOs centralise the interests of different states through stable organisational structures which significantly enhance interstate cooperation. IOs provide institutions that increase the efficiency of collective activities by pooling the assets and activities of members. This enables burden sharing between nations and reduces the risks of individual states, solving coordination problems and facilitating the otherwise contentious production of collective goods. Moreover, Keohane (1984) stresses that IOs can play a crucial role in intergovernmental bargaining: international regimes reduce the transaction costs of negotiations, improve the flow of information between states, and make any violations of multilateral agreements more costly. Abbott and Snidal p31 Abbott and Snidal p17 Abbott and Snidal p20 Yet, for many, the growing involvement of transnational actors in world politics is of little value if its only effect is to uphold the existing status quo. Rather, international relations theorists such as Snidal argue that International Organisations \"often represent deliberate decisions by states to change their mutually constituted environments, and thus, themselves\". One way in which IOs can foster such reform is through their role as sources of information.
According to Alexandru Grigorescu, state interaction with IOs can lead to increased governmental transparency. Through their various activities, International Organizations collect large amounts of information from member states; as they release previously concealed and potentially disparaging information about a government to societal actors, the regime concerned may face a 'trust crisis' from its citizens. In order to prove its credibility to a society, it will tend to adopt laws that increase transparency and the free flow of information. Based on empirical evidence, Grigorescu finds that IO-led growth in external information does precede the advent of democratic transparency in many countries. In turn, this transparency enhances interstate cooperation and makes collective action more effective. A strong civil society can therefore have a beneficial impact on the workings of domestic and international politics. Abbott and Snidal p25 Grigorescu p643 Grigorescu p665 The constructive role that IOs can play in improving interstate cooperation is best exemplified by the tasks currently ascribed to the United Nations. Reports composed by UN enthusiasts circa 1995 maintain that the globalizing world order needs international organizations that can mitigate the negative side-effects of modernization and resolve interstate conflicts resulting from interdependence and collective burden-sharing. IOs are believed to build confidence and transparency between members, in particular through creating monitoring mechanisms that guarantee that states will respect their commitments to international agreements.
Finally, rather than endangering the stability of states, such organizations may provide states with what United Nations Secretary-General Pérez de Cuéllar described as a \"final confirmation of independence, nationhood, and sovereignty\". International Organisations can therefore create a more peaceful international order capable of reforming the obsolete realism of the Westphalian system without endangering the states that comprise it. Barnett In the last few decades, states have witnessed the appearance of a", "label": 0 }, { "main_document": "on proper crop management have led to inefficiency in production by individual farmers. The low social status of farmers in the open market discourages the younger generation from engaging in rice farming (IRRI, 2006; Weerahewa, 2004). In relation to the improvement of individual farmers' practices and government support, several focus points are identified from the revealed major constraints: cultivars and timely cultivation, soil fertility improvement, weed, pest and disease management, post-harvest management, and extension services.
Extension of agricultural land was recognised as impractical since the majority of productive land has already been converted to agricultural purposes. However, cropping intensity can be increased from the current 119 % to 136 %, which would raise the total cultivated land area to 0.96 million hectares (0.77 million hectares in 2006). Prioritisation should be given to the Low Country Wet Zone and to areas with minor irrigation systems in the Dry or Intermediate Zones, where neglect of rainfed rice cultivation has been increasing. Location-specific rice cultivars should be recommended, as these would be most successful in maximising resource uptake in individual areas. Medium duration cultivars (4 - 4.5 months) should be selectively grown where water is not a limiting factor throughout the growing season. Medium duration cultivars have a higher grain yield potential than short duration cultivars, since the longer growing duration enables plants to utilise the available energy for a longer period. Although the use of medium duration cultivars was common in the past, it has declined to less than 18 % at present. The main potential reasons for this change are the increase in labour cost, which leads farmers to choose cultivars with a shorter cultivation period, and the establishment of irrigation systems, which made it possible to increase the efficiency of, and the extent of control over, water use. Where rainfall is adequate, longer cultivation is possible in Sri Lanka since temperature, being constant, is not the limiting factor for agricultural practice in this country. Therefore, the significance of medium duration cultivars should be re-evaluated so that they can be promoted again (Amarasinghe and Liyanage, 2001; IRRI, 2006). The traditional rice cultivation period was scheduled according to the change of seasons, with the onset of the monsoon in each region; however, such timely cultivation has been gradually abandoned with the use of short duration cultivars.
It is estimated that about 70 % of the rice fields under irrigation do not necessarily receive a sufficient water supply; hence rice plants are likely to be subjected to water stress due to occasional drought during their growth. Delay in crop establishment using short duration cultivars increases this risk and shortens the period of crop growth in good conditions. In addition, collective cultivation, in which neighbouring rice fields are kept at the same growth stage, has declined, but it is very important for reducing the occurrence of pests and diseases. Overlapping of different growth stages of rice plants in a given area provides the ideal conditions for disease pathogens and pests to complete their life cycles within the area, and hence stimulates the transmission of diseases and pests. Timely cultivation which matches with the increase in rainfall", "label": 0 }, { "main_document": "\"Pan Recipe\" is a Caribbean poem written as an extended metaphor, which uses the vehicle of the steel pan to convey a sense of new life born out of the past, and is suggestive of the oppression of black slaves. When I began writing \"How we have walked, How we have journeyed\", I aimed to re-centre Agard's text and focus more primarily on the Caribbean music itself, as a celebration of freedom, but still express ideas about the oppression of black slaves through metaphor. \"How we have walked, How we have journeyed\" should, like \"Pan Recipe\", be read on both a literal and metaphorical level. The title itself works on both of these levels, and directs the reader to acknowledge the metaphorical tenor through the first person plural pronoun \"we\". This deictic suggests that the poem is also about a number of people collectively participating in the acts of \"walking\" and \"journeying\". On a literal level this might be related to the way in which large masses of people \"walk\" and \"journey\", perhaps in a carnival parade.
The use of the past tense in the title might also suggest a metaphorical journey through time; the use of the present perfect forms of both verbs indicates that the journey, both literal and metaphorical, has come to an end. I wanted the poem to have a strong, driving rhythm to imitate the rhythms of Caribbean music and give it a sense of progression. This drive comes largely from the use of present continuous verbs throughout the poem. In the first verse, for example, the pre-modifying adjectives \"flaming\" and \"flashing\" are the non-finite progressive participles of active, and dynamic, verbs, giving a sense of moving forward towards an end. This end is indicated initially in the use of the past tense in the title of the poem. Furthermore, the final verse of the poem moves into the past tense through the past participle \"fought\". This internal deviation gives the last lines a sense of finality and conclusion. \"New World A-Comin\" also works with both the present and past tenses to similar effect. The opening lines of the poem are written in the present tense, indicated by the repeated use of the demonstrative article, \"this\", which gives a sense of immediacy. In line five the text moves into the past tense, through the past participle \"met\". At this point the text also introduces context-sensitive pronouns (\"we\" and \"you\"). This is echoed in \"How we have walked, How we have journeyed\", when, in the final line of the ninth verse, the refrain is altered by replacing the definite article \"the\" with the plural possessive pronoun \"our\". In both poems, this internal deviation takes the text to a human level. In \"How we have walked, How we have journeyed\" it also makes more perceptually prominent the metaphorical level on which the poem is working. My poem also uses paralleled phrase structures, as do both of the poems it draws upon.
Parallelism is used as a larger structural feature of the text through the repetition and variation", "label": 1 }, { "main_document": "A tort is a civil wrong which entitles any party that has suffered damage from that wrong to claim compensation. The law of tort deals with providing justice to any person who has been harmed by the acts of others, and it protects both buying and non-buying consumers. Below are three situations of negligence which fall under the law of tort. In the first situation, if Albert were to sue Barry, he would be appealing to the House of Lords on the grounds of negligent misstatement causing financial loss. In advising Albert on this matter, we must see what constitutes the duty of care for negligent misstatement. In Ltd. v. \"A reasonable man, knowing that he was being trusted or that his skill or judgement was being relied on, would I think, have three courses open to him. He could keep silent or decline to give the information; or he could give an answer with clear qualification that he accepted no responsibility for it; or he could simply answer without any such qualification.\" First, the circumstances in which the advice was given must be of a professional nature. However, the advice was given to Albert during a lunch break at a conference, which indicates that it was merely a social occasion. Advice should not be given 'off the cuff' but on a business occasion, accompanied by proper checks of the relevant data, none of which Albert has done. Secondly, there must be proof that the claimant knew that the advice could be relied on. The decisions made in Barry may be a director of Dunmore Ltd., but he is not a professional person to receive advice from. Albert should have sought advice from professionals with specialist knowledge and proper qualifications in the matter.
The majority held that the duty applied to defendants who were in the business of giving advice or information, or who claimed that they had the requisite expertise. Finally, the speaker must know that the advice will be relied on, and so he undertakes the responsibility of ensuring that it is accurate. Albert may argue this point, but there is a possibility that Dunmore Ltd. was financially stable at the time the conversation took place. Again, Albert should have consulted a professional to find out the financial position of Dunmore Ltd. On these accounts, it was therefore not reasonably foreseeable for Albert to rely on the statement. Besides that, the Hedley Byrne principle stressed that in order for the defendant to owe a duty of care, a 'special relationship' must exist. Clearly, there is no 'special relationship' here; nothing indicates, for instance, that Barry undertook any responsibility for his statement. Therefore, Albert would have to prove that Barry owed him a duty of care in order to claim compensation. Such a relationship arises where the claimant could reasonably rely on the skill and care of the defendant in making the statement, with the result that the defendant undertakes responsibility for the accuracy of the statement made. The second case is yet another case of negligent misstatement. Like any other cases, it is", "label": 0 }, { "main_document": "among several processors, we can then partition the matrix A and distribute one part to each processor, which is then able to compute the product of its \"allocated\" part provided it knows the whole matrix B. The schema below (Figure 2.4) explains how this distribution is theoretically managed. It shows that the matrix A is partitioned into several parts which are distributed among the processors. It is not necessary, but it is better, to partition the matrix into equal numbers of rows; each processor would then have the same number of rows to compute.
But this requires that the number of rows of the matrix be a multiple of the number of processors; that means, So, each processor has a fixed number of rows from the matrix A and also knows the whole matrix B (as we saw before, this is required to compute a part of the product). It is then able to compute a part of the result matrix C; the range of rows computed by each processor in the matrix C is the same as the range it holds from the matrix A. The following representation (Figure 2.5) shows how the computation of the product works when the matrix A is distributed. Now, we can write the algorithm in pseudo-code which computes it. Several approaches can be taken to implement this algorithm; the following (Figure 2.6) is the one which has been chosen and implemented in this project. The algorithm is executed by EVERY process. As we see, this algorithm is not so difficult theoretically, but some difficulties and problems can be met when it is implemented. The third part of this report explains the C implementation of this algorithm. As we explained, the main goal of this implementation has been to find, using the Jacobi method, the inverse of a matrix. Actually, computing the inverse of a matrix is simply solving See section 3.3.2 So, this part explains the algorithm solving a system of linear equations and then the algorithm computing the inverse of a matrix. First of all, the algorithm shown here (Figure 2.7) simply solves a system of linear equations. This algorithm is adapted from Bib[F] and Bib[B]. (The algorithm is shown on the next page.) Using the previous algorithm, the equation However, we want to find the inverse of a matrix. That means we want to solve Indeed, solving the inverse of a matrix A of dimension Where An identity matrix can simply be defined as follows, Obviously, So, we now want to find the inverse of a matrix using the Jacobi method.
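The single-system Jacobi iteration described above can be sketched as follows. This is a minimal serial sketch in Python rather than the project's C implementation, with an illustrative diagonally dominant system (Jacobi iteration is only guaranteed to converge for such matrices); all names are illustrative.

```python
def jacobi_solve(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b by Jacobi iteration. A should be diagonally
    dominant for guaranteed convergence."""
    n = len(A)
    x = [0.0] * n  # initial guess x(0) = 0
    for _ in range(max_iter):
        # x_i(k+1) = (b_i - sum_{j != i} a_ij * x_j(k)) / a_ii
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # Stop when successive iterates agree to within the tolerance.
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# A small diagonally dominant example; the iterates converge to [1.0, 2.0].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = jacobi_solve(A, b)
```

Each column of the inverse can then be obtained by calling the same routine with b set to the corresponding column of the identity matrix, which is exactly the column-by-column strategy the next paragraph describes.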
That means we want to solve the following equation, where X and B are now two-dimensional matrices (not vectors as in the previous algorithm) and B is equal to the identity matrix. Each column of the matrix X is actually one column of the inverse matrix and so each column of the matrix X contains There are Hence, to find the solution, we will actually solve several single systems and gather the result", "label": 0 }, { "main_document": "of all cases (Alzheimer's Disease International). Dementia is a degenerative disorder of the brain which can be defined as \"global impairment of memory and other cognitive functions in the absence of clouding of consciousness\" (Lishman 1987). Characteristic features are confusion and loss of memory in the early stages followed by personality disintegration in the later stages (Rinomhota & Marshall 2000). Statistics show that the incidence of Alzheimer's disease is increasing; currently, 1 person in 100 will develop it between 40 and 65, increasing to 1 in 50 at 65-70 and 1 in 5 at 80 and above (Hunt 1996). The majority of useful research into Alzheimer's disease has only been conducted during the last ten years, partly due to new technology that has become available; this has led to the development of new drugs and a better understanding of the disease pathology. To obtain a diagnosis of Alzheimer's disease, there must be the presence of dementia, confirmed by a clinical measurement tool such as the Mini Mental State Examination (Folstein et al. 1975), and a slow (sometimes even unrecognised) onset with a uniformly and progressively deteriorating course. Other causes of dementia, such as a tumour, also need to be ruled out; this should be done by a thorough physical examination, laboratory tests, psychometric tests and a complete family history from the client and their family or carers (Cutler & Sramek 1996).
Illnesses such as severe depression may manifest themselves in the same way as dementia and can affect cognitive functioning; these also need to be ruled out. Even though today's computerised brain imaging techniques give us a better idea of what is happening to the brain during the disease process, which has helped greatly in its diagnosis, there is still only one way to categorically prove that someone has Alzheimer's disease, and this is through a post mortem examination. In a brain which has been affected by Alzheimer's, there will be bodies called \"neuritic plaques\" which are found chiefly in the cerebral cortex but have also been found in other parts of the brain. The number of plaques found correlates with the magnitude of the cognitive deterioration. The plaques consist of a central abnormal protein, amyloid, which sits half in and half out of a cell; the waste products of the cells collect around the amyloid protein so that the cell becomes surrounded by degenerative cellular fragments which eventually cause it to die. This is something that happens in normal ageing, but here at a faster rate. Another feature of Alzheimer's disease is the presence of paired helical filaments (PHFs), also known as neurofibrillary tangles (Lovestone 1997). These are found in both the cortex and the hippocampus, which is why memory problems are often an early sign of Alzheimer's. The posterior part of the hippocampus is most badly damaged, and this is why sufferers will sometimes have excellent recall of events in their early life that are stored in their long-term memory. These tangles are easily identified by light microscopy during a post mortem.
As well as the plaques", "label": 1 }, { "main_document": "to wait great lengths for their drinks, and expect a certain speed and type of service as a reward for their regularity and the higher prices they pay here.\" These expectations were confirmed through casual discussions with customers who, when questioned about their expectations of the service, named promptness and friendliness as their top concerns. Mr. Wood, interview (14/12/2005) The average perceived 'fair waiting time' for the 20 customers questioned was a mere 141 seconds, or 2 minutes and 21 seconds (see Appendix A for a breakdown). The capacity management task is made easier in winter/autumn, as the pub is virtually only frequented by regulars during these seasons, and demand can be accurately forecasted in advance. A predominantly chase capacity approach was deemed appropriate by the managers, and this approach has been pursued for the past few years in determining the staffing levels of the pub. With a chase capacity strategy in mind, the managers need to make sure that they can provide rapid access to the service, matching capacity to demand. This approach is important to ensure that customers remain satisfied with the service (and that their particular concern with the waiting time is met), so as to retain their preference for 'The Beefeater' over other similar nearby English pubs. The managers have attempted to build some volume flexibility through the staffing system to better manage unforeseen demand. The current coping system is an informal one, based only on the willingness of the staff, who, due to the positive working relations with the managers/owners, often accept coming into work on unscheduled nights (receiving payment in exchange for the extra hours). This does not prove to be a problem in winter, when demand is usually accurately forecasted, preventing this coping system from really being put to the test; however, during summer, the tourists add a large unknown to the equation.
The casual capacity control system employed over winter doesn't fulfil its function over the summer, which calls for the implementation of a formalized coping system. This will be further discussed in the 'Recommendations for Improvement' section. No formal studies, however, have ever been conducted by the managers (who rely primarily on observation and their deep knowledge of the business) or by hired professionals on the optimum staffing levels or demand in the bar at different times or on different days. Nonetheless, as a whole, the current system (see Appendix B for the current staffing system) appears to work well, as customers rarely complain about the wait time and the pub has remained a profitable enterprise over the last few years. This study will attempt to examine whether the staffing resource could be further optimized to lower costs and improve resource utilization. Simple observation made it evident that managers were occasionally keeping staffing levels unnecessarily high, due to a concern that demand would not be met, particularly on very busy nights (i.e. Premiership match nights). In these instances, where the number of staff members on duty was high (generally 5/6 staff members), two bottlenecks could be observed, illustrating some inefficiencies in the resource usage. The main", "label": 0 }, { "main_document": "left wing parties were divided on issues that preoccupied the French population. Le Pen continued on this nationalistic path when, in 1992, the Maastricht Treaty was signed. He claimed the political class was undermining the sovereignty of the state and that this would have a negative impact on local producers, who would lose business to competitors as protectionist measures were deemed illegal. People felt 'protected' by Le Pen, and believed he, unlike the incumbent government, would have their best interests at heart. Le Pen fended off his opponents on the political Left much in the same manner - through fear and defamation.
He developed strong anti-communist policies, making people apprehensive of the political Left and of Communism, claiming that \"...le d\" (Jean-Marie Le Pen, 13 May 1984, in the TV programme \"L'heure de V\"). These scare-tactics impacted the various departments differently, and many authors suggest that a large reason for the FN's success was its specific targeting of the correct audience for support. This national variation in voting preference is also rooted in the individual social and economic history of each region, and in how this has affected lifestyles, culture and views. Despite these oscillations in their voter numbers, the FN developed support groups in specific areas of France, areas which interestingly share many characteristics. As observed in the map below, these areas, mainly old industrial zones with high immigrant populations (many attracted during the expansion years of 1950-60) and high crime rates, have proved to be more consistent in their support of the FN than other regions. The map shows the percentage of FN voters in the 1995 elections, which, despite being the result of only a single election, is quite representative of areas with a strong tendency to support the FN. Map taken from Lubbers and Scheepers: 2002 There have been suggestions that part of the FN's success is due to their distinctive 'targeting' of these particular audiences. As a whole, these areas have lower literacy and schooling numbers, as well as lower average salaries, possibly making them more susceptible to scare-tactics. Furthermore, people with a more rural upbringing have more difficulty dealing with economic modernity and feel keenly the loss of national identity that it brings about. By correctly identifying this, and by campaigning heavily around these areas, the FN managed to develop this somewhat loyal support base.
The town of Dreux, an industrial suburb north of Paris, is frequently taken as an example of an FN support 'hotspot', as shown in the table below (Table 2), which compares general voting results in various elections with those in Dreux. We can observe that general voting patterns don't apply as strongly in Dreux, whose voting preference is much less responsive to national changes. The FN campaigned heavily in this city, particularly through Jean-Pierre Stirbois and his wife Marie-France Stirbois, two prominent members of the party who developed a strong following in Dreux. Husbands (1992) demonstrated in his bivariate study this positive correlation between people's area of residence and political preference, in particular with the FN. However,", "label": 0 }, { "main_document": "paradoxically dropped from 23 Accordingly, tourist numbers have risen almost as fast as the peso has fallen (Telegraph, 15/03/2003). The largest increases came from neighbouring countries, particularly Chile, benefiting from the high-quality-low-price shopping which, according to CEDEM (Buenos Aires Centre for the Study of Metropolitan Economic Development), became the primary reason for visits to Buenos Aires in 2002/3. Summer (December-February) 2002/03 broke all records as the peso began to stabilise, promotion paid off, confidence was restored and visitors from neighbouring countries continued to enjoy the benefit of favourable exchange rates: However, it was not all good news during the last 15 months. Overcoming negative perceptions of Buenos Aires became a real challenge after the 19 There were genuine security issues to be addressed, leading the British Foreign & Commonwealth Office, for example, to advise against all but essential visits and to request all British nationals in the country to register with the Consulate. As a December 2001 Mintel report highlighted, \"television is a powerful medium for changing and shaping the perceptions of its viewers\".
Essentially, there was a sense of uncertainty and insecurity which doubtless discouraged many foreigners from visiting, despite attractive exchange rates. Prices in Buenos Aires rose as the peso fell in value, and Argentine nationals suffered banking restrictions imposed by the national Government in an effort to curb spending, a situation known as the 'corralito'. Those measures were successful, but effectively prevented all but the wealthiest Argentines from visiting the city. The economic outlook for Argentina improved slightly in late 2002 and early 2003, with the peso falling to its lowest rate against the US dollar since April 2002, and with that economic stability came a renewed willingness to travel to Buenos Aires by tourists from the major generating countries of Europe and North America. In response to this opportunity, Daniel Scioli, Argentine Minister of Tourism, declared in early 2003 that tourism in Buenos Aires had been put \"top of the agenda\" in order to aid the flow of currency, international investment, employment and recovery from the crisis. Seeing tourism as having a major role in the city, the Government of Buenos Aires, hoteliers' associations and the national Government have begun developing a solid strategy integrating tourism with urban planning and economic recovery. The city's authorities have been collecting statistical data since November 2001 and analysing strengths, weaknesses, opportunities and threats in order to develop their tourism industry, and have so far responded well to the challenges they have been presented with. The Sub-secretariat of Tourism of Buenos Aires City Government was awarded first prize in the Official Organisations (Domestic) section of the Argentine International Tourism Fair 2003, based on its success in 2002.
Since the disturbances of December 2001, the city has concentrated on improving its image overseas, taking part in tourism fairs around the world and focussing on its cultural offering, particularly This sensible strategy and marketing campaign, allied with the support of the national government and the attraction of a high-quality-low-cost long-haul destination for visitors from Europe and North America and the boom in domestic tourism in summer 2002/3,", "label": 1 }, { "main_document": "0.05. However, considering the outliers, and a scatterplot of the overall means and standard deviations which showed a positive correlation, the data was transformed. A one-way ANOVA was performed. There was a significant main effect of speed, F(2,57) = 7.08, p<0.05. There was no significant main effect of accuracy, F(2,57) = 2.84, p>0.05 (ns). The Kruskal-Wallis test was carried out and verified that speed yielded a significant result (X (2) = 2.57; p<0.05), and accuracy did not (X (2) = 10.98; p> 0.05), and that there was no relationship between the two. However, Tukey and Bonferroni tests showed significant differences between conditions 1 and 2 (t = 0.09) and conditions 1 and 5 (t = 0.03), but not between conditions 2 and 5 (t = 0.95, ns). The results partially confirm the original hypothesis - there was a significant main effect of speed. The results illustrate a decrease in time taken to complete the experiment between conditions 1, 2 and 5. There are significant differences between conditions 1 and 2 and 1 and 5. However the non-significant difference between conditions 2 and 5 does not follow the traditional trend of social facilitation theory, though the difference in means between the two conditions is still notable. While there is an increase in time taken to complete the task, the result is not significant. 
This means that the presence of one other person has more of an effect on participants, as can be seen by the significant difference between conditions 1 and 2, than the presence of three other people when participants are in pairs, as can be seen by the insignificant difference between conditions 2 and 5. The accuracy mean being the lowest in condition 2 also supports this. Such results confirmed our hypothesis, which stated that the time taken to complete the task would be significantly different as a function of group size. There was a main effect of speed despite the non-significant difference between conditions 2 and 5. The second hypothesis was not confirmed, however, as results show no significant effect of accuracy, nor a relationship between accuracy and speed (Appendix, figure 7). The results broadly support previous research. As a whole, they can be explained in terms of traditional drive theories (Zajonc, 1965; Guerin, 1982; Geen and Gange, 1977; Zajonc, Wolosin, Wolosin and Loh, 1970), research on mere presence effects (Guerin and Innes, 1982; Bond, 1982; Seta, Paulus, and Schkade, 1976), and research on competitiveness (Triplett, 1898; Allport, 1920). The results remain within the general framework of social facilitation theories, namely drive theories (Zajonc, 1965; Geen and Gange, 1977; Zajonc, Wolosin, Wolosin and Loh, 1970), and have shown a quicker response time in completing the task as a function of group size. The quickest response times occurred in condition 5, as well as the highest mean of accuracy. The non-significant difference between conditions 2 and 5 is inconsistent with traditional social facilitation theories. Furthermore, there was no main effect of accuracy, despite our predictions, and a non-significant correlation between speed and accuracy. This could have", "label": 1 }, { "main_document": "the value of the metal. 
This showed that male graves had more gold and bronze than those of females, indicating inequalities between men and women; it also showed inequality within the two sexes. He further deduced that a higher density of graves presented a higher level of inequality and so a denser population created greater social complexity. Yet one unclear area of his theory is that there is no distinction as to whether gold represents a chief or a sacred person/deity (Pearson P, p79). This could present problems as a religious society that assigns more power to sacred people is different from a society with chiefs who have perhaps achieved that power through securing political control or control of the trade routes. This prestige goods idea was also applied by Chapman at the Iberian Chalcolithic site of Los Millares. At this site was a central cluster of tombs thought to be 'prestige tombs' due to their inclusion of ivory and copper objects, jet and amber beads and other luxury goods. Chapman argues that since there were other tombs without these artefacts, this represents a hierarchical society, as opposed to the egalitarian society presented by Almagro and Arribas. It is further thought that this society became further stratified with hereditary leadership in the Bronze Age. This is due to the high-status goods beginning to cross-cut graves of differing age and sex (Pearson P, p78). There are also rich burials of children, a sign generally assigned to societies with ascribed status since they have no way of acquiring wealth themselves. Labour investment can be a useful way to distinguish the value of goods. For example, goods acquired through trade would probably be of a higher value than local goods which were easier to come by. Also goods which would have had a lot of time and effort put into them would be worth more, such as female headdresses. 
This theory was put forward by Shennan, who found that generally female goods were worth more than male goods, which either indicates that women were of higher status or, more commonly thought, that they were dressed in the wealth of their male relatives (Pearson P, p78). However, the idea of dress showing status has more recently been thought misleading. For example, increased use of Wampum beads among the Native Americans was seen as increased elaboration, but Patricia Rubertone suggests that it was political resistance to stop the 'whites' getting them (Pearson P, p85). The idea of women wearing the wealth of men is also seen in O'Shea's study of the mortuary variability of Native American tribes. O'Shea studied three groups, the Pawnee, Arikara and Omaha, to look principally at the co-associations of the grave goods and their occurrence in individual graves. Of the fifty goods found in the Omaha graves, over half were in male and adult graves. There was also a distinguishable level of structure: more sociotechnic goods were found in male graves than female, and adults had more native implements and trade goods than sub-adults. The discovery of the usually male", "label": 1 }, { "main_document": "proved at the beginning of this essay. Then, trade would equalise factor prices. However, in this case, this only happens if the diagram below If tastes are biased towards good New tangent points ( Again, trade would equalise factor prices. However, again, this must be that the diagram to the left of When there is factor-intensity reversal, demand conditions clearly affect the pattern of trade. If both countries have similar preferences (both biased towards the same good), then trade will occur with each country exporting a different good and the factor-price equalisation theorem holds. However, if both countries have different tastes and the indifference curve of one country is above Therefore, the factor-price equalisation theorem, which is a result of the model, does not hold. 
Another result of factor-intensity reversal is that both countries may be exporting relatively capital-intensive goods or relatively labour-intensive goods. Thus, the rental for capital (or real wages) will go up in both countries. Trade causes factor prices to be more unequal than before trade. The Heckscher-Ohlin model has interesting implications, such as for internal income distribution. When there is no factor-intensity reversal, demand conditions such as different tastes between countries leave the relation between goods prices and factor returns unchanged. However, when there is factor-intensity reversal, demand conditions will determine whether the factor-price equalisation theorem can hold.", "label": 0 }, { "main_document": "of this movement may be difficult to replicate elsewhere, it does highlight the power that women can have as a collective. By accepting traditional stereotypes such as the 'loving' and 'dutiful' sister who, for example, ties a band of love around her brothers' wrists, or the 'courageous' mother, who would risk injuring herself in order to save the trees so her child will have a better life, women are 'playing' on their femininity in a way that men have been doing for generations. In 'assuming responsibility for which they had been socialized' (Kaplan, 2001: 34) women managed to humiliate their own husbands and authorities, as these men were then not fulfilling their 'traditional' role, which is of course to 'protect' their women. Therefore the main message which should be taken from the 'Chipko' movement should be this: women can shape globalisation to enhance a common good by uniting as a collective and, if need be, by playing on their femininity. The paper has aimed to investigate the proposition that gender is central to the project of globalisation. It has become clear that globalisation is in fact not only shaped by, but also sustained by, gender. 
Based on the discussion above it would not be unfair to argue that the global culture in which we now live our lives depends on the unpaid labour of women to survive. Certainly global processes are shaped and defined by actors even in ways which may be hard to measure. Thus women's work needs to be seen as a global process in its own right. There is now a growing awareness among academics that gender structures do indeed influence the direction of global production in a similar way to the more dominant global actors, i.e. white male elites. It is important that we appreciate how both the 'productive' and 'reproductive' roles of women shape the process of globalisation. Furthermore it must be remembered that globalisation itself can be as progressive as it can be destructive. Women can unite and use their gender to their advantage, and in doing so shape the global economy in a way which is beneficial to them. However it must be recognised that these advancements can be lost through the very patriarchal structures on which globalisation is founded. If we are to truly change the project of globalisation we must start at the root of the problem: the gender ideologies which manifest in the private sphere. Here is where the problems of gender begin and it is these same notions which eventually find their way into the public global arena. Arguably the key to altering the process of globalisation lies in changing the gendered structures which assign women the bulk of unpaid work, on top of any paid work, as if it were natural law. In order to achieve this, solidarity among women must be encouraged. This can be facilitated through services offered by women's groups and non-government organisations that now operate within and across national and state boundaries. Indeed the irony is that these very services are themselves a result of the", "label": 1 }, { "main_document": "fortunes garnered from plantations did not inevitably contribute to Britain's commercial empire. 
In 1770, Adam Smith emphasised that 'profits...filled the coffers of certain interest groups such as merchants and planters but did not benefit the economy as a whole' Adam Smith, Britain's dependence on slavery can be seen in the pursuit of a protectionist market, with the introduction of the Navigation Acts in the late seventeenth century, to ensure that she dominated the slave trade. These Acts declared that the products of slavery could only be carried to the motherland on British-owned, British-manned ships, and colonies could only purchase British commodities or those that had been taken there first. Britain's commercial empire depended upon the ability to re-export slave-produced products to Europe, and her shipping and carrying trades flourished as a result. Although modern historical work on shipping is somewhat indecisive, it is likely that the slave economy was the 'main driving force behind the growth of English shipping and ship building trades' The indirect impact of slavery on the provision of work for shipbuilders, merchants, customs officers and entrepreneurs is important in assessing the dependence on slavery. Underemployment characterised eighteenth-century Britain, and hence it is difficult to see how else her commercial empire would have progressed without the impact slavery had upon employment levels. Inikori, The evolution of financial institutions also constituted an important part of Britain's commercial empire. Mainstream institutions were often directly associated with slavery, because of the benefits their founders had amassed. A successful slave-trading family in Liverpool set up the Heywood Bank in 1773; tobacco lords in Glasgow established The Ship Bank and the Glasgow Arms in 1753; and insurance companies were born of the ranks of sugar refiners. 
However, 'it must not be inferred that the triangular trade was solely and entirely responsible for...economic development'. Britain's commercial empire depended upon a variety of factors, and such endogenous factors must not be underestimated. For example, Britain was highly dependent on her internal market, alongside profits from agriculture and industry, to generate capital and act as the mainspring of commercial growth. Increasing consumer demand for domestic manufactured goods was indispensable and, as middle-range incomes rose and the price of commodities fell, Britain's commercial empire benefited immensely. Furthermore taxation, by way of customs and excise duties as well as indirect taxes on luxuries, was substantial and without it Williams, Morgan, Yet this highlights that other factors were often linked to slavery, even if indirectly, and 'economic historians have disagreed over the precise interrelationship of the various economic factors involved.' Morgan, In conclusion, slavery constituted an essential quantitative and qualitative stimulus to Britain's economy as a whole; 'the colonial system was the spinal cord of the commercial capitalism of the mercantile epoch' Enslaved labour sustained Britain's commercial empire because it made possible large-scale specialised production of commodities, which drove commerce; Williams, Thomas, It is difficult to envisage another path that would have carried Britain's commercial empire to the level it attained by the end of the eighteenth century, emphasising dependence on slavery. It was slavery that 'made an important, though not decisive, impact", "label": 1 }, { "main_document": "Vinyl polymers and block copolymers of methacrylates can be synthesized using an alkyl bromide initiator and a copper(I) catalyst with different ligands via living radical polymerisation. Living polymerization is a form of addition polymerization that is free from side reactions such as termination and chain transfer. 
This can give polymers of defined architectures and molecular weights (M The number of polymer chains is limited by the number of initiator molecules. The growing chains can continue to polymerize until the monomer is exhausted. The effectiveness of this polymerization lies in producing a narrow-PDi polymer of controlled number-average molecular weight (M PDi is the polydispersity index, which is the measure of the distribution of molecular mass in a polymer sample. This allows a vast range of different polymer structures, such as block co-polymers, graft co-polymers and star polymers, to be made. The introduction of free radicals into this form of polymerization enables the synthesis of a range of polymers to be achieved while relieving the constraints of anionic polymerization. Constraints such as clean glassware, pure reagents and pure solvents are required because termination can occur by abstraction of an H+ source from the environment. The termination process can be avoided by using a suitable halogen as a \"cap\" to the active growing radical species.1,2 Matyjaszewski Many transition metals, such as nickel, palladium and iron, have been reported, but recent work showed that copper(I)-based complexes are effective at conventional free-radical polymerization, living polymerization and the investigation of ratios of co-polymerisation. Ligands of different types can be prepared by a condensation reaction of primary amines with pyridine-2-carboxaldehyde (Scheme 1). The propyl ligand and pentyl ligand were synthesized in order to investigate the effect of R-group chain length on the rate of polymerisation. A range of phenolic esters can be used as the initiator. The 2-naphthol initiator was synthesized by esterification of 2-naphthol with 2-bromobutyryl bromide in the presence of triethylamine (Scheme 3). 
Methyl methacrylate (MMA) and butyl methacrylate (BMA) were used as monomers in order to examine the effects of R-group chain length on the rate of polymerization. The C-Br bond can cleave homolytically to give two radical species; (Initiator) I Note: The chemical composition of the \"y\" repeating unit might not have an alternating pattern but a more complex or a random pattern. Block copolymers can also be synthesized by adding a second monomer. The reactivity of both monomers can be examined. MMA was used as the first monomer and benzyl methacrylate (BzMA) was used as the second monomer. The propyl ligand was synthesized and characterised using proton NMR and FT-IR. FT-IR spectra of the starting materials for the propyl ligand were collected so that the reaction could be tracked. Yield: 11.0834 g, 73%; boiling point: 94-96 FT-IR of pyridine-2-carboxaldehyde: 1708 cm-1 (Fig 11). The pentyl ligand and 2-naphthol initiator were also synthesized and characterised using FT-IR and The living radical polymerisation was carried out by reacting monomer(s) (MMA or MMA and BzMA or BMA), 2-naphthol initiator, a ligand and CuBr in toluene at 90 In order to", "label": 0 }, { "main_document": "we can make out generalities about lexical acquisition: as a whole the child uses a natural logic to get new words: first of all he 'matches a referent to the new word, then he compares it with the latter, he stores it in his memory to finally retrieve it on demand' (lecture of M. Garman). But the investigations we studied go further: they tend to prove that robust abilities to learn more or less fast exist during childhood, even if there are (two main) constraints to that mapping. One is the theory of Mutual Exclusivity: each object corresponds to one label of only one category (e.g. for the same word 'dog' a child can name an animal, a poodle or his dog 'Fido'). 
The other main constraint is known as the Principle of Contrast after Clark; it states that 'children will not learn new words for word meanings they already have', or in other terms that they generally avoid synonyms. But with the example of the video sequences, the linguists realized that in the end, both a new and a known word the child had learnt could relate to the same, probably unconscious, idea (see: malicious and bad), so it would be more like similarities than contraries... Here the power and flexibility of vocabulary are implicated; later, the child will separate those words to get real synonyms. But these theories will always be criticized because there will always be debates about child vocabulary learning among linguists, for child language development is an endless subject and children always keep on surprising us...", "label": 0 }, { "main_document": "of course, is not very efficient as this can only process one signal at one time and it takes a long time to find the right digital equivalent signal, but it gives a general picture of how one of the simplest ADCs works. The output digital signal could be transferred into a computer, which could then display and interpret the data (e.g. self-calibration, etc.) with the use of suitable software. Labview is one of the programs used in the university and it is very easy to use. 1. Connect the output of the designed torque sensor system to a data acquisition system which has an AD and DA card plugged into a PC. 2. Run Labview. 3. Calibrate the system with the program. 4. Test the measurement system by putting loads on shaft B progressively. 5. Save the result. The range the measurement system can measure depends on the diameter of the shaft and the proportional limit S The maximum torsion that can be measured is given by Resolution in an analogue instrument is limited by its noise level, but in a digital instrument the resolution is represented by its least significant bit. 
For instance, the most common 12-bit analogue-to-digital converter has an input range of V All transducers exhibit a dual sensitivity to some degree, which means that the output voltage is the result of both a primary quantity, in this case torque, and a secondary quantity, such as temperature or secondary load (e.g. axial force, bending moment). To check if the designed measurement system is insensitive to secondary quantities, it can be tested under: 1. a standard primary quantity, i.e. constant torque, and 2. conditions of different secondary quantities (e.g. different temperatures, axial forces and bending moments) to see if there is any change under different secondary conditions. The arrangement of the bridge circuit compensates for any variation caused by secondary quantities and therefore the torque sensor should prove insensitive to uniform changes, axial force and bending force. This design covers all the basic requirements needed in a torque measurement system. It is believed that this circuit can be improved by using a constant-current Wheatstone Bridge circuit, as using constant current in a Wheatstone Bridge circuit", "label": 0 }, { "main_document": "Neanderthals are known to have lived between 250 and 40 kya in Europe and South West Asia; 500 individuals in total have been identified, and 12 semi-complete skeletons have contributed greatly to the study of Neanderthal anatomy, allowing investigation of their environment, behaviour and lifestyles. The physical features of Neanderthals are relatively robust in appearance: a short, heavily muscled body with relatively short distal limbs, and distinctive cranial features in brain size, a prominent brow ridge, a projecting nose and large nasal cavities. According to the traditional view, these Neanderthal features might be the result of genetic isolation under heavy selective stress. More recently, however, it has been suggested that they may reflect a severe lifestyle and cold adaptation during the last ice age. 
Thus this essay focuses on the question of how far Neanderthal anatomy represents adaptations to cold environments, setting out the significance of the Neanderthal living environment, the climate reconstruction provided by oxygen isotope ratio analysis of deep sea sediments, and possible anatomical adaptations, together with several theories and studies. The Middle Palaeolithic period, corresponding to the later stages of the Middle and early Late Pleistocene, is known for its dramatic climatic change. The Pleistocene saw at least ten shifts between glacial and interglacial conditions. Glacial periods are defined as cold climatic episodes with widespread ice sheets and reduced sea levels. Interglacials, on the other hand, are warm climatic episodes, with little or no glacial climatic process and high sea levels. The study of oxygen isotope ratios in deep sea sediment is key for investigating past climate, because deep sea sediments preserve an accumulated record of oxygen isotopes. The skeletons of microscopic organisms, foraminifera, are mainly used. These foraminifera absorb two stable oxygen isotopes from sea water. Measuring the ratio of 18O to 16O in foraminifera reveals the relative balance of global sea water and ice sheets. High 18O indicates growing ice sheets and falling global sea levels in a glacial period; high 16O, on the other hand, represents a warm climate with rising sea levels and shrinking ice sheets. Interglacial climate fluctuated with rapid internal temperature shifts; changes could happen within an individual's lifetime. Hominin adaptation to millennia of constant climate, sea level and plant cover is an increasingly unlikely scenario. The more dominant conditions were early glacial, open landscape and steppe environments. Ice core data, moreover, indicate the rapidity of change and oscillation. Therefore the millennia-of-stability model must be rejected. 
This section is going to discuss Neanderthal anatomical adaptation, mainly the function of the projecting nose and large nasal cavities, and the large brain size of Neanderthals. The Neanderthal nose is remarkably prominent and the nasal cavity was large. However, comparing the length and breadth of Neanderthal noses with those of modern humans, the combination is unusual, because modern cold-adapted people, such as the Inuit, commonly have long, narrow noses. Thus this is considered a different type of cold adaptation. The large nasal cavities of Neanderthals likely suggest adaptation to the cold climate, providing more surface for both warming and moistening the cold, arid air before it reached the lungs, as well as preventing damage to the brain. Furthermore, the other hypothesis of", "label": 0 }, { "main_document": "Now moving on to the exciting developments in our wonderful field over the last decade. My area of interest is still functions of a real variable and Fourier's discovery that arbitrary functions can be represented in series of sines and cosines is, in my opinion, a magnificent piece of mathematics. We are living in an interesting time for mathematics and I feel our profession is really taking off. My advice to you would be to continue your work on pure mathematics but also consider applied mathematics, which the French are becoming more concerned with. Base yourself in France if you can as I feel the focus of mathematics is shifting there. Continuity is an intriguing subject at the moment. Cauchy has given us his definition although only for continuity on an interval, not at a point - something you could consider perhaps. I have a reservation about one piece of Cauchy's work however. 
Abel commented in 1826 that there were flaws in his binomial theorem and described it as \"a theorem that admits exceptions.\" Abel quotes the series Perhaps Cauchy does not think this relevant to his theorem or possibly he is only considering continuity on an interval, in which case the theorem is right. Cauchy uses this binomial theorem to prove that I advise you to have a look at this. The genius of Cauchy can be seen in not only the rigour he has brought to mathematics, but also what I think is the most significant mathematical development of recent years. Cauchy has destroyed the foundations of Lagrangian calculus. He discovered that Lagrange had used a flawed argument at the start of his account that every function admits a Taylor series expansion. Cauchy was more careful and restricted it to functions which, with their first n derivatives, are continuous within the interval [0,h]. Previously we mathematicians thought our task was to capture the fact that every function could be expanded as a Taylor series in the most rigorous way. Cauchy has shown that it is possible to define a function that does not agree with its Taylor series. He uses the example This is not identical to zero, but all terms of its Taylor series are zero. Cauchy has given us the question of how, if at all, a function agrees with a representation of it. Think about this and its possible ramifications for Fourier series. It is something I will be working on. I urge you to have a close look at Crelle's journal, the first of its kind in Germany and an example of mathematicians trying to raise the standards in that country. You will be fascinated by Abel's work on the solvability of equations by radicals. Did you know he has succeeded in showing that the general polynomial equation of degree 5 cannot be solved by radicals? 
Look at the exceptional changes in our field in recent times and enjoy this mathematical age we are living in.", "label": 1 }, { "main_document": "and both have to resend when the line is free; obviously on a high traffic network this is not efficient. In contrast, the Token Ring architecture copes well with high traffic as collisions do not occur: nodes can only transmit when they are in possession of the token, thus making it speedier than Ethernet. Token Ring is a fair network and prevents one node from hogging the network media. It doesn't allow a node that has just transmitted to transmit again in succession, and so the speed of the network is consistent at all times. Ethernet does not have a scheme like this and a node can flood the network with data and not allow others to transmit. This can be frustrating for other users as the network will appear slow. You could argue that Ethernet is more flexible than Token Ring as it can be wired as a bus or star topology, giving you options with cabling, but as prices are dropping rapidly this is no longer an issue, and the ability for a node to be added onto a bus network makes the architecture very scalable. Token Ring is stuck with the star configuration, which is standard across many architectures today, but it is not very scalable because the more nodes on the network, the less often the token is available, and so subnetting is necessary. Today 80% of LAN connections installed in universities and within organisations use Ethernet. It is preferable to Token Ring because it has advanced further in bandwidth, with Gigabit Ethernet being common these days. Token Ring has made advances into larger bandwidths but not on the same scale as Ethernet. Ethernet is very flexible and it integrates well between LAN and WAN services, which is much needed for multinational organisations. 
It is an inherently scalable architecture in the sense that as many nodes can be added as necessary, with only a NIC, a cable and minimal configuration required (unless subnetting is necessary), or just a wireless NIC for a wireless connection, which is very useful for organisations which may be growing. Token Ring is also scalable but only up to a certain point, as the ring would become too large to pass the token around efficiently. It is also proving a very affordable solution as hardware is very cheap compared to the Token Ring architecture, which needs a multistation access unit in addition to cabling and NICs. Most PCs and laptops today come with a NIC included and so Ethernet only carries the cost of the cabling. As IP-based networks can be of different sizes, the architects of the IP addressing system set up different classes of IP addresses to accommodate different sizes of networks. IP addresses exist within three distinct classes commonly known as class A, B, and C. The class in which an IP address resides is determined by the first byte of the IP address or the digits before the first dot. Class A is used for very large", "label": 1 }, { "main_document": "as well as the team morale severely, since the report made the \"backstage\" racist comments visible to everyone. The case of Porter has portrayed a \"powerful\" image of the researchers. They can place themselves \"undercover\" and invade private premises to observe; they can deceive the participants with or without intention. On the other hand, there is a \"vulnerable\" image of the social researchers available in some methodological discussions. Researchers can be under enormous constraints or threats which are imposed by the sponsors, gatekeepers or even the apparently \"harmless\" subjects. The piece of work that I am going to examine is one of these cases. 
In \"Conducting Qualitative Research on Wife Abuse: Dealing with the Issue of Anxiety\" by Sevaste Chatzifotiou (2000), the researcher examined the experience of violent abuse by conducting in-depth interviews with 53 Greek women. Chatzifotiou stated clearly that a \"feminist and context-specific\" approach was adopted as research design. The feminist approach is considered to be a \"revolution\" against the conventional value of social research (Lee-Treweek & Linkogle 2000). The emotional side of both the researcher and participants is emphasized, and the researcher's political stance is also made explicit in the hope of \"ending women's unequal position in society\" (Chatzifotiou 2000: section 3.3). In consequence, the feminist approach aims at establishing an open and active exchange between the participants and researcher. The notion of \"equality\" was emphasized repeatedly by Chatzifotiou. She attempted to minimize the gap between the (powerful) researcher and the (powerless) subjects by means of strictly following the researchers' \"code of practice\". For example, a full explanation of the research purpose was given to every subject, and the confidentiality of the collected information was also guaranteed. Sufficient preparation was made to protect the participants; unexpectedly, the researcher was also exposed to depression and anxiety during and after the research process. When interviewing the abused women, Chatzifotiou found herself trapped by a matrix of roles she was expected to play. She was at the same time a researcher, a listener, a woman, or simply a human being on whom the abused women depended for condolence and comfort. She felt stressed when she was \"listening to the women's traumatized stories... and struggling to decide as to the degree and the way to respond\" (2000: section 7.4). The traumatized feeling continued after she finished the interviews. 
The research experiences of Chatzifotiou and Porter showed a dynamic interplay of power between the researchers and their surroundings. The researchers may have ways to \"manipulate\" the settings, but they are also caught by the various constraints and danger in return. Chatzifotiou and Porter made their choices of which side to take in the diagram of value and power. As Becker argued, \"The question is not whether we (the researchers) should take sides, since we inevitably will, but rather whose side are we on?\"(1967: 239) How can we know which side the researchers are on? It depends on how \"honest\" the researchers are when they \"talk\" to the audience. The ethical issues of social research can be controversial, and sometimes \"concealment\" of the research purpose", "label": 0 }, { "main_document": "was extruded, then mirrored about its central axis, this enabled a \"split line This feature becomes of use when meshing the model. The presence of a split line means that the mesh nodes will terminate along the split line allowing results to be obtained along the beams cross section. An axis was constructed to run along the beams central axis, this would be used when applying the torsional load during analysis. As can be seen from this brief description of the CAD model geometry, it is advantageous to plan the modelling of the geometry construction so that additional features can be added that make the analysis process easier. A line drawn perpendicular to a face, which can split a single face into two faces. Split Line The material properties used for the analysis can be seen in Table 3.1, even though the software package has a built in material library, which lists all the common materials, a custom material was chosen to ensure that FE results were as close as possible to theoretical and practical results. 
The mesh type used in the analysis of the I beam section was a solid mesh of 10-node tetrahedral elements; the mesh density for the beam was kept at the default size of 13.307mm, as can be seen in the left-hand figure in Figure 3.5. Note that the presence of the split line causes the nodes to terminate against the line; the right-hand figure shows the standard mesh with mesh control added, which was achieved by selecting the split line to apply the mesh control to. As stated earlier, the default mesh was selected for the model of the I beam, and mesh control was used to increase the mesh density around the critical region being examined, in this case the stress profile around the I beam section at its mid position. Various mesh sizes were tried; the results can be seen in Table 4.1 and have been plotted in Graph 4.2. Adding mesh control to only the critical region minimises the computational time and resources; if the entire beam were meshed to the size of the mesh control, there would be many more elements and the solution would be unnecessarily lengthy. The final mesh used on the model, from which the results were obtained, was as follows: The constraints on the model were constructed to simulate the practical lab setup. Figure 3.6 depicts how the model was constrained. The left-hand end of the beam was fully constrained in all three axes to mimic a built-in end. The right-hand end of the beam was constrained only to prevent translation on the central axis of the beam. The torsional load was applied to the right-hand face at two points as shown, and was 1Nm. The results obtained from applying torsional load to the I beam section and recording the strain gauge outputs can be seen in Table 4.3. The table shows for each strain gauge location and torsional load the strain
Titration is the process of adding a known amount of a solution of known concentration to a known amount of a solution of unknown concentration. In this experiment, we use the sodium hydroxide solution to determine the total acidity of 3 vinegar samples. A small portion of the sodium hydroxide solution was placed in a clean burette until above the zero mark, the jet was filled with liquid by opening the tap, and the position of the base of the meniscus was recorded. The concentration = 40.82g/2000ml = 0.02g/ml (potassium hydrogen phthalate). 25ml of potassium hydrogen phthalate solution was placed in a 150ml flask, and 2 or 3 drops of phenolphthalein indicator were added. The flask was placed below the burette and the sodium hydroxide was run in slowly, while the flask was rotated and the tap controlled, using both hands separately. The addition was continued until a permanent faint pink colour showed in the solution; the burette was then read and the titre recorded. The above titration was repeated once more, and the results were observed and recorded. Calculate the molarity of the sodium hydroxide solution. 25ml of the vinegar 1 sample was pipetted into a 100ml flask and water was added up to the 100ml mark (which means 75ml of water was added). 25ml of the diluted sample was titrated with 0.1M sodium hydroxide solution, with 5 or 6 drops of the phenolphthalein solution used as indicator. As more NaOH is added, the red colour, which forms at the point of contact, becomes progressively harder to bleach by swirling. Note that if the solution does not return to a pink colour after swirling, then the titration has gone too far. From the burette containing the base, titrate additional base into the flask until the colour returns to pink. The base (NaOH) should be added until one drop turns the solution faint pink, and then stopped. The faint pink colour at the end-point should persist for at least 30 seconds of swirling to be accepted as genuine. 
The NaOH level shown on the burette (the liquid level) was recorded and noted. pH paper was dipped into the samples, and the colour was recorded and compared, in order to determine the pH value of the three vinegar samples. A few drops of barium chloride and silver nitrate solutions were added to separate aliquots of the diluted vinegar in the test tubes, and the results were observed and recorded (whether each result is positive or negative is shown in Table 3). Basic equation: n = C x V, where n is the number of moles, C the concentration and V the volume. First is the standardization of sodium hydroxide: 1 mole of phthalate reacts with 1 mole of sodium hydroxide, and the phthalate is 0.1M, so C(NaOH) = C(phthalate) x V(phthalate) / V(NaOH). CH3COOH likewise reacts with NaOH in a 1:1 ratio, so the same relation gives the acid concentration of the vinegar, and we do the same calculation for vinegars 2 & 3; the results are shown below: The method used to measure the total acidity of the vinegar being studied is called acid-base titration, which is an analytical chemistry technique. For an acid-base titration,
Although the article only touches on witchcraft, I would be able to use it to get a European's perspective on witchcraft and on how it is thought of by others as destructive and irrational, and therefore to get an opposing view, while at the same time gaining some history on witchcraft itself. In witch-bound Africa; an account of the primitive Kaonde tribe and their beliefs. New York: Barnes and Noble. This book looks at the views of the Kaonde tribe in Africa, examining the tribe's beliefs and traditions, and especially concentrating on the tribe's beliefs in witchcraft. Why do they feel so strongly about it, and where did these rituals come from? This article will be very useful for my essay as it is a first-hand account of the Kaonde tribe, concentrating on how they feel about witchcraft, which is an angle the other articles only touch on. Rasmussen, Susan J. (2004). Reflections on witchcraft, danger, and modernity among the Tuareg. 74:3, pp. 315-340. This essay explores the different ways witchcraft is implicated today in an African society, the Tuareg of the Republic of Niger. The essay analyses the power and dangers of witchcraft in case studies, suggesting that the ceremonies which take place are very traditional. It also looks at questions such as 'why is it so important to the Tuareg people?', which will help expand my essay. Salamone, F.A. (1980). Gbagyi witchcraft: a reconsideration of S. F. Nadel's theory of African witchcraft. This article looks into S.F. Nadel's theory of the African witchcraft of the Gbagyi people. It touches on many topics, such as some of the ceremonies which take place, some of which are more important than others, and why this is the case. It also looks at the history of witchcraft, when the first cases were recorded, and why he believes it started. He looks at why certain rituals are so important to the people of Gbagyi and why witchcraft is still so commonly practised. 
This article has strong links to my essay title and will enable me", "label": 1 }, { "main_document": "in week 4 (Fig 11) and the changing patterns of Specific Leaf Area were simultaneous to those of the other growth parameters on leaves and it was at the maximum in week 5 (Fig 12). By increasing the number, weight, and sizes of leaves, the plant was to achieve the higher yielding of photosynthesis and hence, the source assimilation before the production of reproductive organs was accumulated. The growth of leaves began to decrease after the week 5 and the plant developed into the reproductive period (Fig 7). The buds started to emerge in week 6, the flowering and fruiting were taken place. Due to the differences in the time of maturity among the individual plants, the flowering period varied (Fig 4). As mentioned, Relative Growth Rate in weeks 1 and 2 can be omitted for the analysis (Fig 8). The accurate estimation of during the weeks 3 and 4 is very difficult without any reliable data, hence, it can vary, however, it was presumed to be the highest of all periods. The growth of a plant is divided into some phases (Ross & Salisbury, 1992). The logarithm phase, when the seeds emerge and the primary growth of a plant starts, the growth rate is low which was between the weeks 0 and 2 in this experiment. During the next linear phase, the growth rate increases with time and the maximum rate can be observed which might have been at some time between the weeks 2 and 4. The final reproductive development is expressed as the senescence phase and there is a decline in the growth rate. In this experiment, it was slightly bigger in week 8 than in week 7 which was estimated to be because of the increase in the weight of pods. The patterns of Net Assimilation Rate shifted from the assimilation to sources, i.e. leaves into sinks, i.e. pods by week 7 (Fig 9). 
The plants grown at 14°C: although those plants successfully reached flowering, and the number of flowers was the greatest (Fig 2.5), the growth was very slow and the yield would be low. On the other hand, for those at 30°C, with the high temperature the plants reached maturity quickly; however, the earlier production of fruits meant a poor harvest. The pod weight was very low and the Harvest Index was low at 40% (Fig 2.2 & 2.10). Despite the fact that those plants yielded more leaves and pods per plant, the sizes of the leaves, i.e. the area, and the productivity were much smaller compared to the others (Fig 2.4). This can also be estimated from the small figures of Specific Leaf Area and Leaf Area Ratio (Fig 2.7 & 2.9). For the plants at 26°C, however, the pod weight was smaller than for those at 18 and 22°C. The growth of plants in terms of the level of maturity was very high for those at 18 and 22°C. From this experiment, given that the plants at 26°C already showed reduced pod weight, it was concluded that, for the vegetative growth of this plant, the optimum temperature was about 20°C
It has also spread demand into an unbalanced arrangement of money and time, thus dividing society into four groups: those who have an abundance of time and economic resources, reduced time and high disposable income, meagre economic resources and ample time, and finally, reduced money and time (Martin & Mason, 1998), therefore differentiating patterns of consumption and emphasizing the importance of lifestyles. The McDonaldization theory holds that consumers are led to think they are free to make their own decisions when in fact their decisions are moulded unconsciously (Ritzer, 2000). Roberts (2004) disagrees with Ritzer, stating that McDonaldization is a result of profit and cannot influence consumers' rational thinking. He argues that commerce will endeavour to repeat the same formula for profit; however, this will not be similar to the food industry every time. Finally, he claims that this process of standardisation could simply never be attached to consumer-culture routines. He goes further to state that commerce provides new experiences, diversity and stimulus to consumers, therefore broadening markets to their benefit; that it gives them the opportunity to create individualized lifestyles; and that there are always substitutes to the commercial sector, such as State-provided forms of leisure, though people choose to be attached to consumer culture. Coffee shops can have an important role in suppressing some constraints felt by ethnic women, and Haywood et al (1995) note that minorities, such as Asian and Afro-Caribbean women, have at some stage felt oppression towards their leisure choices. Much of this (Barrett & McIntosh, 1985) was on behalf of white feminists and based on pre-conceived opinions. This has obviously changed to some extent; however, there is a need to further understand different cultures and ethnic groups (Deem, 1986). 
Clarke & Critcher (1985), state that many minorities, although different, have similarities with regards to the significance attributed to religion and family. They add that due to differences between western cultures and ethnic groups, there was a need for minorities to adopt certain leisure spaces and adapt them to their needs, thus, cafes and cinemas were purchased and became more widespread. In many ways, these changes allied to the development of commerce and the renovation of conventional recreation over the centuries, have provided safe environments for consumption, therefore state regulations have been eased and going out is now a widespread activity (Roberts, 2000). Hot beverages have become trendy over the", "label": 0 }, { "main_document": "more specific titles for niche areas of the market, for example Out of these, Most music magazines are monthly publications, except for published by Emap. It started in 1952, making it the oldest existing music magazine. Kerrang! is dedicated to \"rock in all its forms\" and has been since the first issue came out in June 1981. According to the latest Audit Bureau of Circulation (ABC) figures published in February 2006, is only 600 copies behind ). The core target audience for is 15-24 year olds, and there is a 60:40 male:female ratio (NRS). In comparison the mean age of The median age of This is reflected by the language used in the magazines; uses more colloquial language to appeal to a younger audience. On the other hand, Yet despite sales being in decline, music magazines still hold their influence in the music scene. Many upcoming artists and bands rely on music magazines as a medium to reach potential fans and increase their recognition. For example 'The Strokes', 'Kaiser Chiefs' and 'Arctic Monkeys' all shot to fame after appearing on the front of titles such as and Many titles have attempted to turn modern technology from a threat into an opportunity. Magazines such as and For example Kerrang! 
is also a digital radio station playing many sub-genres of rock.", "label": 1 }, { "main_document": "to achieve advantage. The strategy clock is the competitive strategy options for the SBU. Most company will attain competitive advantage through price and differentiation. Tesco is in route 3 of the strategic clock which is a hybrid strategy. It seeks simultaneously to achieve differentiation and a price lower than that of competitors. Differentiates itself to provide large range of goods and services: grocery, non-food including electrical, stationary, clothing etc. and retailing services such as financial services, insurance and telecommunications packages. Has various own brands, from Value to Finest and lifestyle ranges like Organic, Free From and Healthy Living to enable customers to buy products to compliment their lifestyle (Source: Tesco plc). Continually innovating and investing in new lines to increase choice for customers (Source: Tesco plc). Compares price with other competitors by price check (Source: Tesco.com) and posters in store to show that it offers a lower price. Tesco always need to understand competitiors, segments operating/target in and what customers in that segment value in order to succeed. Some rivals are in the same position while some are in different position on the strategy clock as Tesco. Asda is also in route 3 as it provides large range of food, non-food products and retail services such as financial services and travel. So it is a great competitor of Tesco. Sainsbury's is in route 5 on the strategy clock. It also differentiates itself in providing both food, non-food and retail services such as Sainsbury's property and Sainsbury's bank, but of slightly higher price (Source: J Sainsbury plc). Morrisons is in route 2 on the strategy clock. Its strategy is on selling predominantly food, at low prices, and doing so only from large stores (Source: Morrisons). Tesco has changed position on the strategy clock over time. 
Tesco started in route 1 in the 1920s-60s, as it aimed to sell a large volume of products at a very low price. Tesco only sold grocery products in the beginning. It moved up to route 2 in the 1970s-80s, as it opened its first petrol stations for a little more differentiation and introduced a price-cutting campaign. From the 1990s to the present, Tesco has moved up to route 3 to provide a wide range of products and services. (Source: Tesco plc) Tesco's competitors are Asda, Sainsbury's, Morrisons, Marks and Spencer and Waitrose. From the perceptual map, Tesco and Asda both offer a wide range of products, including groceries and retail services, at a very low price. Sainsbury's has a slightly smaller product range and its products are higher in price. Morrisons offers a low price but only sells food. Marks & Spencer only sells own-brand food and clothing at a high price. Waitrose's products are at a high price and it mainly sells food. Supermarkets can differentiate themselves based on marketing, products and/or resources. Tesco differentiates itself based on all of marketing, products and competences/resources. Marketing based: An organised online order system through Tesco.com. Corporate social responsibility by reducing emissions from transport distribution and refrigerants used (Source: Tesco plc). Introducing GI labelling on food products. Product based: Offers various own-brand products---Value, Healthy Living, Finest, Organic, 'Free From' and Fairtrade. Competence/resource based: It
Work was not going to be rushed off at the last minute; it was going to be handed in, in advance. And fundamentally I learnt that if you want something done properly, do it yourself. But I was also determined to find out how to get other people to do it as near perfectly as possible. The first thing that was really set up in term two was that there were going to be two weekly meetings during which we would discuss progress and develop new ideas. At first we used both of them to the full, with both sometimes extending to two hours each; as we got further in and our individual tasks took up more of our time, the meetings became briefer and more just progress reporting. Either way - they were fundamental to our progress this term. On occasions though it was necessary to rearrange the meeting times, which initially led to confusion when communication was largely done by word of mouth and e-mail. I learnt that some people don't actually check their e-mail the moment they wake up, go to bed and every five minutes in-between, and I had to adapt to use telephones - this made things easier. Everyone's numbers were quickly gathered and shared out - another thing which will in future be done on day one. With this done it was possible to negotiate to rearrange meeting times so that people could get vital bits of work done before the meeting, get prepared and ensure everyone turned up. It still didn't happen all the time though, and that's where the second big problem turned up. I didn't want to have to just chop people's grades down without giving them a chance to discuss it with me; I had felt guilty enough doing this the last time, and at that point the amount of work done by the team was negligible. A better system needed to be found. I decided that establishing a few disciplinary rules would be the fairest way to do it - if people consistently missed meetings or didn't hand in work then they would be moved down a grade, then it would all be discussed at the end. 
This took that weight off my back - I have no intention of making enemies because someone had a different idea to me as to the amount of work needing doing. Another similar problem that I encountered was how to tell someone that I didn't like their idea and that it wasn't going to get used. The necessity to do this occurred on
As Giegerich (1992:145) argues \"Expressed in terms of syllable structure, the generalisation would be that a syllable has minimally two X-positions in the rhyme.\" Thus, a stressed syllable would be illustrated like this according to the onset-rhyme theory (Appendix 3). Although Clements and Keyser's model has the advantage of being simpler than the complex onset-rhyme theory, a hierarchical rather than a linear order of structure is more widely accepted. One suggestion is that Roach could have elaborated on how long vowels and diphthongs are illustrated in terms of the onset-rhyme theory. As we supported earlier, this is perhaps the soundest argument to support that the syllable should be seen as a sequence of an onset and a rhyme. Roach (2000) does not mention anything about what happens in cases where we have a long vowel and how this is depicted in the onset-rhyme structure. A second suggestion that could be made is concerned with the sonority theory. Although Roach (2000:70) defines the syllable as a part, the centre of which does not prevent the passage of the air and sounds louder than the rest, he does not mention anything about the sonority theory. The sonority theory holds, as Giegerich explains (1992:132), that \"The pulses of the air stream correspond to peaks in sonority.\" and \"The sonority of a sound is its relative loudness compared to other sounds.\" Consequently, the distribution of segments in a syllable is not random but follows the pattern of the sonority hierarchy with the centre of the syllable being more sonorous, that is", "label": 0 }, { "main_document": "Medical Law Review, 12, Spring, pp. 14 - 39 ROGER BROWNSWORD, REGULATING HUMAN GENETICS: NEW DILEMMAS FOR A NEW MILLENNIUM, Medical Law Review, 12, Spring, pp. 
14 - 39 Cloning is expressly prohibited under S3 (3) (d) HFE Act 1990 S1 HFE Act 1990 Wilmut, sourced from Aurora Plomer, Beyond the HFEA 1990 - The Regulation of Stem cell Research in the UK, Medical Law Review, 10 Summer 2002, pp132-164, Donaldson Report, 2001 Aurora Plomer, Beyond the HFEA 1990 - The Regulation of Stem cell Research in the UK, Medical Law Review, 10 Summer 2002, pp132-164, Human Reproductive Cloning Act, 2001 As per Crane J; R (on the application of Quintavalle) v Secretary of State for Health (2001) 4 All E.R. 1013 SAMANTHA HALLIDAY AND DEBORAH LYNN STEINBERG, THE REGULATED GENE: NEW LEGAL DILEMMAS, Medical Law Review, 12, Spring, pp. 2 - 13 The forced interpretation used in The The HFEA permitted this, stating that where \"PGD is already undertaken...the use of tissue typing to save the life of a sibling can be justified\", CORE R ex parte Quintavelle v HFEA Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol. 6, pp.163-182, AB Academic Publishers, Beta Thalassaemia Pre-implantation Genetic Diagnosis Human Leukocyte Antigen Tissue Typing HFEA Statement: Acronym: Comment On Reproductive Ethics Judicial Review CORE, sourced from R ex parte Quintavelle v HFEA However, Kay's In an attempt to justify/secure PGD, the court considered whether under the \"restrictive\" Kay asserted that under Schedule-Two However, the Court of Appeal's judgement overcame these difficulties by using \"an approach to statutory interpretation that was not just purposive but creative\". Consequently, the court Clearly considerable uncertainty exists whether it is licensable; an extremely undesirable position. Kay LJ Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol.
6, pp.163-182, AB Academic Publishers, Treatment service under S11(1) Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol. 6, pp.163-182, AB Academic Publishers, Under S HFE Act 1990 S2(1) HFE Act 1990 Schedule Two, HFE Act 1990 Schedule 2, HFE Act 1990 Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol. 6, pp.163-182, AB Academic Publishers, Pepper v Hart (1993) 1 All ER42 CA Court of Appeal Beverley Mulvenna, Pre-Implantation genetic diagnosis, Tissues typing and beyond: the legal implications of the Hashmi case, Medical Law International 2004, Vol. 6, pp.163-182, AB Academic Publishers, The Court of Appeal's responses to the HFEA's arguments for HLA were \"less than satisfactory\". In agreement, Schiemann This \"misses the point\": Fortunately, Mance Otherwise, where PGD had been permitted
The calculated value of 't' was compared with the tabulated value of 't' for n1 + n2 - 2 degrees of freedom. If the calculated value is greater than the tabulated value, then the two means are significantly different from each other at the 95% probability level. In order to work out an appropriate sample size for this study, a preliminary set of 10 samples from each group was first recorded. From these, the expected sample means were 61.4mm and 57.5mm, and the expected standard deviation was 4.3. The 'Win Episcope' program suggested that, at a 95% confidence level and 80% power, for a two-tailed test on independent samples, the best sample size was 22 observations per group. The degrees of freedom of the Student's t-test were therefore 42. The critical t value at 42 degrees of freedom for 95% probability is 2.02. After 12 more samples were taken for each group, the male sample mean was 60.7mm, while the female sample mean was 58.6mm; the difference between the means was therefore 2.1mm. The 't' value was calculated as 1.91 from the equation shown below: The calculated 't' value was smaller than the critical 't' value. Therefore, at 42 degrees of freedom and the 95% confidence level, there is no significant difference in the length of the little finger between male and female students, and the null hypothesis was not rejected. Furthermore, the standard error is the standard deviation of the means of the samples; here, the standard error was 0.62. Due to limited resources and time, only 10 samples of each group had been measured to determine the expected means and expected standard deviation. This limited preliminary data may have given poor estimates of the population mean and variance. Consequently, the calculated sample size was affected, raising the random error of the hypothesis test. Since the ruler used in this test was graduated only in millimetres, this probably introduced further random error.
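The comparison above can be sketched in a few lines of Python. Since the essay's own equation is not reproduced, the pooled two-sample formula below is an assumption; it uses the reported means (60.7mm and 58.6mm), the preliminary standard deviation of 4.3 taken as common to both groups, and 22 observations per group. With these inputs the statistic comes out near 1.6 rather than the reported 1.91 (the underlying raw data are not given), but the decision against the critical value of 2.02 at 42 degrees of freedom is the same: fail to reject the null hypothesis.

```python
import math

def pooled_t(mean1, mean2, sd, n1, n2):
    """Two-sample t statistic, assuming a common (pooled) standard deviation sd."""
    se_diff = sd * math.sqrt(1.0 / n1 + 1.0 / n2)  # standard error of the difference
    return (mean1 - mean2) / se_diff

# Values reported in the text; sd = 4.3 assumed common to both groups
t = pooled_t(60.7, 58.6, 4.3, 22, 22)
df = 22 + 22 - 2      # 42 degrees of freedom
t_crit = 2.02         # tabulated two-tailed 95% value at df = 42

reject = abs(t) > t_crit  # False here: the difference is not significant
```

In practice a library routine (e.g. a two-sample t-test with the raw measurements) would also return the p-value, but the hand calculation above mirrors the compare-against-the-table procedure the essay describes.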
Since the confidence level was set at 95%, the type 1 error (alpha) was 5%: the probability of declaring a difference to be statistically significant when no real difference exists in the population. The type 2 error (beta) is the probability of declaring a difference to be non-significant when a real difference does exist in the population. In this test, since power is 1 - beta, there was a 20% type 2 error. The sampling size would be tremendous and very
This would mean that even in countries where several levels of collective bargaining take place with equal importance (the already mentioned sector and company levels in France, the latter strongly boosted by the Auroux (1982) and Aubry (1998/99) laws), the sector level maintains its considerable position, as do the actors engaged in the process. The participation of peak employers' associations in various statutory bodies is another significant role of theirs in industrial relations in Germany, Sweden and France. The role that Sisson (1991) names 'pressure group' complements the strength of employers' organisations (primarily peak ones), as it allows them to have a say in major employment issues, such as social security (in France and Germany), bargaining/industrial relations (Germany) or mediation initiatives and occupational accidents/illnesses cases (Sweden). Although the EIRO report on employers' organisations notes two cases when MEDEF (2001) and SAF (1990/92) deliberately withdrew from some representative bodies to avoid 'excessive corporatism', it still maintains that 'little has changed in practice since many NEPA officials have continued to serve on governing bodies, but now as personal representatives' (EIRO 2004:14). In addition, Vatta (1999) advances the argument of the importance of the participation of peak organisations in statutory bodies by drawing attention to its overall significance for their access to decision-making processes in these countries. She points out that this access can be 'demonstrated by (...) their presence in specialized bodies or agencies (bi- or tripartite) and their involvement in the implementation of policies' (1999:247). Another point the EU Industrial Relations in Europe 2004 Report makes in this regard is that in Sweden representative tasks 'are now shared with sectoral (...)
employer federations' (2004:53), a fact which concerns the level of actors involved in representation but does not diminish the significance of that role, quite on
Others, however, see "the apparent universal ideological domination of nationalism [as an] optical illusion". Will this ideology remain influential throughout the 21st century? By investigating what factors have fostered nationalism in the past, and by assessing to what extent nationalism's flaws are today undermining its bases, one may ascertain whether "the flesh and blood of populations [will] still [be] swayed by appeals to their national character". Gellner, Ernest. Ithaca: Cornell University Press, 1983. Pg. 3. Heywood, Andrew. 3rd Edition. Palgrave, 2004. Pg. 157. Anderson, Benedict. Verso. New York, 1991. Pg. 3. Hobsbawm, Eric. Nations and Nationalism Since 1780. Cambridge University Press, 1990. Pg. 114. Axford, Barrie, Browning, Gary, Huggins, Richard, & Rosamond, Ben. Pg. 246. When nationalism first emerged in the 18th century Since then, nationalism has emphasized self-determination, holding that "the political and national unit should be congruent". What, though, constitutes a nation? Many claim that a shared language particularly fosters national belonging. Certain groups, such as Canada's Francophones in Quebec, are indeed highly sensitive to the dissolution of their language; nevertheless, language alone cannot define a nation. Different languages may coexist within a country - in Switzerland French, Italian, and German are all official languages - without damaging its sense of nationalism. Furthermore, for Eric Hobsbawm, "national languages are almost always semi-artificial constructs", as, far from being a national culture's primary foundation, a common language is obtained only after the painstaking selection and standardisation of particular dialects. In France in 1789, only 20% of the population spoke French, and when Italy unified in 1860, only 2.5% spoke Italian. Heywood, Andrew. 3rd Edition. Palgrave, 2004. Pg. 156. Gellner, Ernest. Ithaca: Cornell University Press, 1983. Pg. 4. Heywood, Andrew. 3rd Edition. Palgrave, 2004. Pg. 160. Hobsbawm, Eric.
Nations and Nationalism Since 1780. Cambridge University Press, 1990. Pg. 118. Hobsbawm, Eric. Nations and Nationalism Since 1780. Cambridge University Press, 1990. Pg. 119. Indeed, other criteria enjoy equal importance in fostering nationalism. Religion can, for instance, supersede the influence of a common language (as Ireland's Catholic-Protestant divide illustrates) or the obstacles of disparate geography (Islam, notably, forges national identity in both North Africa and the Middle East). However, even religion cannot justify all nationalist sentiments - numerous religions often peacefully coexist within countries, as in Poland where Orthodox, Uniate and Roman Catholic beliefs intermingle. Ethnicity, likewise, is only one component of nationalism: whilst it can heighten nationalistic divisions by categorically distinguishing 'insiders' from 'outsiders', ethnic groups - such as the
Thomas Hobbes, All other motives of action, that is, egoistic ones, are governed by the Will. The Will for Schopenhauer is not an individual will; it is not simply doing 'what one wills'. The Will is unified and collective, the blind and goalless force behind all of mankind's striving. It is the grand force of nature and of life, a force that we might liken to our own ideas of evolution or 'mother nature'. There is no need to question why the Will exists or for what reason it harrows and beleaguers us into action, for this is beyond our realm of knowledge; indeed it has no reason. All that we can examine are its expressions and effects, and that includes our motives of action. If a motive of action is a mere expression of the Will, it is not genuinely moral, since the Will cares not for morality. Schopenhauer was in a sense a precursor to Darwin, anticipating his more scientific theories of evolution and survival instincts. Schopenhauer's Indeed his theory is pure metaphysics and will always stand out as something that cannot be reduced either to science or religion. That said, his theories provide a great complement to science. Where Darwin tells us what, Schopenhauer has already told us why, and this is no better portrayed than in his theory of love: We are attracted to strong, healthy individuals, we fall in love, and then we reproduce, hopefully producing strong and healthy children; this is natural selection. But why the 'love' bit? Schopenhauer tells us that we are of such a sentient nature that we realise how abhorrent and disgusting the act of reproduction is, and how much suffering life itself causes, such that if we were of sane mind no-one would reproduce and the entire species would fail. Therefore the Will provides us with a mechanism that equates love with pleasure and something to be desired. In essence the Will tricks us into thinking that what we most desire in the world is to fall in love and reproduce.
Of course when we are", "label": 1 }, { "main_document": "The experience of the mathematical and the dynamic sublimes are both occasioned by the revelation of the inferiority of the imagination to that of reason. Kant believes that in considering how our faculties deal with instances of the massively extended and the massively powerful we can see how reason can exceed the capabilities of imagination; the ideas of reason and the force of our moral freedom exceed what can be presented in nature. This excessive capability of reason is linked to our moral nature. In the case of the mathematically sublime the rule that reason subjects our thinking to is analogous to the moral law and hence strengthens the prioritising of the common source of both - reason. In the case of the dynamic sublime, the link to morality is direct. It is our free moral choice which the experience of the massively powerful draws our attention to. In section 25 Kant considers how we measure extension. He distinguishes between the mathematical and the aesthetic methods. The question is how we can gain a foundation for our measurements. How do we get an 'absolute' concept of magnitude - how do I get a concept of magnitude that isn't related to other measures in an infinite regress? The problem arises if we try to define the length of a centimetre. We can do so by relating it to other measures: for example, it is one hundredth of a metre, or ten times the length of a millimetre. But then, of course, the question arises as to the definition of this further measure, and any used to answer that question. Kant's solution is to resort to the aesthetic method: we measure an object in comparison to some other constituent of experience. I measure the pyramids in terms of their building blocks, for example, or I measure my sister in comparison to my brother. We can thereby come to some concrete end to our search for definitions which allows us to use mathematical measures practically. 
If we were not able to relate the length of 1cm to some such experience as 'the length of a house fly' we would not be able to use the measure in any but pure mathematical contexts. What is this intended to show? Kant's concern is with what can be experienced sensibly. By considering the aesthetic method we can see the limits of the measurement we can carry out in sensible experience. The aesthetic measurement of magnitude proceeds in two stages Kant calls 'apprehension' and 'comprehension'. The former amounts to the synthesising of parts. In the case of measuring my brother I need to synthesise the part of my brother equal to the height of my sister and the remainder. In the case of measuring a mansion I may have to synthesise twelve parts the size of the car sitting in front of it. 'Comprehension' refers to that act of the mind that grasps the totality. It is important to recognize that these are not temporally distinct moments. Comprehension refers to the holding together of all the synthesised parts, apprehension
If such will is lacking, states can choose to remain outside the court's jurisdiction and will be under no obligation to co-operate with it; thus, a severe restriction on the jurisdiction of the court exists. The choice of the USA to remain absent is particularly problematic. As the world's only remaining superpower, and " The failure of the League of Nations was largely attributable to the USA's absence, and the UN has often been paralysed by a lack of US co-operation. The ICC therefore bears the risk of being similarly limited. Ibid. p.1 " In similar vein to the ICTs, the difficulties of enforcement will also no doubt severely limit the efficacy of the ICC. Analogous to the statutes of the ad hoc tribunals, the Rome Statute obligates States to co-operate with the ICC but provides no mechanisms of enforcement to see that they do. Furthermore, one of the principal features of the ICC is that it shall be "complementary" It follows that before pronouncing on a case, the ICC must first establish that the State that This may clearly act as a disincentive for the much-needed co-operation from that State, and thus creates a major stumbling block with which the Court will have to come to grips. Again, when the gathering of evidence and the arresting of the accused is so dependent on the goodwill of states, the real impotence of international criminal justice can be demonstrated Article 1, Rome Statute of the ICC Article 17, Rome Statute of the ICC As Penrose notes: Considering these developments in international criminal law, it is clear that governments are no longer guaranteed to behave with impunity. The legacy of individual accountability established at Nuremberg has been furthered and refined by the ICTY and ICTR, and will no doubt continue with the future operation of the ICC.
The tribunals have demonstrated that individuals from even the highest levels of political and military infrastructure can be brought to justice before international courts, and the creation of the ICC, with its more global reach, serves as a formidable warning to potential belligerents, epitomising the international community's intolerance of genocide, war crimes and crimes against humanity. However, what is also clear is that international criminal justice remains escapable. While the will of the international community to
Test: Have all the requirements of the program been met? Is the build reliable and of a high quality? Have all test cases been executed? Have test results been analysed and documented? Deployment: Has a user manual been developed? Project Management: Have the project and iteration plans been updated as new estimates are available? Having previously defined milestones for phases (in the project plan), iteration milestones can be defined. Effort will be roughly partitioned as follows: Having set out the milestones and effort for each discipline, a precise work breakdown structure was created, based on the Cardozo and de Villiers guide. The project team was divided as follows: The Cardozo and de Villiers guidelines for consolidating planning suggest using the following questions to check that planning is complete: Are the deliverables consistent with the iteration objectives? Deliverables for each iteration are consistent with the iteration objectives. Are the iteration evaluation criteria consistent with the iteration objectives? Iteration evaluation criteria are based on iteration objectives and so are consistent. Are the milestones consistent with the deliverables? Phase and iteration milestones are consistent with phase and iteration deliverables. Are the activities consistent with the deliverables? Many activities have been specified to ensure that deliverables are created, therefore activities and deliverables are consistent with each other. Have all significant differences between top-down and bottom-up estimates of effort been resolved? As a formal method of estimating effort was not employed, only one estimate was used. This estimate will be updated as more data becomes available as the project progresses. Does the team constitution reflect the required amount of effort per discipline? The number of team members assigned to each task accurately reflects the amount of effort estimated for each discipline. 
Therefore, as each planning objective has been achieved, the initial plan is complete. However, as effort and schedule estimates improve, the plan will need to be updated and adjusted accordingly.
Ibid., p.107 Unlike the leaders of the Aztec army, bound by their ritualised conduct of war, Cortés In short, the religion of the Aztecs, more likely to inspire resignation to death than to glory, was a poor match for Christianity. For the Spanish, religion offered motivation, justification and a giver of confidence in the knowledge that God was on their side. For the Aztecs, on the other hand, the requirement for ritualised practices, and their religion's emphasis on omens and inevitability, placed them in a position from which it was much harder to defend themselves. A good illustration of this can be found in Elliott, 'The Spanish Conquest and the settlement of America', p.182 Leon-Portilla, In addition one must pause to acknowledge the importance of disease to the fall of Tenochtitlan. During the period in which Cortés Of course, this did affect both sides in terms of men available to fight, as many of Cortés However the Aztecs, who relied on sheer manpower to overwhelm the technology of the Spanish, felt its effect far more keenly. The massive effect smallpox had on the Aztecs is clear from the way in which they subsequently counted the years from it. This bacteriological warfare, albeit the responsibility of the Spanish, was another G One's conclusions, then, about the role of Cortés Although common soldiers undeniably carried out the conquest, it is perhaps unlikely that they could have done so without the presence of Cortés Iglesia, Ibid., p.52 Cortés His talent for great speeches, filled with words of honour and religious commitment, not only saved him the loss of potential deserters at many points throughout the campaign, but also roused confidence in his leadership. Many of the ideas Cortés Yet it was Cortés Thus, to give an example, while the Spanish must be viewed as forever indebted to their Indian allies, it was Cortés As a leader, his determination never waned, even after his troops' expulsion from Mexico, and all negotiations were carried out by him.
Cortés His ability to adapt to whatever circumstances lay before him was perhaps his most commanding talent and saved the conquest from termination and disaster at many points, such as after the Noche Triste or on the arrival of Narvaez. G It seems likely that
Willcocks and Plant (2001) take another approach. They examined 58 major bricks-and-mortar companies employing a full-scale deployment of ebusiness on the Internet. They found that though most of the companies started with a strategy based on the idea of technology leadership, they then followed distinctive routes, migrating through interim stages towards a market strategy that is genuinely sustainable and profitable. Willcocks and Plant identify four crucial strategic quadrants in their framework: technology, brand, service and market. They argue that companies can follow either of two distinct paths to fully reap the advantages of ebusiness (Figure 2). Willcocks and Plant (2001) suggest that firms initially take the path of creating or translating their original brands to the ebusiness context, either by reinforcing / repositioning their existing brand on the web or by creating a new brand / copying rivals' success in the market. The development of brand identity can be very expensive and can generate huge problems for companies that fail to deliver on the promises their brands represent. Levi Strauss closed its online operations due to its inexperience at selling on the Internet and the channel conflicts (Webb, 2002) with its retail partners (Willcocks and Plant, 2001). Firms that follow the service improvement initiatives develop an almost obsessive focus on customers and information (Willcocks and Plant, 2001). Willcocks and Plant argue that a focus on the service strategy tends to be more effective than one solely on brand management. Indeed, the brand leaders also adopted a range of initiatives in the service quadrant in order to enhance their brand identity (Table
For her, the source of morality is only that community whose boundaries are measured "by the sun". Recognising and respecting humanity is the first principle of what she calls the cosmopolitan education. Only by giving our allegiance to humanity (based on justice and reason) can genuine political deliberation among people take place. However, she mentions that "Becoming a citizen of the world is often a lonely business (...) a kind of exile - from the comfort of local truths, from the warm, nesting feeling of patriotism (...)". Seneca, (Boston : Beacon Press, 2002), p. 7. Nussbaum, p. 15 In reply to Nussbaum's theory, Putnam mentions that we must not choose between patriotism and universal reason because "Tradition without reason is blind; reason without tradition is empty". The weaker will be safer if standing on principles of group loyalty and equality. The rights of minorities and ethnic groups should be respected and protected in a multicultural society. Hilary Putnam, "Must we choose between Patriotism and Universal Reason", in Nussbaum, p. 94. Immanuel Wallerstein, "Neither Patriotism nor Cosmopolitanism", in Nussbaum, See Will Kymlicka, More recent accounts of cosmopolitanism depict it as an idealist (but not unrealistic) multilateralist theory, concerned with human equality as well as the celebration of difference, and above all, human rights as cosmopolitan rights prevailing over state sovereignty. Without our knowledge or willingness, our lives become part of other worlds, other cultures and other global risks. Nowadays we all seem to reach the status of minorities in this way, and we all seem to endure this kind of cosmopolitanism, which strengthens the ties people have in a community of humanity.
Mary Kaldor, \"American Power: from 'compellance' to cosmopolitanism\", Ulrich Beck, \"Cosmopolitical Realism: on the distinction between cosmopolitanism in philosophy and the social sciences\", The above chapter has critically examined communitarianism and cosmopolitanism as the main strands, which have offered solid argumentation of pluralism or universalism in domestic and international ethics. The following section will point out to the overlapping nature of the two responses to the dilemma of moral pluralism and universalism. The section will point out to the main overlapping and complementary points or ideas of the two main IR responses to the dilemma of moral pluralism and universalism. It argues for the necessity of both in overcoming the either/or deadlock and in furthering an emancipatory theory and practice in international relations. The chapter will also point out to the necessity of a plural and complex universalism required by the existence of transnational harm or crimes against humanity. While the old-world order was much more culturally homogeneous within the European system, the present world society is much more complicated in terms of its normative basis since it has to offer the same consideration for many other cultures than the Western one. Moreover, the international society as a 'practical association' is appealing. The practices of the international society are authoritative on all states, because they are not purposeful but out of a natural and general desire for", "label": 0 }, { "main_document": "There has been much discussion on whether Hobbes is the first man of modern political philosophy or not, but it is sure that Hobbes stands across the dividing line between the old and the burgeoning new world. 
The influence of both is intertwined in Hobbes's works and shown especially in Hobbes's self-contradictory way of writing, which leads to different interpretations of Hobbes: he is liberal or authoritarian, anti-social or humanistic, a scientist or a moralist, an individualist or a collectivist. Hobbes is just Hobbes, who defies any attempt to categorize him under one small label. The tension in Hobbes's system of theory also gains him much attention, both from his contemporaries and from our own time. We might say that 'the state of nature' and its antithesis to civil society is the most original invention of Hobbes. His description of the state of nature, humanity's condition in predicament, which unavoidably calls for an omnipotent authority of government, invites criticism of both an emotional and an intellectual kind. (Oakeshott, 1975: 55) This essay sets out from Locke's and Rousseau's criticism of Hobbes's explanation of the transition from the state of nature to civil society. The distinctions and consistencies between the three are discussed in the first section. In the second section, the following questions are examined: Is Hobbes's state of nature a historical account, some anthropological study, or something else? What is its implied meaning? What role does it play in Hobbes's theory? By answering these questions, Hobbes's break with the tradition and opening-up of a new epoch is clearly seen. We will discuss this in the third section. Throughout the essay, the author wants to point out that Hobbes's state of nature is neither a historical account of the origin of political society nor a pure thinking game of a philosopher. His state of nature is the very portrait of the human situation of the Civil War, and the design of Leviathan is for the restoration of laws and a return to an ordered civil society. In histories of political philosophy, the names of Locke and Rousseau usually appear right after Hobbes's. 
This arrangement surely follows a chronological order, but it also implies a close intellectual relation between the three. Locke and Rousseau both strictly criticized Hobbes's assumptions about the state of nature. Both of them, in fact, tacitly agreed with Hobbes's methodology and main themes, but presented a new story in a modified but similar way. They both developed their own theories out of their criticism of Hobbes and in fact inherited a lot from him. Let us compare them separately and take a close look. Locke's Though he did not mention the name of Hobbes, he borrowed a lot from Hobbes. His central theme is also the transition from the state of nature to civil society, but Locke's version is less dramatic. The state of nature, in Locke's eyes, is 'a State of perfect Freedom', 'Equality' and 'Liberty'. People are free 'to order their Actions, and dispose of their Possessions, and Persons as they think
The large alkyl ammonium ions would undergo ion pairing with any halides in the solution, therefore greatly affecting the conductivity (Fig. 6 and 8). Regarding the validity of Walden's rule, it can be concluded that at about 30% sucrose solution the rule starts to break down and is no longer obeyed. Results showed that the conductance of ions depends on many factors: the concentration of salt, the viscosity, the ionic radius, the mechanism of transport and the size of the solvation This affects the relaxation effect, the electrophoretic effect, the extent of ion pairing, the viscous drag, the retarding force and the electrostatic force. All these factors can alter the mobility of ions, which is the key to how conductive the solution is. However, there were several sources of error in this investigation. The viscosity of the sucrose solutions and the amount of salt added were only approximate. The solubility for some salts such as NMe The temperature of the solutions deviated slightly, which could influence the viscosities of the sucrose solution. As the results were gathered by different groups, each using different apparatus, there could be calibration errors or human errors. The investigation could be improved by each group using the same viscosities so that comparison is easier. The solubility of the salts could be checked beforehand to ensure that the masses of salt added would dissolve in solution. The temperature of the solutions could be kept constant by using a water bath so that viscosity errors are minimised. 
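The breakdown of Walden's rule described above can be illustrated with a short script. This is only a sketch: the viscosity and conductivity values below are invented placeholders, not the measured data from this experiment, and the 10% tolerance is an arbitrary choice.

```python
# Hedged sketch: checking Walden's rule (Lambda0 * eta ~ constant) across
# solutions of increasing viscosity. All numbers are illustrative
# placeholders, not measured values from this investigation.

viscosities = [1.0, 1.5, 2.5, 3.2]                    # relative viscosity eta
limiting_conductivities = [150.0, 100.0, 60.0, 30.0]  # Lambda0, S cm^2 mol^-1

products = [l * e for l, e in zip(limiting_conductivities, viscosities)]
reference = products[0]

for eta, p in zip(viscosities, products):
    deviation = abs(p - reference) / reference
    status = "obeys" if deviation < 0.10 else "breaks down"
    print(f"eta={eta:4.1f}  Walden product={p:6.1f}  {status}")
```

With these made-up numbers the Walden product stays at 150 for the first three solutions and collapses at the most viscous one, mirroring the breakdown the results show at about 30% sucrose.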
Further work could examine other variables, such as the effect of temperature, the non-electrolyte used, other salts (e.g. HCl), and differences in conductance and Walden's product between weak and strong electrolytes.
That is, to treat every contradiction as a nullification and to see every exception as a parallel trend of equal importance is, in effect, to clutter the facts with sheer information. This is especially applicable given the non-uniformity and lack of depth that the Wars of Independence seemed to have in their causes, but it needs to be controlled, because independence could occur without the deep roots easily pointed to as causation. It is possible for small groups of dedicated people to radically change history, and that is what is underestimated in analyses of the Wars of Independence in the face of the slightly unsatisfactory 'heavyweight' factors of nationalism, colonial society and European influence. Williamson, E., Hamnett, Brian, 'Process and Pattern: A Re-Examination of the Ibero-American Independence Movements, 1808-1826', JLAS, vol. 29:2 (1997) p.279 A very credible, if Eurocentric, understanding can be reached of the causes of the Wars of Independence as having everything to do with events in Paris, Madrid, Cadiz and the rest of the Old World. This draws primarily on the unavoidable fact that before the Peninsular War and the crises of legitimacy it provoked, there was little serious prospect of an independent Spanish America. While Tupac Amaru's campaign was large-scale and threatening, it lacked a definitive objective and must therefore be classed as simply the most significant of the various rebellions under Spanish rule. With colonial authority split between the new crown of Joseph I and the Supreme Junta, the cabildos abiertos of the Americas found virtual independence thrust upon them, but still they took time before realising that Royalist reaction
So, a potential advantage of decentralisation is the establishment of a better relationship between the governors and the governed, as well as a reduction of the conflicts between the two parties (Miller, 2002). Despite the positive aspects arising from decentralisation, there are also possible risks and negative consequences that should be taken into account. Among such risks and negative consequences, one could make reference to greater inequality and greater poverty gaps (Miller, 2002). The devolution of central government to local governments, as has already been mentioned, favours the devolution of government resources and the allocation of some of them to regions/localities. This helps to reduce the gap between central government and local government in terms of resource allocation. However, even within a given region/locality there are substantial differences in natural resources and in how these are allocated among its citizens. In a decentralised system there is always the risk of 'resources and power being captured by local elites or special interest groups' (Miller, 2002). It is similar to the case where, in a centralised system, people at the centre concentrate all resources and use them for their own benefit. Indisputably, decentralisation is effective for ensuring the distribution of government resources from central government to regions/localities; however, safeguard mechanisms are required to prevent gaps between regions (Miller, 2002). It has already been mentioned that decentralisation, through people's participation in decision-making, ensures that local needs and interests are met. However, similarly to the case of inequitable sharing of resources between the centre and the regions in a centralised system, there is also a similar risk arising from local governments in a decentralised system. 
That is to say, even within a regional/local community governed by a local government system, the weaker and poorer sections of society may find their needs and interests unmet by these local levels of government. A good example of this is India, where 39% of rural households own only 5% of all assets while 5% of households own 46% of assets. As a result, it will take a long time until this gap is eliminated and poor groups of people are able to raise their voice (Meenakshisundaram, 1994). Undoubtedly, decentralisation helps toward the achievement of devolution of power from the centre to the local level. It can also ensure a more equitable resource distribution between the centre and the regions/localities; however, poverty gaps between groups within the same regions/localities will inevitably exist even under a decentralised form of government. Inevitably, corruption occurs both in a centralised and a decentralised system, because those who hold power tend to allocate resources in their own interest. Decentralisation is thought to be a more complex form of governance since it involves the distribution of responsibilities, power and authority among local levels of government.
Loring No-tillage systems, in comparison, were likely to result in a higher density of Collembola in total, but the difference in population between tilled and no-tilled systems seemed to decline over time, which may imply their high adaptability (Titi, 2003). Feeding patterns are important factors that determine the responses of invertebrate species to tillage (Wardle, 2002). Plant-parasitic Nematoda species were much more vulnerable to cultivation than other Nematoda species, probably through the direct effects of burial or the removal of plants, which destroys their host environment. However, the effects of tillage, though evident, were not consistent between different Nematoda species in many studies (Titi, 2003). Those investigations suggested several points about tillage effects on soil invertebrates: 1. an increase in management intensity raises the level of disturbance to the soil ecosystem; 2. modifications in species diversity and abundance would be induced not only by direct or indirect changes in the soil habitat but also in liaison with relationships between functional groups (Wardle, 2002; Titi, 2003). However, many findings agree that the selection of crop species is the most important regulator of the functioning of the soil food web, and that the degree of disturbance varies according to other farming managements, including pest management and organic matter application (Titi, 2003; Wardle, 2002; Adl, 2003). For this experiment, the amount and quality of data required accuracy and effectiveness for the likely effects of autumn tillage on soil invertebrate diversity and abundance to be supported by statistical analysis (hypothesis 1). There was no statistically significant outcome supporting a correlation between soil invertebrate diversity, the rate of decomposition, and the tillage treatment in autumn. This study probably had a major problem of bias in the experimental design in obtaining a sufficient response from the soil ecosystem; many possible improvements are suggested. 
There are increasing numbers of studies that could be consulted to estimate the likely effects of autumn tillage on agroecosystems. In general, although not consistent across cases, tillage operations seem to have some negative effects on the soil ecosystem in terms of disturbance, leading to modification of the community structure. The question is still not well answered and research is ongoing. In conclusion, this study highlighted the difficulty of investigating the functioning of the soil ecosystem under agricultural practice, owing to its complexity and its inconsistent responses to anthropogenic disturbance. Further research is expected to build on this outcome.
Probably the most common ground for such a judicial review is a pecuniary interest, as in the ' The case involved whether the dictator, Pinochet, could be extradited. A member of the judiciary, Lord Hoffmann, was involved in Amnesty International Charity Ltd. (AICL), an organization closely related to the accuser Amnesty International (AI), but Hoffmann did not make public his link to the organization. Dimes v. 759. R. v. 119. Other personal interests, however, need a more qualified suspicion of bias. Into this category fall family relationships, business connections and commercial ties, as well as membership of an interested organization, as we have just seen in the Enumeration found in Craig, p. 459. The House of Lords has held in ' This decision 'resolved the long-standing uncertainty' Bradley / Swing, Constitutional and Administrative Law, 13th edition, 2003, p.713. This new threshold set by the House of Lords does not require a real likelihood, which underscores the long-standing rule that 'Justice should not only be done, but should manifestly and undoubtedly be seen to be done'. Therefore 'the answer to the question [whether a judge was biased] depends not upon what actually was done but upon what might appear to be done.' The canon of judicial impartiality has been rescaled and secured. R. v. 259 The Employment Tribunal in the case is composed under the Employment Tribunals Act (ETA) 1996, s.4, with Fiona as the chairman and two lay members, Geoffrey and George. One of these lay members represents the employees, the other the employers, and both are appointed by the Secretary of State for Trade and Industry. The reason for this representation through lay members is 'to give balance and to bring to the decision-making process an element of industrial relations knowledge, which a legally qualified person might not have. 
It is important to know that they all", "label": 0 }, { "main_document": "fusing his theories on the importance of caring and the important fusion between present and past events. Becker claims that it is impossible to divorce history from life, because man can not do what he desires without recalling the past, nor does he have need to recall the past without present desire. The past therefore becomes part of our present and expands our special present as a result. This point therefore is crucial - \"living history - the ideal series of events that we affirm and hold in memory, since it is so intimately associated with what we are doing and hope to do, can not be precisely the same for all at any given time, or the same for one generation as for another.\" Becker is criticizing scientific forms of history precisely because they do not fit these inherent truths. History is an \"imaginative creation of personal possession,\" Carl Becker, Everyman his own Historian (The American Historical Review, 1932) p7-8 Ibid, p8 These theories are not just for Mr Everyman (as in non-professional historians) they are relatable and are written to be relatable to the practice of professional historical writing. True, historians must be concerned with truth and detail as often as they can, but as history rejects myth, is written afresh and itself later become myth, because present conditions demand it, so history is a story, and needs literary art, not just scientific methodology, to be maintained. Like Mr Everyman, historical writing must remember events, and have a present desire for them. The historian must affirm these events not just record them, for un-affirmed records are just facts that don't really exist. Affirmation of the facts, present interest in them, is what, according to Becker, makes history, and this is vital. 
Within these two articles, Becker has established why this relativist history is important; as a result, he is an important figure for historical writing because, although not alone in this endeavour, he has contributed greatly to the way in which history is presented and studied. We can also apply what he says to film and, in doing so, establish how a filmic writing of history is possible. Historical films are often dismissed by historians as inaccurate and therefore fairly useless as historical resources. However, if the film-maker were to apply Becker's theories of what history is, then a particular history depicted in a film, even one that does not factually depict what happened as far as a scientific history is concerned, is the film-maker's affirmation of known facts, presented the way they see them, and this is history. In addition, films themselves are history, especially to Mr Everyman, whose past and special present may include the viewing of a film. This film may not be 'accurate' to scientific historians, but it becomes part of the special present of those who watch it. If we combine these two factors, that historical film is the affirmation of the maker in his present, and that a viewed film becomes part of the history of those who viewed it, then the conclusion is that
It can also provide the designer with information about what has been visited, for how long and how often. It also offers the following advantages: To remove cookies from your hard drive, you can either delete them manually from the directory where they are stored, or you can do this from your browser's options or preferences (depending on the browser). With IE 5.5+, to remove them manually, open the Internet options (from the menu: Tools > Internet Options) and press the "Delete cookies" button. Finally, you could also use a program (e.g. Content Cleanup) that would do all this for you. Cookies might contain considerable personal information that you do not wish to share, could be used to track every move you make on the Internet, and could so become a threat to your "Internet privacy". Also, such information could be sold to advertisers or other companies, just as mailing lists are sold to mail-order catalogs. For these reasons, you may want to remove those cookies. EIDE (Enhanced Integrated Drive Electronics) is an improved version of the IDE interface, used to connect up to 4 drives, such as mass-storage hard drives or CD-ROMs, to a PC. The controller is integrated in the drive, and transfer rates are about three to four times faster than the old version. It is very cheap, and it is also sometimes called "Super IDE" or "ATA-2". SCSI (Small Computer System Interface) is used to attach a wide range of peripheral devices (up to 8, including the host controller) to computers and can provide very fast data transmission rates (up to 80 MB/s). There are varieties of SCSI (Fast SCSI, Ultra SCSI, Fast Wide SCSI, Ultra Wide SCSI). It is more often used by Apple Macintosh computers. SCSI devices can also be used on PCs with a SCSI board. The sources used were "The SCSI Bus and IDE Interface: Protocols, Applications, and Programming - Friedhelm Schmidt" (book) and internet references " EIDE is the standard interface for any modern computer. 
It is cheaper and, unlike SCSI, does not require an adapter card, as the interface is built into the motherboard. However, the SCSI bus has a faster data transfer speed and allows a larger number of devices to be connected. SCSI would be ideal, for instance, for a server that requires a lot of storage space. Using it, the server will be able to connect many more devices (hard disks, CD-ROMs, etc.). In a case like this,
This trend led to specialisation in production focused on labour-intensive manufacturing industries and also gave rise to the rapid growth of manufactured exports from a range of developing countries (Perraton et al 1997, 263-265). David Ricardo, who considered the profit of the fledgling bourgeoisie supreme, opens the chapter on foreign trade in his However, the dispersion of strategic positions for manufacturing resulting from globalisation in production inevitably forces a re-examination of the conventional factors which have affected investment and manufacturing for trade. That is, globalisation in production increasingly makes geographical elements and national borders obsolete. Comparative advantage, thus, is increasingly treated as a secondary factor in investment decisions under the specific circumstances of the globalisation of production. In particular, cutting-edge technological industries that need a more sophisticated production chain - i.e. research, device, development, design, parts purchasing, assembly, wrapping, innovation, marketing, customer service etc. - are increasingly decentralised in production; thus comparative advantage at the national level is relatively less considered in an investment decision. In addition, the diffusion of 'diversion roads' such as tax havens and transfer pricing, which enable firms to circumvent obstacles at national borders, also erodes the myth of comparative advantage. Moreover, some critics of the notion of free trade argue that traditional sovereign-state trading is becoming the exceptional pattern of trade due to increasing transnational manufacturing. They proclaim the end of the necessity of comparative advantage theory because transnational investment decisions are governed by absolute profitability rather than comparative advantage (Burchill 1996, 57). 
Globalisation in production, however, does not simply render comparative advantage obsolete. The sovereign state still remains a stable and powerful actor which has benefited or suffered from free trade. Comparative advantage as a rationale in deciding priorities in investment and trade is still relevant. In this sense, Burchill's critique, which regards comparative advantage as anachronistic, shows a misinterpretation either of the
So the Galois group of this polynomial is the Galois group of the splitting field extension This gives, Theorem 9.3 The Galois group of the general polynomial g of degree n over C is Definition 9.4 The discriminant of the polynomial p is The following follows, Theorem 9.5 The discriminant Proof: Let us have Then So we see that the discriminant is invariant under permutation. Let us have a transposition Under the transposition we have So we can see that Back in the Tschirnhaus transformation, we can calculate h from f and g in the following way. Ostensibly, the resultant of two polynomials is a good way to find whether they have roots in common without the tedious calculation of the roots themselves. Let These are general when the coefficients are algebraically independent. Theorem 9.6 Polynomials p and q share a common non-constant factor if and only if there exist nonzero polynomials Proof: Let In their splitting field over Since P and Q are of degree less than n and m we have P and Q changing a polynomial of degree less than m and P and Q have common factors. So then p and q must have common factors. Conversely, p and q have a common factor. So divide by the common factor and we can find polynomials of degree less than n and m such that The resultant of two polynomials Theorem 9.7 The polynomials p and q in We know that there exist P and Q of lower degree such that This is the same as saying that So If So the necessary
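The resultant criterion of Theorem 9.6 can be made concrete with a small computation. The following sketch (illustrative only, not part of the original derivation) builds the Sylvester matrix of two polynomials given as coefficient lists and evaluates its determinant, so that a shared non-constant factor appears as a vanishing resultant, with no root-finding needed; the example polynomials are invented.

```python
# Hedged sketch: the resultant as the determinant of the Sylvester matrix.
# A shared non-constant factor (Theorem 9.6) makes the resultant zero.
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of p and q, coefficients listed highest degree first."""
    n, m = len(p) - 1, len(q) - 1          # degrees of p and q
    rows = []
    for i in range(m):                      # m shifted copies of p
        rows.append([0] * i + p + [0] * (m - 1 - i))
    for i in range(n):                      # n shifted copies of q
        rows.append([0] * i + q + [0] * (n - 1 - i))
    return rows

def det(mat):
    """Determinant by exact Gaussian elimination over the rationals."""
    mat = [[Fraction(v) for v in row] for row in mat]
    size, sign, result = len(mat), 1, Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if mat[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            mat[col], mat[pivot] = mat[pivot], mat[col]
            sign = -sign
        result *= mat[col][col]
        for r in range(col + 1, size):
            factor = mat[r][col] / mat[col][col]
            mat[r] = [a - factor * b for a, b in zip(mat[r], mat[col])]
    return sign * result

def resultant(p, q):
    return det(sylvester(p, q))

# p = (x - 1)(x + 2) and q = (x - 1)(x - 3) share the factor (x - 1):
print(resultant([1, 1, -2], [1, -4, 3]))   # 0
# p and (x - 5) share no factor; here the resultant equals p(5) = 28:
print(resultant([1, 1, -2], [1, -5]))      # 28
```

The second value illustrates the standard identity that, for a monic linear q = x - a, the resultant reduces to p(a).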
The absorbance curve remains constant at the beginning, from 20 to 60 degrees centigrade ( From the data given, we can find that the maximum absorbance is 1.2498, and the minimum one is 0.9933. Thus, the estimated This value corresponds to Figure 2 is a plot of the derivative of the absorbance curve with respect to the temperature versus the temperature. The peak occurs at the temperature of approximately 66 Using the first 15 data points, we can find a straight line that approximates the temperature dependence of the unmelted DNA absorbance (see figure 3). The equation of this baseline turns out to be: The final absorbance on the melting curve plot is 1.2498. Let Then, is a function of temperature. We transform the centigrade temperature since the absolute temperature The plot of The midpoint of That is, Figure 5 is the plot of the derivative of From figure 5, we can see that the absolute temperature of the maximum of the derivative curve is Thus, the van't Hoff transition enthalpy of the transition can be determined by the following formula: where B' = - 4.38 cal K So, According to the previous steps, we have found three Thus, we take the mean of these three, and The entropy change of the transition can be determined by using So, the Gibbs free energy of the transition is determined by Figure 6 shows the linear relationship between the change of Gibbs free energy and the absolute temperature: From the figure, we can see that the Gibbs free energy would be zero if the temperature is taken at the value The absolute value of the Gibbs free energy decreases as the temperature increases from the beginning to From the absolute magnitude of the Gibbs free energy, we can see that it is high in the system of double-stranded DNA and a bit lower when all the double-stranded DNA has become single-stranded. 
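The melting-temperature analysis above (rescaling the absorbance between its baselines and locating both the half-melted point and the peak of the derivative curve) can be sketched numerically. This is a minimal illustration only: the sigmoidal data below is synthetic, generated with an assumed midpoint, since the report's actual data table is not reproduced here; only the baseline values 0.9933 and 1.2498 are taken from the text.

```python
import numpy as np

# Hypothetical absorbance data: a sigmoidal melting curve similar in shape
# to the one described (lower baseline ~0.9933, upper plateau ~1.2498).
T = np.linspace(20.0, 90.0, 141)   # temperature in degrees centigrade
Tm_assumed = 66.0                  # assumed midpoint used to fabricate the data
A = 0.9933 + (1.2498 - 0.9933) / (1.0 + np.exp(-(T - Tm_assumed) / 2.0))

# Fraction melted: rescale absorbance between its lower and upper extremes.
f = (A - A.min()) / (A.max() - A.min())

# Tm estimate 1: temperature at which half the DNA is melted (f = 0.5).
Tm_half = T[np.argmin(np.abs(f - 0.5))]

# Tm estimate 2: temperature at which dA/dT peaks (the derivative-curve method).
dAdT = np.gradient(A, T)
Tm_peak = T[np.argmax(dAdT)]

print(Tm_half, Tm_peak)
```

For a clean sigmoid the two estimates coincide; on real, noisy absorbance data the derivative would usually be smoothed before taking its maximum.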
Then the van't Hoff transition enthalpy of the transition is -402460 J/mol, which is the maximum amount of thermal energy derivable in a thermodynamic process of DNA melting when the pressure is held constant. The change of entropy is -1183.2 J K This is a measure of the amount of energy that cannot be used to do work in this double-stranded and single-stranded DNA system.", "label": 0 }, { "main_document": "for absolute certain that we are not dreaming? This is Descartes' second argument to give us reason to doubt. He reasons that we can't even know that the things we see even exist, as fantastical things exist in dreams. Descartes goes on to use the analogy of the painter who uses his experience of people and animals to create creatures such as sirens and satyrs, so at least the basic components of what we believe to be real must exist. However, Descartes doesn't want to exclude the possibility that something entirely new could be created; however, more basic ideas like the colours the painter uses cannot be created. At least the more basic components of the objects we perceive as real must, then, exist. Descartes places in this group \"corporeal nature... extension; the shape of extended things; the quantity... the place in which they may exist, the time through which they may endure,\" Descartes It is from this point that Descartes brings in the \"malicious demon\" argument in order to lead the reader to have further cause for doubt. The evil deceiver is all-powerful and so has created me with faulty perceiving and reasoning faculties, so that everything I believe to be true is, in reality, not the case. This example leads on from Descartes putting aside his belief in God, as Descartes acknowledges there are some who doubt the existence of God and they must feel they have grounds for doing so. An idea such as the evil deceiver, however improbable, is still entirely possible, and so retains presence as a cause for doubt. 
For if there really were an evil deceiver I would be none the wiser: the world as it appears to me now could all be the result of an elaborate deception. Fundamentally this argument leads us to doubt our own cognitive nature, our ability to reason, without which even simple mathematical equations could be false. So Descartes concludes at the end of the first meditation; Descartes Many contemporaries of Descartes, notably Gassendi, questioned the point of his hyperbolic doubt in the first meditation when he simply goes on to reinstate all his rejected beliefs later in the meditations. Descartes replied to this query in the If we have a basket of apples and suspect some may be rotten, there is no course of action to take but to pour all the apples out of the basket and re-examine each one individually before re-admitting it to the basket. For if we simply look in the basket we will only ever be able to see one side of the apple. As such we must reject all our beliefs before we accept any as true. Another criticism is a problem with being sceptical in general; for the true sceptic may say 'I know that I know nothing', but then what does this person know? It would appear they know that they know nothing, in which case they do, in fact, know something (that they know nothing). The sceptical position appears to be a paradox and as such a", "label": 1 }, { "main_document": "whilst Guarani is the language used in less formal situations such as telling a joke, or general gossip between friends. A general statement we can make from this data is that it seems Spanish is the language preferred in more formal domains, whilst Guarani is the language preferred in more informal domains. Holmes writes of four scales that relate to how various domains are viewed. These are - the Below, I will briefly outline each scale and explain how it relates to the relationship between Spanish and Guarani in Paraguay. 
This scale can be illustrated as follows - Holmes (2001) writes that this scale is Examples of this scale in action are decisions such as whether to call someone by their first name or full title - e.g. 'James' or 'Mr Smith'. In the situation of Spanish and Guarani, this scale would suggest that Guarani is the intimate/high solidarity language while Spanish is the distant/low solidarity language. This is because, if two Paraguayans were to meet in a neighbouring Spanish-speaking country (hence one where Guarani is not an L1 or even an L2), using Guarani would be a way of expressing the intimacy and solidarity of their Paraguayan identity. This scale can be illustrated as follows - Holmes (2001) writes that this scale The basis of this scale is the idea that when we are speaking to someone whom we acknowledge to be of a higher social status than us we will change our speech accordingly. An example of this is that in business, it is acceptable for someone in a higher position of authority to address someone in a lower position of authority by their first name, whereas that person in a lower position of authority would not be able to address their superior in this way and would have to use their full title. In Paraguay, as Spanish is the language of administration and business, it would be reasonable to assume that people would address someone in a position of authority in Spanish. However, if the person with the higher status wanted to change domains to one where status was not so important, they could do so by switching to Guarani. This scale can be illustrated as follows - Holmes (2001) writes that this scale is This scale suggests that language use is influenced by the formality of the situation. Indeed, as I mentioned earlier in this essay, the language used in a formal setting like a job interview will be very different from the language used in the informal setting of a casual drink with friends. 
Taking the example of Paraguay, we know that Spanish is the language preferred in domains of high formality, as it is frequently used in domains such as education, worship and administration, while Guarani is used in domains of low formality such as gossiping and joke telling. These scales can be illustrated as follows - Affective Referential These scales are concerned with the feelings of the speaker (affective) and the content of information within the speaker's utterance (referential). One example of this is", "label": 1 }, { "main_document": "To separate the components of a simulated pharmaceutical preparation. Most commercial preparations are mixtures of many different substances. To obtain a pure organic compound from such a mixture, one must separate the wanted compounds from other components by using the differences in physical and chemical properties. Organic materials tend to have very different solubilities in differing organic solvents and can often be separated by filtration/extraction. Organic compounds with functional groups such as amino and carboxylic acid can be converted to their water-soluble salts, which can then be separated from insoluble components of a mixture; these salts can then be converted back to an organic-soluble material and recovered. Precautions to be taken: appropriate lab wear to be worn, including lab coats, goggles and gloves, and hair to be tied back. All work is to be carried out in a fume hood. To begin, the sample of pharmaceutical preparation - sample A (4g) - was placed in a conical flask (100ml) with ethyl acetate (50ml) and swirled thoroughly; a semi-cloudy solution was formed. The insoluble material was subsequently filtered at the pump, left to dry, weighed (2.93g) and its melting point taken. The filtrate was transferred to a separatory funnel and 2M sodium hydroxide (25ml) was added. The funnel was stoppered and shaken, frequently opening to release any pressure. 
The two layers were then allowed to separate and the aqueous layer was run off. This process was then repeated with a further portion of 2M sodium hydroxide (25ml) and the two aqueous layers combined. Aqueous 6M hydrochloric acid (20ml) was added slowly, whilst shaking, to the combined aqueous layers. The solution warmed and a white precipitate was formed. The pH of the solution was taken with indicator paper (pH 1) to check the pH was below 2. The mixture was then cooled on ice and the precipitate collected by vacuum filtration. The precipitate was washed with distilled water, and the solid left to dry under suction for 10 minutes. The weight of the crude sample was taken (1.4g) and it was subsequently recrystallised from ethanol. The purified product was weighed (0.93g) and a melting point and IR spectrum taken. Magnesium sulphate was added to the remaining ethyl acetate layer and swirled, creating a \"snowstorm\". This was then filtered at the pump into a pre-weighed round-bottomed flask (100ml), washing with ethyl acetate. Following this, the solvent was evaporated on the rotary evaporator. The mass of the crude solid was taken (0.8g) and it was then recrystallised from an ethanol/water mix (ethanol added under heat until the product dissolved, water added until the solution went cloudy, ethanol added to clear the solution). Finally, the weight of the purified product was taken (0.77g) and a melting point and IR spectrum recorded. From these spectra we can check the identity of our final products. Both spectra clearly show the carbonyl group (C=O). In addition, the spectrum of aspirin shows peaks representative of O-H (3020cm Acetanilide also shows a peak at 3296cm These spectra clearly support the presence of our product. The sodium salt is much more soluble in water than in the organic solvent. 
Hence the compound", "label": 1 }, { "main_document": "this is merely \"a blip in the dollar's decline, with the downtrend seen during 2002-2004 likely to be resumed in 2006.\" The strengthening of the Dollar is also reiterated in the Renold Interim Results, which state that the \"recent strengthening of the US dollar against the Euro will, if maintained, provide benefit in the second half year.\" It is stated within the Chief Executive's Review that it is \"proposed to establish a wholly owned manufacturing facility in China...to support a number of the Company's product line\"(C), which \"will not only provide cost reduction but, more importantly, will provide better access to markets and customers in the Far East.\" This indicates that future prospects for the company in this area of the world will be very good, due to a larger presence of the company. (A)-Chairman's Statement (B)-Operations Review (C)-Chief Executive's Review future? It can be seen that the Annual Report does provide useful information about the future of the company, due to the fact that the information it gives is supported by outside sources to a high extent. In the next financial year, Renold will be required to \"prepare its consolidated financial statements in accordance with International Financial Reporting Standards ('IFRS')\", due to changes to International Accounting Standards. This will affect the firm to a high extent in the next annual report. It is anticipated, according to the 'Update on IFRS' released by Renold, that the following will be the most significant changes, amongst others. 
* Under IFRS 1, there will be \"a material increase in the value of property in the Group's balance sheet.\" Also \"the basis on which freehold properties are depreciated will be revised from a reducing balance basis to a straight line basis which is considered to be a basis more in line with general practice.\" * Under IAS 19, there will be \"classification changes in the balance sheet.\" * Under the IFRS 3 standard, goodwill will now be \"carried at cost and subject to an annual impairment review\". * Also, the group \"made an acquisition that gave rise to negative goodwill\" in March 2005, which has \"been reassessed under IFRS but there will remain a significant level of goodwill.\" It can therefore be seen that some of the changes after adopting IFRS will be fairly large, but shouldn't have too much influence on the overall accounts. It can be concluded that the Annual Report is very reliable to an ordinary shareholder due to the fact that it applies relevant financial standards and practices in the creation of the document. Also, the report seems to be fairly truthful in its outlook for the future position of the company, although it can be seen in hindsight that some of its predictions, such as the plateauing of steel prices, were educated, but slightly wishful.", "label": 1 }, { "main_document": "A company strategy is a specific step which enables a company to accomplish a required goal. Making strategy involves a continuous process of research and decision-making. Knowledge of yourself and your company is a vital starting point in setting objectives. A manufacturing simulation exercise \"Aerials\" was very useful in understanding the significance and application of tools used in manufacturing industries for planning and control. 
At the end of the game the total final cash left with our group \"Falcon\" was Since we took over the firm in the thirteenth week and were told that sales are through a chain of distributors, we, at the beginning of the game, set ourselves two objectives through mutual planning: In order to achieve these objectives we made the following strategy. We tried to keep our production costs low by utilizing capacity and using different shifts when necessary, by precise forecasting and efficient inventory management to ensure on-time delivery. The ways we used logistics and operations management tools to help us achieve our objectives are described below. Forecasting is predicting or estimating beforehand. It plays a very important role in capacity planning and inventory management and also provides valuable input to other functions of the organisation. Forecasting is based primarily on two main methods. Selecting the forecasting technique was a difficult task for our company (Falcon). With brief discussion and thorough understanding of the market and the data available for forecasting, it was decided that the \"Moving Averages\" forecasting technique is useful and performs better for our company. We used this technique because it is the simplest way of smoothing the past data that is used for forecasting. The most recent data is the most relevant in forecasting short-term demand because it reveals the latest trends better than data several years old. It is necessary to analyse the data after removing trends and seasonality from the set of data and incorporating them afterwards. Average sales for the first 12 weeks this year were 5416 units compared with last year's 4670, which is approximately 15% more. This helped us establish this year's weekly aggregate demand at 6250 units or 25000 in a month. Based on this approach, our strategy was to forecast for four weeks and accordingly place orders for the raw materials so that we never ran out of our raw material stock. 
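The moving-average approach described above can be sketched in a few lines. This is an illustrative sketch only: the weekly demand figures below are hypothetical, not the actual game data, and the four-week window simply mirrors the four-week forecasting horizon mentioned in the text.

```python
# A minimal sketch of a simple moving-average forecast: the forecast for the
# next period is the mean of the most recent `window` observations.
def moving_average_forecast(history, window=4):
    """Forecast next period's demand from the last `window` periods."""
    if len(history) < window:
        raise ValueError("need at least `window` observations")
    return sum(history[-window:]) / window

weekly_demand = [5200, 5400, 5600, 5800, 6000, 6250]  # hypothetical weekly sales
forecast = moving_average_forecast(weekly_demand, window=4)
print(forecast)  # mean of the last four weeks
```

A longer window smooths out noise but reacts more slowly to trend; because a plain moving average lags a rising trend, the text's point about removing trend and seasonality first and adding them back afterwards is what makes the method usable here.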
We were reactive and chasing demand as it occurred, in 14 We were concerned with the cash flow, so that, in an attempt to order large amounts of raw material, we would not run out of cash and go bankrupt. In the first level, after gaining some profits, we transferred money from the current account to the deposit account and earned some positive interest. In 16 The reason for this was that we thought we had enough finished goods inventory to supply the next week. By this chase strategy we paid huge fixed costs which could have been avoided by utilizing a shift. Before starting this level we carried out a break-even analysis, which came to 3000 units every week, so that we would not again miss the opportunity of utilizing capacity and pay the fixed costs. We", "label": 0 }, { "main_document": "The works of Jane Austen and Johann Wolfgang von Goethe seem, on first reading, to be so profoundly different, that if one were to compare them, one would find mainly differences, and not similarities. Indeed, existing criticism always qualifies any statements made to liken them. One critic has even moved from remarking on a resemblance, albeit relatively slight, in saying that, 'Jane Austen's novels could, indeed, be called educational novels though they bear little resemblance to the 'Bildungsromane' of Goethe' (Klieneberger 33), to reminding the reader that, 'It is significant that Jane Austen whose work marks the transition from the eighteenth-century novel of manners to the social realism of the nineteenth-century, should have started her career [....] by satirizing What can we make of this? Certainly, there are some major differences not only between the novels themselves, but also between their conception. 
Whereas 'Jane Austen no more drew her houses from life than she did her characters' (Nicolson 11), contemporary readers of Likewise, although we know that Goethe did not kill himself as a result of this unrequited love, Werther's death was clearly inspired by the suicide of Karl Wilhelm Jerusalem in Wetzlar, the town in which Buff lived. Like Werther, Jerusalem borrowed a pistol to kill himself, and the weapon was given to him by Goethe's model for Albert, Christian Kestner, who married Buff (Swales 13). Likewise, his suicide was a consequence of his unrequited love, in his case for a woman named Elisabeth Herd (Hulse 9). Despite these apparent differences in conception, there is still much to compare and contrast in the novels, and with these aims in mind, I now turn to discussing love in Since one critic has noticed the potential of Austen's novels, like Goethe's, to be 'Bildungsromane', it would seem that this is a good point at which to begin a comparison of the depiction of love in the novels. What exactly is a 'Bildungsroman'? 'A Dictionary of Literary Terms' describes it as 'a novel which is an account of the youthful development of the hero or heroine' (Cuddon 78), and indeed lists Certainly, the development of both Emma and Werther is unequivocally concerned with love. One of the great ironies of Goethe's novel is that he has his hero imply, at the beginning, that one of his reasons for going travelling was to escape a girl named Leonore's unrequited love for him, when, instead of finding freedom in Wetzlar, he becomes fettered by the chains of unrequited love himself. So if we cannot see any positive connotations in Werther's development in the novel as a whole, because of his unrequited love for Lotte, how, then, can we compare the development of this unrequited love with the development of the mutual love between Emma and Mr Knightley? 
One aspect to analyse in the novels is the predominance of dancing and its significance for the two main romantic relationships. Austen herself proves the importance of dancing in Georgian society by suggesting, with supreme irony, its unimportance. In Austen's world, balls were inevitably associated with courtship, just as the", "label": 1 }, { "main_document": "system isn't structured correctly or is unfair to employees they will soon resent it and the will to work hard will reduce. Additionally, the incentive scheme could become a hygiene factor. Without it being in place the will to work hard will fall. (Fincham, R. & Rhodes, P. 1999: 254) Team Technology < Although money is one of the basic needs for any person, it can also be argued that to improve one's self and to become more of a rounded person is equally important, if not more so. Some companies believe that this desire to improve one's self can be harnessed to increase the willingness to work hard. Motorola realised this potential early on: \"Concerned about the low comprehension level of its work teams, the company began a skill-based compensation program that rewarded them for improving their math and reading skills.\" (Flannery, T., Hofritcher, D. & Platten, P. 1996: 85) Detailed by Fincham and Rhodes (1999), Maslow wrote that self-actualization is the need to realise one's full potential Self-actualization isn't simply a tool to make one's self more employable. \"Self-actualizing people enjoy life in general and in practically all its aspects\" Self-actualization is rooted deeper in the psyche of an employee. The need for self-actualization connects the world of work to the life world, as a person feels that to enjoy their life more wholly one needs to obtain certain \"Need Levels\". 
(Fincham, R. & Rhodes, P. 1999: 132) (Maslow, A. H. 1982: 31) The scheme introduced by Motorola can increase the will to work hard, but it could also lead to a higher employee turnover: \"Resentment soon built among team members who had to pick up the slack when their fellow members went off for six months of training at full pay\" Running parallel to this would be employees whose will to work hard also increased, but only because they had to take on an extra workload. A possible drop in morale would be detrimental to both the company and the employees. The only way people will work hard towards the incentives of gaining extra skills would be if the system was correctly managed and structured, or if they have a say in what incentives they would prefer (Flannery, T., Hofritcher, D. & Platten, P. 1996: 85) (Ulrich, D., Zenger, J. & Smallwood, N. 1999) Benefiting from the success of a company will obviously increase the will to work hard among any employees. If an employee holds a share in the business they are taking a risk, as with any shares in business, but the crucial difference is that they have a role in how well the company does: \"They've benefited handsomely from the soaring stock.\" The case of Genentech can be seen as supporting the scheme of shareholding. During this survey it was also voted the best of the Fortune 100 Best Companies to Work For in 2006. CNN Money < Of course this scheme does have the possibility of increasing the will to work hard", "label": 1 }, { "main_document": "the real Maria had had incestuous affairs with first Corder's brother, Thomas, then a married man, Peter Matthews, before William, and moreover, that she had given birth to three illegitimate children, is erased from the narrative. The theme of the 'fallen woman' did not become popular amongst audiences until later melodramas. The audience are only presented with Maria's anxiety to legalise her union with Corder. 
Although it is true that the real Maria had a sister, Ann Martin, the subplot of Ann's relationship with Timothy Bobbin was created purely as a form of comic relief running parallel to the central narrative. Their comic roles, together with the gypsy-type character Johnny Raw, were popular in melodrama and were therefore added to satisfy audience expectations. They bear no relation to the murder narrative, Many of the elements of A few to include would be the thrill of the unknown, great mystery and darkness, and stereotypical settings, secluded and ominous in order to inspire terror. According to the confession of the real William Corder, he shot Maria on his arrival at the barn before he buried her. Horrific as this must have been, melodrama made it even more so. After Maria's exclamation, 'Oh William, William, to thee I trust for future happiness!' Corder enters and a sinister atmosphere builds throughout their dialogue, in which the true characterisation of the villain and the victim are allowed full glory. This precedes a violent struggle, in which Corder attempts to stab her twice before succeeding. Although the character traits and settings in the play are typical of the Gothic genre, a clear shift in audience taste and social climate is demonstrated in That is, a moving away from the sublime in the narrative, replacing it with real-life events to place the audiences' familiar ground on the stage, as was the desire in the nineteenth century. Both the audiences of the play and the thousands of people eager to acquire a souvenir from the actual crime scene were quick to admit that they really only cared about the entertainment provided by the incident. Justice was not considered of much importance: 'Every one of them was anxious to carry away something memorial...pieces of the barn-door, tiles from the roof, and, above all, the clothes of the poor victim, were eagerly sought after.'
Mackay, Charles, Although Rahill, Frank, Leopold Lewis' The reason for its overwhelming success when it opened at the Lyceum Theatre in 1871 was an audience seeking something new. In answer to this, the play breaks free from the confines of the conventional melodrama. At first glance, the synopsis of the play, with the character of the Burgomaster Mathias of such central importance, appears to reflect the mass audiences' desire to be presented with a really fine villainous character, Rahill, Frank, Henry Irving's portrayal of Mathias, however, illustrates a more complex reflection of the contemporary social climate during the Victorian era. I will focus firstly on the portrayal of the Burgomaster. Henry Irving's philosophy on acting was similar to the re-enactment of the", "label": 1 }, { "main_document": "assigned the variable alpha_m to column one of the lateral force data and assigned Fy_m to column two of the lateral force data. The same process was carried out to assign variables s_m and Fx_m to column one and column two of the longitudinal force data respectively. The variable names were chosen to correspond to the data forces and angles from figure 1. This data was then plotted on labelled graphs with black circles as points using the following commands. The command 'ko' corresponds to the black circles, and plot, xlabel and ylabel correspond to the plotting and labelling of the graph. The graph was printed out and can be viewed in appendix 1 under graph 1. The longitudinal data was also plotted and labelled using the standard commands as shown above. The graph for this data can be viewed in appendix 1 under graph 2. The pattern produced from the lateral force data was linear up to the slip angle of 2.5 degrees. After this point the pattern was non-linear. The model for the linear pattern was 3. 
where The coefficients The method of least squares can be applied to the lateral force data to produce To achieve this, the following command was typed The command polyfit executes the method of least squares; the 1 at the end of the command produces a linear model. The new values of The graphs were also labelled using standard commands. The command alpha = 0:6 assigns the values 0 to 6 in increments of 1 to the variable alpha. This allows ranges of data to be loaded quickly into the MATLAB workspace. The graph was edited to see the result of changing a few variables. The variable With the variables changed, the lines were plotted on the same graph so that they could be compared. The command allowed plots to be added to a current plot. Typing changed this setting back to default. The graph can be viewed in appendix 1 under graph 3. The model for the non-linear pattern was 4. the calculation of The values for This produced a line of best fit that was non-linear. The graph can be viewed in appendix 1 under graph 4. The pattern produced for the longitudinal force data was linear up to a slip ratio of about 0.03. After this point the pattern was non-linear. The model for the linear pattern was 5. where The coefficients The method of least squares can be applied to the longitudinal force data to produce To achieve this, the following command was typed The new values of The graphs were also labelled using standard commands. The command s = 0:0.01:0.05 assigns the values 0 to 0.05 in increments of 0.01 to the variable s. The graph was then edited to see the result of changing a few variables. The variable This was done using the command in 3.4. The graph can be viewed in appendix 1 under graph 5. The model for the non-linear pattern was 6. the calculation of The values for This produced", "label": 1 }, { "main_document": "for example divorce or childbirth? 
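The least-squares fit of the linear tyre-force region described above can be sketched outside MATLAB as well; NumPy's polyfit takes the same (x, y, degree) arguments, with the final 1 again requesting a linear model. The slip-angle and lateral-force values below are hypothetical stand-ins, since the report's actual data lives in its appendix and is not reproduced here.

```python
import numpy as np

# Hypothetical slip-angle / lateral-force points in the linear region
# (illustrative values only, roughly linear up to 2.5 degrees).
alpha_m = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])            # slip angle, degrees
Fy_m = np.array([0.0, 410.0, 830.0, 1190.0, 1620.0, 2010.0])  # lateral force, N

# Least-squares straight line Fy = c[0]*alpha + c[1]; the trailing 1
# requests a degree-1 (linear) model, as in MATLAB's polyfit.
c = np.polyfit(alpha_m, Fy_m, 1)

# Evaluate the fitted line over the plotting range alpha = 0:6 (values 0..6).
alpha = np.arange(0, 7)
Fy_fit = np.polyval(c, alpha)
print(c)
```

Note that `0:6` in MATLAB, like `np.arange(0, 7)` here, yields the seven values 0 through 6 inclusive.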
The collusion with medicalised and 'personal tragedy' conceptions of disability has furthermore meant that the notion of biographical disruption has come under attack from disability theorists. Indeed, the word 'disruption' is loaded with negative connotations, and the presumption that chronic illness is a negative phenomenon, which shatters lives and forces people to gather up the pieces left of their selves and identities in an attempt to deal with their 'altered' (presumably worse) status, leaves us with a very negative portrayal of the experience of chronic illness and disability. Whilst Bury's consideration of 'style' has dealt with the way in which chronic illness can be transformed from a negative to a positive experience through the utilisation of various coping strategies and lifestyles, so that the individual can shift from an image of 'the disabled self' to one of the 'capable self' (Corbin and Strauss, 1991, cited in Bury, 1997), the underlying assumption is that disability and chronic illness are It is this negative assumption about the lives and experiences of people with disabilities which many disability activists and theorists have sought to redress. Indeed, many studies have uncovered an Williams (2003) also throws critical light onto the issue of pain, reminding us that this is not Thus, the potential positive elements of chronic illness and disability need to be worked into the concept of biographical disruption in order to address the wide variety of ways in which they are experienced and made meaningful. The question of the sequencing of biographical disruption in relation to chronic illness also opens up new avenues of sociological enquiry. 
Williams (2003), using the example of illness narratives, points to the possibility of biographical disruption ' As the example of 'Gill' in Williams (1984) reminds us, biographical disruption, in Gill's case the death of her son and husband together with the departure of her daughter, can, in the face of chronic illness, be re-worked as attributing factors. Williams (2003) points out that such narratives provide another dimension to biographical disruption, moving beyond the consequences and significance of chronic illness, to a consideration of the Thus, temporal and contextual considerations become crucial to the notion of biographical disruption; the point in the lifecourse at which the illness occurs, together with the context of the individual's life and circumstances, has perhaps called for a revision of the concept of biographical disruption (Williams, 2003). Although the concept gives voice to a range of responses to, and experiences of, chronic illness, which is of fundamental importance given their often private nature, there are still serious omissions, and perhaps a greater consideration of context and life circumstances will allow for the exploration of other ways of 'doing' and experiencing chronic illness in contemporary society, other than as purely a disruption to individual biography.", "label": 1 }, { "main_document": "The FlashMaster II (see figure 1) by Biotage (originally developed by Argonaut) has been successfully used to purify numerous compounds in organic synthesis The flash column chromatography system can automatically purify more compounds in less time compared with a manual approach. This method of purification has become common practice in industry, Thomas B. Poulsen, Mark Bell and Karl Anker J Biomol. Chem. 
, 4, 63 - 70, (2006) Argonaut, Synthesis & Purification catalog, Michael Ye, Craig Aurand, Dan Vitkuske, Shaoyin Wang, Brenda Nye, Becky Caproni, Michael Singer, Purification of synthetic products using flash chromatography. The FlashMaster II system uses a column chromatography separation technique which exploits the difference in partitioning behaviour between a mobile phase (an eluent) and a stationary phase (generally silica gel). The mixture components may interact with the stationary phase based on charge, relative solubility or adsorption. The retention time of a compound is a measure of the speed at which a substance moves in a chromatographic system. In an interrupted system, like TLC (thin layer chromatography), retention is measured as the retention factor, Rf. Developments in preparative scale chromatography columns and accessories. Column chromatography utilises a column filled with a solid support, usually silica gel, with the sample to be separated loaded on top of the support (see figure 3). The rest of the column is filled with a solvent or solvent mix which, under the influence of gravity, moves the sample through the column. The various components of the loaded sample will move through the column at differing rates and so will have different exit times at the bottom of the column. The flash column technique is very similar to this but involves the use of positive pressure to drive the solvent through the column, allowing quicker separation times. Modern flash chromatography systems like the FlashMaster II have pre-packed plastic cartridges filled with silica, and the solvent is pumped through the cartridge. Systems are linked with detectors and fraction collectors, providing automation. The FlashMaster II also introduced a gradient pump, which results in quicker separations and less solvent usage. The system automates gradient solvent mixing, flow control, peak detection and peak fraction collection.
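As a small illustrative sketch (not part of the original text's apparatus), the retention factor mentioned above is simply the ratio of the distance travelled by the compound to the distance travelled by the solvent front on the TLC plate:

```python
def retention_factor(distance_compound_cm, distance_solvent_front_cm):
    """Rf = distance travelled by the compound / distance travelled
    by the solvent front. Rf is dimensionless and lies between 0
    (no migration) and 1 (the compound moves with the solvent front)."""
    if distance_solvent_front_cm <= 0:
        raise ValueError("solvent front distance must be positive")
    rf = distance_compound_cm / distance_solvent_front_cm
    if not 0.0 <= rf <= 1.0:
        raise ValueError("compound cannot travel further than the solvent front")
    return rf

# Example: a spot at 3.0 cm with the solvent front at 6.0 cm
print(retention_factor(3.0, 6.0))  # 0.5
```

The distances in the example are hypothetical; in practice both are measured from the origin line of the plate.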
The pre-packed columns come in a variety of sizes, i.e. the amount of silica packed in the column, and so allow purification of samples in the range of 4 mg-5 g. The size of the column and the amount of crude material loaded depend partly on how easy or difficult the separation is between components. The more difficult the separation, the larger the column required and the smaller the quantity of crude mixture loaded. A general guide has been drawn up by Argonaut to help choose the size of the column required for a particular separation. It sets out loading capacities for each column size and states that the sample loaded on to a column should be 5-10% of the column bed mass. Ten columns can be loaded on to the machine and it has the capacity for four solvent lines. This allows for numerous purifications of different compounds with different eluting systems and gradients. The detection system is
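The 5-10% loading guideline above implies a simple calculation for the silica bed mass needed for a given crude sample; the sketch below is an illustration of that arithmetic only, not part of the Argonaut guide itself:

```python
def silica_bed_mass_range(sample_mass_g, low_frac=0.05, high_frac=0.10):
    """Return the (min, max) silica bed mass in grams for a crude sample,
    using the guideline that the sample should be 5-10% of the column
    bed mass, i.e. bed mass = sample mass / loading fraction."""
    if sample_mass_g <= 0:
        raise ValueError("sample mass must be positive")
    return sample_mass_g / high_frac, sample_mass_g / low_frac

# For 0.5 g of crude mixture, a 5-10 g silica bed is indicated
low, high = silica_bed_mass_range(0.5)
print(low, high)  # 5.0 10.0
```

For a difficult separation one would work towards the 5% end of the range (the larger column), as the text notes.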
Kelliher & Johnson (1997) confirm the increased attention towards HR practices. Companies now seek employee commitment, employee integration in business strategy, incentives to retain employees and a change from a bureaucratic approach to a human relations one. However, it is unclear whether these improvements represent a move towards HR practices (Worsfold, 1999) or a response to other changes (Kelliher & Johnson, 1997). Furthermore, given the importance of employees in determining service quality, hospitality & tourism managers should lead HR practices, rather than follow other industries (Kelliher & Johnson, 1997). Price (1994) claims that an improvement in personnel practices breaks the "vicious circle" mentioned above, encouraging quality and productivity whilst seizing opportunities offered by economic growth. Additionally, "good" recruitment & selection practices fight off competition from international organizations which have integrated TQM in their HRM strategy (e.g. Marriott). Ultimately, improved personnel practices lead to competitive advantage and increased market share, thus originating a "virtuous" (Price, 2004, p. 60) cycle based on quality and profitability. McGunnigle & Jameson (2000) add that appropriate recruitment & selection encourage employee commitment and develop a strong organisational culture. They identified in their study that although hotel managers were willing to implement effective recruitment & selection methods, their practices were still far from an HRM structure. Therefore, McGunnigle & Jameson (2000) suggest new recruitment & selection methods, like psychometric tests, personality questionnaires and assessment centres (Redman & Mathews, 1998), to be used continuously alongside traditional "good practice" to develop high commitment. By integrating these methods in a "bundle" (Redman & Wilkinson, 2001, p. 15), managers can better predict future performance and attain "employee fit" (McGunnigle & Jameson, 2000).
Bundles also include employee retention practices, training, high performance and quality rewards, high commitment incentives, job security and employee integration in the business strategy (Redman & Mathews, 1998). However, Redman & Wilkinson (2001) claim bundles vary widely and no "best" bundle has yet been identified. Redman & Wilkinson (2001) believe that although "best practice" seems like the perfect solution, companies find it hard to implement. Lockyer & Scholarius (2004) disagree with the "best practice" approach, claiming it disregards the characteristics of the hospitality & tourism industry and becomes challenging because of diverse property "size, location and
They may lose out with increasing FDI in the agri-food chain because, firstly, the fixed transaction cost component in the costs of exchanges between farms and retailers makes it more costly for retailers to deal with a large number of small farmers than with a few bigger suppliers. Secondly, small farmers tend to be more constrained in their means for making necessary investments, because they lack sufficient resources of their own or cannot access external funds in imperfect rural financial markets. Hence, small farmers may find it hard to increase their productivity, come up to the standards and sell to supermarkets. Studies from emerging markets in Latin America suggest that these pressures may be real and important (Dries et al., 2004). Because of the low level of organisation of growers, the weak marketing infrastructure and the increased significance of hypermarket chains, imports of fresh fruit and vegetables are increasing, while the limitations on increasing exports of fruit to the EU-15 persist in the short term due to insufficient competitiveness, product range, etc. (Duponcel, 2004). Against the background of Poland being a leading fruit producer in the EU with a potential competitive advantage in exports of frozen products, canned products and fruit juices (Duponcel, 2004), and the high levels of FDI in the food and agri-processing industry (Csaki et al., 2004), it seems interesting to find out how and to what extent FDI in the agri-food chain could help to increase the competitiveness of Polish fruit growers and bring about structural change. It is assumed that understanding variations in technical efficiency provides a basis for predicting structural change (Gorton and Davidova, 2004). In general, the term vertical integration refers to bringing together two or more successive stages of the vertical production and distribution chain under common ownership and management (Gow and Swinnen, 1998).
FDI-induced vertical integration refers to private institutional innovations that help overcome problems of exchange systems and contract enforcement mechanisms which have broken down in many CEECs (Dries and Swinnen, 2004). Generally, private investment by foreign processing and retail companies, which have the know-how and finance to upgrade capital technology and management (Konings, 2001), has proved
And roughly speaking, we can identify two significant reasons why quite a few marriages do not work and why, therefore, Catullus could safely say his love for Lesbia is justifiable. Firstly, marriage is not purely about love; it is also an important social institution. As Catullus points out in his poem LXI, marriage is essential in that it provides children whom 'aged parents can lean on' and who will be 'defending the borders' of their land. In other words, marriage is a basis for both the family and the country to be firmly established and to prosper. In the poem LXII, again, Catullus lets the chorus of young women, who argue that marriage takes away all the charms of a girl by cutting the flower of her virginity, be defeated by the chorus of young men singing that keeping virginity is meaningless in that it neither matures nor 'brings forth a ripe grape'. Ironically, though, because marriage is necessary for social stability, there are many examples of marriage out of necessity or convenience rather than as a result of love. Although Catullus does not deal with a direct example of marital failure of this sort, we can still read from the poem LXVI that Catullus highly respects marriage as the ultimate love relationship and dismisses the notion of marriage as purely social necessity. Queen Berenice had to assassinate her first husband Demetrius, with whom her mother had fallen in love and whom her mother had forced her to marry, in order to get her true beloved King Ptolemy Euergetes as her husband. And Catullus calls this deed 'brave', which put an end both to her mother's abnormal sexual relations with her son-in-law and to her own loveless marriage, and he expresses no negative assessment of her murder of her ex-husband. Godwin, p.187 and Ellis, p.
299 Secondly, Catullus shows us that 'fanatical devotion and despairing devotion' drive us into temporary madness and lead us to make hasty decisions which we may regret once we come to our senses. The two most obvious examples can be found in the poems LXIV and LXIII: the story of Ariadne, who is deserted by her 'husband' Theseus, and of Attis, who had an 'irreversible marriage' to Cybele, respectively. Ferguson, p. 34 Ariadne, in the poem LXIV, fell in love with the handsome stranger at first sight, provided him with a sword to kill the Minotaur and a thread to guide him out of the labyrinth, left behind all her family members who loved her so much to follow her passionate love for Theseus, and eventually ended up being abandoned, just like Queen Dido or Medea. Attis, on the other hand, completely fascinated by the Mother Goddess Cybele, had emasculated himself to become
The issuer is responsible for the cardholder's debt payment. The merchant typically pays a processing fee for each transaction processed, also known as the discount rate. A merchant applies for an Internet Merchant Account in a process similar to applying for a commercial loan. The fees charged by the acquiring bank usually vary. The service is usually operated by a third party such as Verisign. The processor is connected to the merchant's site on behalf of an acquiring bank via the Payment Gateway. Payment processing can be divided into two major phases. During the authorisation phase, verification is made that the credit card is active and the customer has sufficient credit available to make the purchase. Authorisation steps: The customer decides to make a purchase on a merchant's Web site, proceeds to checkout and inputs credit card information. The merchant's Web site receives the customer information and sends the transaction information to the Payment Gateway. The Payment Gateway routes the information to the Processor. The Processor sends the information to the Issuing Bank of the customer's credit card. The Processor routes the transaction result to the Payment Gateway. The Payment Gateway passes the result information to the Merchant. The Merchant accepts or rejects the transaction and ships goods if necessary. The second phase is the settlement phase, during which money is transferred from the customer's account to the merchant's account. Settlement steps: The Merchant requests the Payment Gateway to settle a transaction. The Payment Gateway sends all transactions to be settled to the Processor. The Processor sends settlement payment details to the customer's credit card Issuing Bank; at the same time the Processor sends payment details to the merchant's Acquiring Bank. The Issuing Bank includes the Merchant's charge on the Customer's credit card statement while the Acquiring Bank credits the Merchant's account. Figure 4 shows various methods that have been adopted in order to reduce fraud.
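The two-phase flow described above can be sketched in miniature as follows. All class and method names here are illustrative assumptions for exposition, not a real payment gateway API; the money movement is reduced to a single credit balance:

```python
# Minimal sketch of the authorise-then-settle card flow (hypothetical API).

class IssuingBank:
    def __init__(self, available_credit):
        self.available_credit = available_credit

    def authorise(self, amount):
        # Verify the card is "active" and sufficient credit is available
        return amount <= self.available_credit

    def settle(self, amount):
        # The charge appears on the cardholder's statement
        self.available_credit -= amount


class PaymentGateway:
    """Routes merchant requests to the processor/issuer (collapsed here)."""
    def __init__(self, issuer):
        self.issuer = issuer

    def authorise(self, amount):
        return self.issuer.authorise(amount)

    def settle(self, amount):
        self.issuer.settle(amount)
        return amount  # the acquiring bank credits the merchant


class Merchant:
    def __init__(self, gateway):
        self.gateway = gateway
        self.account_balance = 0.0

    def checkout(self, amount):
        # Phase 1: authorisation
        if not self.gateway.authorise(amount):
            return "rejected"
        # Phase 2: settlement (money moves customer -> merchant)
        self.account_balance += self.gateway.settle(amount)
        return "accepted"


merchant = Merchant(PaymentGateway(IssuingBank(available_credit=100.0)))
print(merchant.checkout(60.0))   # accepted
print(merchant.checkout(60.0))   # rejected (only 40.0 credit remains)
print(merchant.account_balance)  # 60.0
```

In a real deployment the processor and the two banks are separate parties, and settlement is batched rather than immediate, as the text's settlement steps indicate.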
The facts suggest that the methods currently in use are not sufficient to bring down the fraud rate, though they are helpful in detecting fraud. The new Internet Payment Processing systems have the following new actors: An application server is a software
Paradoxically, the presence of this instrument in the world trade system was legitimized at a very early stage of the GATT negotiations. At that time, the rules of international exchange in agricultural goods were developed to fit the highly protectionist domestic rural policies, and not the other way round. It was not until the Uruguay Round that the era of international regulation of food and farm policies began. The spectacular agreement coined in Hong Kong in December 2005 puts an end to those struggles. Supposedly, it also spells the end of one of the most trade-distorting policies in the world. With the deal on 'modalities' still awaited, it is difficult to prejudge the precise effect of the negotiations. Nevertheless, as can be seen from the simple example of the European dairy policy, even the mere elimination of subsidies would have a domino effect on all the other instruments of the old-style CAP with which it is logically linked. It would help to finally erase the shameful and troublesome 'mountains' and 'lakes' from the landscape of the European dairy sector. With that in mind, it is difficult to overestimate the contribution of the WTO talks to policy change in Europe.
He also says that social mobility within classes is very restricted, regardless of any changes we may come upon. Bourdieu (1979) implies that, from the moment our socialisation process begins, we are assigned to a social class according to our symbolic and economic capital. This legacy is pre-defined, and social mobility plans can only alter it to a certain limit. Although people may want to change their lifestyles and adopt the way things are done socially (defined by the bourgeoisie!), their social mobility is restricted because of their origins and background; which means that, as much as people may want to, they cannot redefine themselves, because their cultural taste is dependent on their social class position. Warde (1997), however, suggests that social classes no longer reflect characteristics of status like morals, honesty, trustworthiness, friendship or entertainment. Bauman (1988) proposes that there are no rules in consumption, giving people the option of making their own choices. It becomes a doorway to freedom. People are no longer tied down to boundaries which restrict their choices, but have the opportunity and "duty" to create their own self-identity. The habitus is no longer an issue and social class is unimportant. This introduces the theory of individualization, which may be considered as a tendency originating from the continuous erosion of the importance of social class and the breakdown of the family, causing people to choose their own identity and express it through consumer behaviour (Beck, 1992). Warde (1997) proposes the regeneration of identities, the new approach to food (people being more health conscious), the differences emerging in gender relations (with women having more freedom and responsibilities, apart from the household!) and the increasing differences between generations and social groups as some plausible causes of these hypotheses of change.
If consumer behaviour becomes the main distinction among classes, then the lower classes become more affected (due to fewer economic possibilities), and this is a much more worrying concern than the previous differences stated by Bourdieu (Bauman, 1990). Featherstone (1991) sees postmodernism not as a consequence of absent discipline, but as a more intensely incorporated rule. Therefore, there may be systems of confusion, enabling a balance between the two extremes, which was previously considered intimidating. Bauman (1988) views postmodernism as an exact reflection of producers' know-how: producers see their position and image threatened as a consequence of the decrease in need for their products, obliging them to be more consumer driven, and to produce in accordance with the customers' needs and not their own. According to Featherstone (1990), postmodernist consumers are encouraged to have more than one lifestyle. Due to the freedom in consumer behaviour, people may very well express their style detached from any preconceived idea, therefore expressing more than one lifestyle. Their identity is reflected through most material aspects of their lives and is to be continuously improved. Gouldner (1979) proposes that "the new middle class" is trying to raise the importance of symbolic and cultural capital in opposition to economic capital. Consumer behaviour is the way to communicate with the world. Warde (1997) argues that aestheticization is becoming more part of
Similarly, Tad Szulc dedicates a section of the opening paragraph of his book "Fidel: A Critical Portrait" to the oratorical talents of the Cuban revolutionary leader. Despite King and Castro's vastly different racial, ideological and social environments, perhaps the greatest and most similar trait they possess is the ability to speak to and captivate an audience. Given the nature of their respective careers, this fact is hardly surprising. Marxist writer C.L.R. James was quoted as saying "Fidel Castro is [as] a speaker.....one in a hundred years." A chosen career as a revolutionary leader meant that the ability to address a nation confidently and convincingly was always a necessity. Once Fidel had assumed power, revolutionary change would sweep the country. The socialist ideology adopted in 1961 would necessitate vast social and economic change. To convince urban workers to move to rural regions when the economy elicited a need for social mobilization, and yet convince them on moral and not material grounds, would require outstanding levels of charisma, a charisma obtained by fully captivating an audience through inspiring and engaging oratory. The very same levels of charisma would be required when informing a depleted, tired workforce that they would have to ration to an even greater extent, because a revolutionary economy such as this was bound to experience a time lag before the increased investments and use of labour began to reap belated rewards. Szulc writes that 'Castro is fascinated by the art of public speech......creating his own fiery yet chatty style. It is unlikely that there is another communist ruler in the world....who delights in dissecting classical oratory, or is capable of it.'
Leonard Tim Hector, CLR James and the 21st Century, (1999) Tad Szulc, Fidel Castro: A Critical Portrait, (US & GB, 1989) p27 Martin Luther King's close friend throughout his civil rights career, Bayard Rustin, helped tutor King in Gandhian non-violence from the outset of 1956, and this valued teaching, in combination with King's own learned knowledge, served to further King's belief that only non-violent action could benefit the civil rights cause. Attaining the support of not just willing blacks, but moderate whites, during direct-action campaigns in the South would prove difficult enough, but King would have to convince every single man and woman taking part that they must remain non-violent. Attitudes towards violence will be discussed below, but needless to say, it would take a skilled orator to persuade those passionate men and women that such action would prove beneficial. By the end of 1961, King's arrival to preach a sermon to a black congregation in Albany was so well anticipated that every
Furthermore, it seems plausible to argue that the absence of an institutional arrangement which commits the CB to strictly follow its policy announcements lies at the basis of the TCP. In such a case the CB will behave with discretion. Under these circumstances or, in terms of KP, when the CB applies optimal control theory to determine its decisions in each period, the TCP arises and the final equilibrium will be suboptimal. An important issue in understanding the final equilibrium is the timing of events. The sequence of decisions is assumed as follows: first the public determines its expected inflation, and only then does the CB decide. This characteristic gives the CB the possibility of determining inflation after expectations are set. The CB minimises (1) considering the supply function stated in (2) and the relation between money supply growth and inflation addressed in (4) on a period-by-period basis as follows: (5) This minimisation process, after rearranging terms, gives the following first order condition: (6) Since the public sets the expected inflation according to the rational expectation hypothesis, i.e. (7), replacing (7) in (6) yields the current inflation level under discretion: (8) Equation (6) represents the reaction function of the CB, and an equilibrium is only consistent when expectations are fulfilled. Figure 2 panel (a) shows indifference contours of the loss function stated in (1) and their respective tangency with the short-run Phillips curves (for given values of expected inflation). It is worth noting that outcomes farther away from the bliss point entail higher losses. It is now worthwhile to ask why a TCP may arise in such a situation. To answer this question it is useful to focus on Figure 2 panel (a). When the public sets low expected inflation, the CB has an incentive to inflate; since the public knows that the CB will behave in such a way (i.e., they have rational expectations), they will never set expectations the CB would find it optimal to exploit. Furthermore, Figure 2 panel (b) emphasises the rational expectation assumption introduced previously.
The reaction function of the CB is obtained from (6). It is clear that only where the reaction function meets the rational-expectations condition is the equilibrium consistent. The final outcome is the discretionary equilibrium; or, put another way, this is the Nash equilibrium of the monetary game between the government and the public. It is clear that this situation is strictly dominated by the low-inflation commitment outcome. Nevertheless, the TCP arises in this case once the CB announces a policy it is not bound to follow. So,
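The discretionary equilibrium described above can be reconstructed in the standard Barro-Gordon form; the notation below is assumed, since the original symbols did not survive extraction, but each step mirrors the text's equations (1), (2) and (6)-(8):

```latex
% Standard Barro-Gordon sketch (assumed notation).
% CB loss function (cf. (1)) and Lucas supply curve (cf. (2)):
\[
  L = \tfrac{1}{2}(\pi - \pi^{*})^{2} + \tfrac{\lambda}{2}(y - y^{*})^{2},
  \qquad
  y = \bar{y} + b\,(\pi - \pi^{e}), \quad y^{*} > \bar{y}.
\]
% Under discretion the CB minimises L over \pi taking \pi^{e} as given
% (cf. (5)); the first order condition (cf. (6)) is
\[
  (\pi - \pi^{*}) + \lambda b\bigl(\bar{y} + b(\pi - \pi^{e}) - y^{*}\bigr) = 0 .
\]
% Imposing rational expectations \pi^{e} = \pi (cf. (7)) yields the
% discretionary (Nash) inflation rate (cf. (8)):
\[
  \pi^{\mathrm{disc}} = \pi^{*} + \lambda b\,(y^{*} - \bar{y}) > \pi^{*}.
\]
```

The last line exhibits the inflation bias: inflation exceeds the target with no output gain, which is precisely why the discretionary Nash equilibrium is suboptimal relative to commitment.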
On the other hand, a possible benefit is that Brazil's debt burden, which is denominated in dollars, will be lessened.", "label": 0 }, { "main_document": "low (Magnusson This might be attributed to the fact that even though younger people may be more concerned about the environment, they do not have the purchasing power. Instead of young people themselves purchasing organic products, their impact may be seen as 'pester power', trying to convince their parents to buy organic food (Davies In another study, carried out in Croatia, it was also found that women buy organic food more often than men. Apart from the gender characteristic, education was also an important factor, with people of a higher education level showing greater organic purchase activity. Finally, people who live in urban areas purchase organic food more often. No other socio-economic or demographic characteristic had any apparent effect (Radman, 2005). Similar results have been observed across the literature in terms of the profile of organic consumers. Von Alvensleben and Altmann (1987) found that people from urban areas buy more organic food than those from rural areas. Moreover, the main organic purchasers belong to the age group of 25-35 years, and there is a positive correlation between demand and income. An Italian study based on organic olive oil showed that the organic consumer in Italy is 40 years old and has a four-member family (Cicia, Del Giudice and Scarpa, 2002). Tregear (1994) reported people with higher disposable income as the main socio-economic group of organic consumers. Among Swedish consumers, women tend to have more positive attitudes towards organic food (Magnusson According to several studies, the presence of children is an important part of organic consumers' profile. Thompson and Kidwell (1998) found that households with children under the age of 18 years are more likely to purchase organic products. 
However, the presence of children should be examined in relation to their age (Fotopoulos and Krystallis, 2002). As has already been mentioned, there are various other motives driving consumers' demand for organic food apart from health and environmental considerations. However, the motives behind organic demand vary between different organic products as well as between different nations. For example, when it comes to organic meat and dairy products the ethical motive of animal welfare is predominant. In the case of fruits and vegetables, on the other hand, where the skin is the edible part, the basic motive behind organic purchases is the absence of chemical residues, which is more health and safety related (Padel and Foster, 2005). Regarding the nation effect, Worner and Meier-Ploeger (1999) found that 'support to organic farmers' was the motivation behind German consumers' organic purchases, while for British consumers Meier-Ploeger and Woodward (1999) found 'animal welfare' to be the motivation (Fotopoulos and Krystallis, 2002). Variations also exist between different socio-economic groups with respect to the organic products purchased. For example, in the Croatian study by Radman (2005), it was found that consumers with children under 14 years of age are more frequent purchasers of organic milk and organic dairy products than those without children in their family. Certainly, some organic products are of greater interest to some groups of consumers", "label": 0 }, { "main_document": "this piece in his collection of Wordsworth freshly interprets the old ballad tradition by retaining the simplicity of the ballad but incorporating a greater depth and complexity in the lyric form that is commonly used to convey an emotion or state of mind. Wordsworth ironically accompanies the grave subject matter of this poem with a light-hearted tone embroidered in the ballad form. 
The poet adopts many conventions of this form including a four-line stanza with a simple abab rhyme scheme and heavy use of dialogue. Wordsworth uses iambs in tetrameter followed by trimeter and repeats this metre pattern to complete the stanza: At the end of the iambic trimeter, a natural pause is formed before continuing to the second tetrameter that assists the formation of a sing-song, childlike rhythm of a nursery rhyme. Wordsworth adds to the buoyant, cheerful mood of this poem with internal rhyme on the words, 'green' and 'seen' in this particular stanza. The flow of the ballad is interrupted in the final verse where Wordsworth extends the stanza into five lines and modifies the rhyme scheme to abccb. This verse marks a change of speaker to the exasperated adult following a long period of dialogue from the young girl: The adult introduces and bluntly repeats 'dead', exposing a great contrast between these two voices following the child's spirited euphemisms that avoid the grim reality of this description. Wordsworth's use of punctuation emphasises the adult's frustration that the child insists on referring to life and death as a unified existence. Wordsworth suggests a level of sophistication and complexity in the mind of the adult that does not pervade the mind of the child by using enjambment for the first time in this poem: By allowing the child the last word, Wordsworth encourages the reader to allow the child's way of thinking to triumph over the adult's. Wordsworth challenges the reader's perceptions of death in this poem by offering two opposing reactions to this process. The poet asks if the child's cheerful perception of death is simply attached to the incomprehension of her age or if it is an inspirational and heart-warming attitude that even adults could adopt. 
Wordsworth questions if an adult is better prepared for the emotional distress of this experience or if years just develop a bleaker and more sorrowful reaction to loss that can never return to a child's beautifully simplistic methods of dealing with adversity.", "label": 1 }, { "main_document": "as it was known, and the introduction of a basic income as a means to ensure equality and freedom for all. Another liberal feminist, Friedan, acknowledged that, despite legal reforms, women experienced frustration from being confined to the domestic sphere because of social expectations and stereotypes. This 'second wave' of feminism posed new challenges to politics that extended deep into the social realm, and propelled politics to become involved with social reform enabling women to realise their potential. As Heywood writes, it was "not merely political emancipation but 'women's liberation'" Perhaps the greatest challenge of liberal feminism to politics is the idea that 'the personal is political' and the constant deconstruction of the public/private divide which supposedly ensnares women into domestic spheres where they are unable to realise their full potential. Therefore feminism urges politics to take up the cause of women's rights as they stand in social conception and 'behind closed doors' by reforming the identity of women, or rather enabling women to form their own identity, as opposed to one forced upon them by men. Wollstonecraft, M. (1792) Bartleby.com. Chapter II. Taken from Heywood, A. (2003) However, liberal feminists have been accused of acting solely in the interests of middle-class, white women with some degree of education to recognise their own rights, and in this way only touching the surface of discrimination which is widespread and continues to afflict the poorer, less educated women typically of different origins and living in developing countries. 
These accusations have largely been levelled by socialist feminists, who seek to challenge politics with a more revolutionary outlook. For socialist feminists, the central theme to emancipating and liberating women is found in the abolition of private property, which they see as the fundamental tool used in discriminating against women. It considers, like liberalism, how the exploitation of women permeates all spheres of life, emphasising the private sphere. It also reiterates that gender is used within society to exploit women and sees gender as a source of a great deal of discrimination. Socialist feminism recognises that gender has arisen because of man-made conceptions of the differences between the sexes pertaining to their roles in society, and totally apart from their biological differences. Gender inspires the exploitation of women; however, for socialist feminists, this is only one branch of the exploitative power wielded by the elite over the majority. The exploitation of women is seen as part of the ongoing class conflict and would, seemingly, be solved with the introduction of socialist government. For many, the demands of socialist feminism on politics do not go far enough to recognise how entrenched exploitation against women is within society. Primarily these feminists are radical and their challenge to politics centres around the idea of patriarchy. For radical feminists the term 'patriarchy' describes the 'power relationship' where men dominate over women in a paternalistic and controlling way. They describe what they call 'sexual politics' that are inherent in contemporary politics as the established discrimination within all spheres of life and see patriarchy as the common enemy which", "label": 1 }, { "main_document": "To make up an approximately 0.1M potassium hydrogenphthalate solution accurately and use it to standardise the approximately 0.1M solution of sodium hydroxide provided in the laboratory. 
Through doing this I will: When carrying out a titration, a solution of unknown strength is titrated against a standard solution that has a known concentration. The standard solution, e.g. sodium hydroxide, is usually obtained from the laboratory and its concentration (molarity) is given on a label. To make this solution, solid sodium hydroxide pellets have been weighed out and dissolved in a known volume of water. However, the pellets are very hygroscopic (water absorbing) and this process occurs during weighing, so that the exact amount of NaOH used is unknown. The final solution is also frequently contaminated with carbon dioxide, which reacts to form sodium carbonate. Sodium hydroxide is therefore not a primary standard. A primary standard is a stable, non-hygroscopic, pure solid material which can be weighed out accurately and dissolved in water to give a solution of accurately known concentration. A useful primary standard acid is potassium hydrogenphthalate, a monoprotic acid derived from the diprotic phthalic acid. It has a fairly high molar mass (reducing weighing error) but a low solubility in water (<0.5 M at 25 °C). Potassium hydrogenphthalate: The following reaction occurs when sodium hydroxide is added to potassium hydrogenphthalate: Accurately weigh out between 1.8 and 2.2 g of solid potassium hydrogenphthalate into a tared weighing boat. Transfer the solid to a 100 cm³ volumetric flask. Dissolve the solid in about 80 cm³ of distilled water. When the solute has dissolved completely (this may require shaking of the flask), make up to the mark with distilled water. Transfer some of the 0.1M sodium hydroxide solution provided in the lab to a 50 cm³ burette. Pipette 25 cm³ of the potassium hydrogenphthalate solution into a conical flask. Add two drops of phenolphthalein indicator to the flask (this changes from colourless to pink at a pH of 9) and, with a white tile under the flask, carry out one rough and two accurate titrations. Swirl the flask regularly during the titrations and place a piece of paper behind the burette to accurately read the meniscus. 
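As a check on the 1:1 calculation this procedure leads to, here is a minimal sketch; the molar mass of potassium hydrogenphthalate (204.22 g/mol) is a standard value, while the variable names and the sample titre are illustrative only.

```python
# Sketch of the 1:1 titration calculation (KHP + NaOH -> products).
# Molar mass of potassium hydrogenphthalate, KHC8H4O4, in g/mol (standard value):
M_KHP = 204.22

mass_khp = 1.8320          # g, dissolved in a 100 cm3 volumetric flask
moles_khp_total = mass_khp / M_KHP
moles_in_25cm3 = moles_khp_total * 25.0 / 100.0   # moles in each pipetted aliquot

titre_naoh = 25.2750       # cm3 of NaOH used at the end point (illustrative)
# 1 mol KHP reacts with 1 mol NaOH, so the NaOH concentration in mol/dm3 is:
conc_naoh = moles_in_25cm3 / (titre_naoh / 1000.0)

print(round(moles_in_25cm3, 7), round(conc_naoh, 4))
```

With these illustrative inputs the aliquot holds about 2.24 × 10⁻³ mol of acid, giving an NaOH concentration just under the nominal 0.1 M.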
Mass of potassium hydrogenphthalate weighed: 1.8320 g. One mole of potassium hydrogenphthalate reacts with one mole of sodium hydroxide. Therefore 2.2427 × 10⁻³ moles of potassium hydrogenphthalate were contained in each 25 cm³ aliquot. Average of the two accurate titrations = 25.2750 cm³. So 25.2750 cm³ of sodium hydroxide solution contained 2.2427 × 10⁻³ moles of NaOH. Concentration of sodium hydroxide solution: 2.2427 × 10⁻³ mol / 0.0252750 dm³ = 0.0887 M. Weighing uncertainty: Volumetric flask uncertainty: Pipette filling uncertainty: Burette reading uncertainties: End point detection uncertainty: B-grade glassware uncertainties: 0.2% × 3 (volumetric flask, pipette, burette) = 0.6%. A titration of a solution of H It requires 26.43 cm³ We want to find the concentration of the H One mole of H This was contained in 25.00 cm³", "label": 1 }, { "main_document": "clearer, with less of a range as the artistic ability level rose, although even at the highest level there are discrepancies between JRC numbers 2-4 and 4-6. For profile 3, the distribution shifts from being centred vaguely more around 12-14 to being around 18-20 as spatial abilities increase, although this is by no means exclusively the case. Profile 4 follows a normal distribution independent of artistic abilities, centred about 8-10 and 10-12. Sample A has a varied distribution that shifts from being vaguely centred on JRC number 6-8 to being centred around 10-12 as spatial abilities increase. This JRC of 10-12 corresponds with the values of its two-dimensional match, profile 4, as shown in fig. 5. Sample B follows a normal distribution independent of spatial abilities, although JRC 6-8 becomes much more prominent as they increase. This is similar to its matching profile (1), where higher spatial abilities respond by singling out the 6-8 profile much more resoundingly. Sample C is very varied throughout with no clear distribution except on the highest artistic ability level, where 4-6 is the most common response, which corresponds to some extent with the responses for profile 2. 
Sample D is largely centred around JRC 18-20, although at lower levels of artistic ability responses of 12-14 are more common; these decrease in frequency as the level rises, until at level 4/5 almost all answers point to 18-20. There is no general trend as to whether the mean increases or decreases with spatial abilities, as it can be seen that as the artistic level rises some means increase and some decrease. However, where there is an increase or decrease it seems to be a smooth progression, generally occurring gradually as the level rises. The standard deviation, as might be expected, seems to decrease as the artistic ability level rises, indicating a general decrease in the variation in responses. However, the standard deviation of artistic level 1 for some profiles is at a similar level to that of level 4/5. The majority of responses indicate that profile 1 corresponds with the standard profile for 6-8, and has a similar distribution type to that of the artistic level 4/5 group, where 6-8 is by far the most popular answer. Profile 2 again to some extent mirrors the responses of the higher-level artists, where 2-4 and 4-6 are the clear choices, although there is a very slight normal distribution around 10-12. For profile 3, the responses are almost equally shared between 12-14, 16-18 and 18-20. Profile 4 shows a strong normal distribution based around 10-12, which is again similar to the normal distribution shown not only at artistic level 4/5 but also at level 1. Sample A shows two poorly defined normal distributions around 6-8 and 10-12. This ties in loosely with the clear normal distribution highlighting 10-12 for the corresponding profile, profile 4. This distribution is not as clear as that shown by the higher-level artists for this sample. 
Samples B and C are again comparable to those of the artistic", "label": 1 }, { "main_document": "little methodological concern due to the fact that being a woman worked in Taylor's favour: "The fact that I am a woman...made me more easily accepted and gave me more freedom to explore aspects of the women's lives which a man would have found difficult." This is supported by the detailed descriptions of the women as mothers; the role of partners and husbands in their lives and even issues of violence and rape which may not have been discovered by a male observer. However, it could also be argued that a man, by developing a long-term and close relationship with the women, could have achieved similar results. Secondly, problems of language were minimised by the fact that Taylor is a Glaswegian working in a Glaswegian community. However, as James Patrick also found in his study of a Glasgow gang, dialect and slang can differ enormously and must therefore be learned through observation: "Born and bred in Glasgow, I thought myself another serious mistake as it turned out." Taylor therefore successfully adapted her own accent and dialect in order to be more readily accepted by the group. She employed an effective snowballing technique in order to increase the number of women with whom she 'participated'. Taylor's access-gaining methods, while to a certain extent out of her control, proved to be successful in terms of appropriateness to the aims of her study. Taylor, A (1993), Patrick, J (1973), Studying criminals can often result in ethical problems for a researcher. Taylor swore confidentiality to the people she interviewed and spent time with. This was essential, as distrust on the part of the drug users would have resulted in an unreliable study. Apart from two incidents which Taylor was unaware of at the time Taylor, A (1993), Whyte, W. F. (1993), The task of balancing acceptance with observational detachment is a very difficult one. 
Clearly, Taylor was not captured by any member or section of the community. However, it could be argued that she was to a certain extent emotionally captured. Taylor uses Gold's four classifications of participant observation. Yet as Whyte rightly points out: "Most teaching resources on participant observation fail to note that the researcher, like his informants, is a social animal". Before even conducting her study, she sought to disprove some of the derogatory stereotypes of women drug users, including their inability to be 'good' mothers. Through spending fifteen months with fifty different women, eight of whom became "key informants", these feelings of sympathy and even respect and fondness clearly strengthened. "In our society, the most fulfilling role for women is still regarded as that of motherhood...only the most articulate and confident of women are able successfully to challenge this interpretation. Other less fortunate, including the women of this study, labour under feelings of inadequacy, and hence of guilt." Whilst it is very difficult to say if, and to what extent, these feelings could have affected the ethnography, possible areas of influence could have been in the 'agenda-setting' process of the study. Whether aware of it or not, the desire to", "label": 1 }, { "main_document": "The 'Drude' and 'Sommerfeld' free electron models were evaluated using solid-state simulations. The simulations were validated against the theories and subsequently used to probe the accuracy of the models with regard to fundamental laws. The Drude model was found to obey Ohm's law, and both were shown to model the Hall effect accurately for most metals but not for those with positive mobile charge carriers. This discrepancy was again shown in the Drude model by investigating cyclotron resonance, which was only allowed for left circularly or elliptically polarised fields in the model, when in fact certain metals can exhibit resonance in the opposite direction. 
It was shown that average drift velocity remains the same in a constant E Sommerfeld and Drude both failed to account for the magnetoresistive properties of metals in this instance. The Drude and Sommerfeld free electron models are two of the earliest of their kind, having been devised shortly after the discovery of the electron in 1897. Whilst both models do not fully describe free electrons in metals, they are suitably accurate and useful in certain scenarios where a more accurate theory might only add unnecessary complications. In 1900 P. Drude attempted to describe the motion of electrons in metal, just three years after J. J. Thomson discovered the electron. Drude's classical theory of electrical conduction assumes that a metal is composed of stationary positive ions and a gas of valence electrons. The motion of the electrons in the metal was assumed to be governed by Boltzmann statistics. Thermal equilibrium is maintained by the scattering of the electrons (no assumption is made about the origin of the scattering). This probability of an electron scattering is taken to be The temperature is governed by the magnitude of the average velocity of the electrons. with The Drude equation for the average electron velocity in applied Supposing that the accelerating force acts on average for a time tau to produce a steady drift velocity, the Drude model gives the magnitude of the Hall angle as where [2] The Drude simulation gives a two-dimensional display of the positions of N electrons (up to 255) on the left-hand side (real space) and their corresponding velocities on the right-hand side (reciprocal space). The electrons are shown as five dots displaying their positions over the last five time steps, giving them Some of the electrons are coloured for ease of observation. The average position and velocity of the electrons is shown with a large green dot, with a large red dot at the origin. 
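The Drude relations referred to above can be checked numerically. This is a minimal sketch using the textbook forms v_d = eEτ/m, σ = ne²τ/m and tan(θ_H) = ω_c τ = eBτ/m, since the essay's own equations were lost in extraction; the copper-like values of n and τ are illustrative assumptions.

```python
import math

# Textbook Drude-model relations (a sketch; standard symbols, not the
# essay's lost notation). Physical constants in SI units:
E_CHARGE = 1.602e-19   # elementary charge, C
M_E      = 9.109e-31   # electron mass, kg

def drift_speed(E, tau):
    """Steady-state drift speed |v_d| = e*E*tau/m for a field E in V/m."""
    return E_CHARGE * E * tau / M_E

def conductivity(n, tau):
    """DC conductivity sigma = n*e^2*tau/m for carrier density n in m^-3.
    Linearity of v_d (and hence current) in E is Ohm's law."""
    return n * E_CHARGE**2 * tau / M_E

def hall_angle(B, tau):
    """Hall angle theta_H, with tan(theta_H) = omega_c*tau = e*B*tau/m."""
    return math.atan(E_CHARGE * B * tau / M_E)

# Copper-like illustrative numbers: n ~ 8.5e28 m^-3, tau ~ 2.5e-14 s.
sigma = conductivity(8.5e28, 2.5e-14)   # ~6e7 S/m, the right order for Cu
theta = hall_angle(1.0, 2.5e-14)        # omega_c*tau << 1 at 1 T, so tiny
print(sigma, theta)
```

The smallness of ω_c τ at laboratory fields is why the Hall angle in ordinary metals is only milliradians, which the simulation's reciprocal-space display makes visible as a slight tilt of the average velocity.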
To investigate the behaviour of a classical gas of electrons, the user has controls for the time-varying x and y components of the electric field, the z component of the magnetic field, the temperature and the scattering time. The individual magnitudes of these variables can be altered by using the 'sliders'. With the development of quantum mechanical theories throughout the early 20th century By 1928, A. J. Sommerfeld had finalised a quantum free electron theory by looking more closely at the collisions between electrons and the lattice ions, and also incorporating the Pauli Exclusion Principle As", "label": 1 }, { "main_document": "India's economy has grown dramatically since 1991 (see Appendix 2). Its GDP grew by 9% in 2005 (World Development Indicators database, 2006), which in turn contributed positively to the development of India's tourism (World Tourism Organization, 2004). The significant tourism growth is largely attributed to its economic reform, which for instance reduced government control on foreign trade and foreign direct investment (World Bank, 2006). Thanks to the strong connection between the UK and India, moreover, UK companies are reckoned to be well positioned to make the most of this growing export and investment market. As mentioned, the earlier PESTE analysis (refer to Appendix 2) gave a general depiction of the business environment in India, and Porter's five forces competitive analysis (refer to Appendix 5) in the previous section illustrated the recent competition in the serviced apartment market in India. Additionally, the weighted Porter's five forces analysis (as Appendix 5-1) shows the factors related to market attractiveness (Porter, 1985) and hence offers evidence that India is a market worth foreign investment in the hospitality industry. Before entering a foreign market, a business company needs to take the choice and importance of the market entry mode into account (Tallman and Shenkar, 1994). 
In that case, the business company needs to choose the most suitable mode, one in which the business can satisfy the conflicting pressures of the foreign and domestic operating environments (Paliwoda, 1986) and secure the relocation of its resources and facilities from the home country to the host country (Erramili, 2002). There are numerous market entry strategies depending on the level of control; however, for soft services firms, where production and consumption cannot be decoupled, such as health care, hotels and tourism (Blomstermo Suppliers of soft services require a higher-control entry mode, as Palmer and Cole (1995) stressed. For a soft services supplier attempting to enter a culturally distant country, it is also recommended by Blomstermo and Sharma (2006) to choose a high-control entry mode (such as FDI or ownership) from the perspective of industrial experience, with the intention of building up unique competence, on-site research, adaptation to the needs of the foreign buyers and markets, and customer relationships (Blomstermo and Sharma, 2006; Hastings and Perry, 2000). On the other hand, in the real industry world, Jones Lang LaSalle Hotels - the global hotel investment services group - argued that local existing residential property developers are better equipped and more familiar with serviced apartment products than hotel groups seeking opportunities to enter the serviced apartment sector (Hotel News Resources, 2002). That is to say, cooperating with local property development companies appears to be more practical and beneficial to a foreign entrant in the long term. Hence the more appropriate method of entry strategy for The Hockney Management Co. 
will be a joint equity venture, in consideration of India's favourable investment climate (refer to Appendix 2 - PESTE analysis), the estimated low cost and risk of entry, the applicable knowledge of serviced apartment sectors in India (Hotel News Resources, 2002), and also its thriving domestic competition (refer to Porter's five forces analysis). Since The Hockney", "label": 0 }, { "main_document": "the way of thinking of the scholars. The arguments for the change from timber to stone (which I will analyze in more detail in due course) clearly bear the traces of traditional archaeology and the New Archaeology. We should also consider the different cultural backgrounds of the researchers and the regional variations and different cultures, not only within Britain but also across the whole of Europe. Ian Hodder claims that there are as many interpretations as there are archaeologists and that archaeology can never be fully objective, so that subjectivity and a modern approach can affect the process of interpretation (Renfrew and Bahn 2002). In this work I will analyze our changing knowledge about peasant houses from the 1960s till the 1990s, focusing mainly on the debates of different archaeologists and historians about changes in the durability of building materials and the plans of buildings. Since the 1960s our idea about the construction of a peasant house has changed considerably. If the data about the construction changed, this undoubtedly led to the reassessment of the opinions about their durability. The study of deserted medieval villages evolved throughout the 19th century. This was, however, based on a limited number of excavations and contemporary excavation techniques. The most popular objects of investigation appearing in the publications seemed to be Anglo-Saxon sunken huts like those at Upton or West Stow. 
This was not due to the common interest of contemporary archaeologists in these structures, but to the fact that sunken huts can be relatively easily recognized during the excavation process thanks to the darker fill of the huts (Beresford and Hurst 1971). Also, stone foundations or walls could be more easily recorded, because of the lack of sufficient techniques to record the remains of timber. Beresford and Hurst in their work "Deserted Medieval Villages" provided only a very limited amount of information about timber buildings, claiming that the later stone disturbances did not allow them to investigate the timber phase (Beresford and Hurst 1971). On the other hand, on the basis of the study of turf and cob walls (not very detailed either) from probably the 9th century Therefore, the surviving vernacular buildings, dated not earlier than the 16th century However, even those earliest and of poorest construction were evaluated as superior to those from the excavations, which did not demonstrate any evidence for substantial timbers able to withstand the weight of a massive roof structure (Wrathmell 1989). After twenty years of research, new methods and resources such as written documents provided the archaeologists with evidence for the use of crucks, forks or siles, which would be the "missing" major timbers supporting the roof. Such evidence includes the accounts of the vicar of Kirkby Malham, who in 1454 paid carpenters for placing stones under the crucks to stabilize the building. The use of these elements is also often recorded in schedules of repairs, like the fifteenth-century one from Northallerton or Durham. Furthermore, there is evidence for timber "tenants' buildings being moved and re-erected". This indicates the houses had to be of good quality, enough to make it worth moving them to another location (Wrathmell 1989). Already Beresford and Hurst (Beresford and Hurst 1971) noted a", "label": 0 }, { "main_document": "colony was the nature of the colonists themselves. 
The people who initially migrated to America from England were predominantly a mix of young male explorers and entrepreneurs whose aims and objectives essentially clashed with the survival of the colony itself. The survival of Roanoke was clearly not at the forefront of their motives. Their primary ambition was to obtain land and wealth before returning shortly afterwards to England. Very few had the idea of settling there permanently. Consequently, when it became apparent that they would not be able to achieve their aims, as there was no easy money to be made, many simply uprooted and returned to England at the first available opportunity. The colonists faced constant food shortages - 'shortages produced discontent and... inaction led to boredom and pessimism' (Quinn). The colonists' attitude of not caring about the survival of the colony manifested itself in their approach to dealing with the Native Americans. They felt superior and that they had nothing to learn from those whom they regarded as 'savages'. On the initial two voyages, many of the colonists were soldiers, whilst very few were farmers. Soldiers are by no means ideal people to start a colony. It was primarily their bigoted nature which led to the poor relations with the Native Americans. Only with the third and final voyage, in 1587, was there a genuine attempt to colonize the island and construct a permanent settlement. However, by this stage it was too late, as it appears that the Native Americans were not pleased to see the return of the Englishmen due to the actions of the previous settlers. As stated above, the settlers were never seen again after the governor John White left for England. When he returned, there was no sign of the colonists, and the people of the settlement have been termed the 'Lost Colony'. Thus, the majority of the colonists were themselves responsible for the failure of Roanoke, including through their own inadvertent actions. 
That is, with regard to the initial two voyages, many colonists did not possess the skills to flourish in a new environment (for example, there was a lack of farmers), but they also contributed to the downfall of their colony through their primary ambition of obtaining wealth, as well as their chauvinistic attitude toward the Native Americans. Only with the final voyage were the colonists more likely to settle successfully, but by this stage the damage had already been done. The final vital factor which contributed to the failure of Roanoke was the lack of support from England for the colonists. There was, in general, a lack of drive for a colonizing movement back home in England, and in particular from the government itself. Also, 'From the viewpoint of the queen and her advisors... it was not of any major significance.' This was very important in explaining why the colony was not able to survive. The reason for the lack of government support was that it was extremely expensive to finance the Roanoke expeditions. The food shortages and", "label": 1 }, { "main_document": "by the apoptotic death of cells covering much of the lamellar surface during normal conditions, resulting in its protrusion (Nilsson and Lutz, 2004; Nilsson and Renshaw, 2004). Other minor but noteworthy adjustments are the temporary blindness and deafness of the carp, resulting from the activity of the auditory nerves and of those involved in vision being strongly suppressed (the reason for this will be discussed in part C). These adaptations are minor compared to the process by which the Crucian carp prevents damage to the brain and ensures its survival through a strategy involving glycolytic activation. ATP is the most important form of energy in the brain and it is mainly associated with the control of ion pumps needed to sustain the electrical activity of the brain. 
The carp must be able to match ATP consumption with ATP production in the absence of oxygen, thereby avoiding energy failure and the accumulation of high concentrations of potassium, which leads to the depolarization of the neuron. The net result of this is the activation of degenerative and lytic processes involving the degradation of DNA, proteins and the cell membrane itself by lytic enzymes, free radicals and nitric oxide (Nilsson, 2004). The Crucian carp avoids this catastrophe by using glycolysis to make ATP. Since glycolysis yields 2 mol ATP/mol glucose (in contrast to the production of ATP by aerobic metabolism, which yields 36 mol ATP/mol glucose), the carp maintains ATP levels by increasing the rate of glycolysis. Glucose is the only cellular fuel that can be used by the brain (Nilsson and Renshaw, 2004). Before the onset of anoxia, the organism builds up extra-large glycogen stores that can be utilized later on. The carp starts producing ethanol as the major end product of anaerobic glycolysis (instead of lactate as in other vertebrates) (Nilsson and Lutz, 1997; Nilsson, 2001). Carassius sp. utilizes this glycolytic strategy to avoid the problem of self-intoxication and the high lactate levels and consequent acidosis faced by other vertebrates (Nilsson and Lutz, 1997). The main steps in this conversion of glucose into ethanol are the transformation of pyruvate into acetaldehyde in a reaction catalyzed by the enzyme pyruvate dehydrogenase, also releasing CO2. Acetaldehyde slips out of the mitochondrion rather than being converted to acetyl CoA (in contrast with other vertebrates). It is finally converted into ethanol by alcohol dehydrogenase (ADH). This enzyme is found at extraordinarily high levels in skeletal muscles. 
Although ADH is predominant in this part of the body, the It is then translocated to the muscles via the blood, where it is transformed to ethanol and CO2. The ethanol then leaves the organism by diffusing across the gill membrane (Nilsson and Lutz, 1997; Nilsson, 2001; Nilsson and Lutz, 2004). Figure 2 shows the ethanol-producing pathway of the carp. The obvious advantage of this system is that the organism has tolerable steady-state levels of lactate and ethanol and it still swims; in other words, it is still active, although its activity is reduced. And it can survive until its supply of glycogen stores", "label": 0 }, { "main_document": "The Objective of the experiment was to use the given apparatus to measure the experimental value of the moment of inertia of various discs using suitable measurements & graphs and finally to compare it with the theoretical values. The Apparatus consisted of a wooden plate with a drum with a helical groove on one side and a disc holder on the other side. There was a string attached to the drum and the string could be wound around the groove. The other end of the string had a lasso which could be used to suspend weights. A disc was attached to the holder and various weights were suspended on it. They were allowed to fall from the maximum possible height and the time they took to fall to the ground was measured using a stopwatch. This process was performed on two different discs with different radii. Then a graph was plotted of the mass suspended against the reciprocal of the square of the time taken to fall to the ground. The slope of the line of best fit obtained in the graph was used to find the experimental value of the moment of inertia of the disc using Newton's equations of motion. This value was then compared with the theoretical value of moment of inertia obtained using the Formula The experimental error was then judged using the difference in the values obtained. 
The percentage error was found to be very high and hence I personally don't think that this method can effectively measure the moment of inertia of the discs. The Purpose of the experiment was to analyse and judge the accuracy of the experimental measurement of the moment of inertia of a disc using the given apparatus. The word INERTIA is defined as the inability of a body to change by itself its state of rest or of uniform motion, which is an effect of its mass, or can be considered as the mass itself. The rotational equivalent of this inertia is called the moment of inertia or the second moment of mass. The expression for the moment of inertia can be derived as follows. The Linear velocity of a body (v) is the product of the radius or distance from the axis of rotation (r) and the angular velocity (ω). The kinetic energy of this rotating object is hence The total kinetic energy in the body is the sum of the individual energies of all the particles contained in it. Looking at this equation in a rotational context, the angular velocity replaces the linear velocity of the linear context, and the summation replaces the mass; this summation is better known as the moment of inertia. Hence Since the summations are similar to integrals, in order to derive an integral equation for the moment of inertia, let us divide the body into a number of smaller pieces of equal mass This integral equation can now be used to derive a linear equation for the moment of inertia of a disc that we are going to work on in this experiment. A disc", "label": 0 }, { "main_document": "In George Byron's Byron, George Gordon. 'Manfred: A Dramatic Poem', in Romanticism: An Anthology, ed. Duncan Wu (Oxford: Blackwell Publishing, 1998), III. I. 160-167, p. 745. 
The notion of 'chaos' influencing the 'mind' and 'passions' is a direct reflection upon issues of the self, the category of the individual, which permeated the literature and philosophy of the nineteenth century, due to the economic, religious and social upheavals caused by advances in science, the debate on slavery and industrialisation, all of which disrupted the class structure. Critic David Punter concurs that \"it is conventional, and reasonable, to say that the society which generated and read Gothic fiction was one which was becoming aware of injustice in a variety of different areas ... there was a dawning consciousness of inequality.\" Punter, David. The Literature of Terror. London: Longman, 1980. These external conflicts were internalised by the writers and poets of the genre and immortalised through literary tropes, symbols and figures in Gothic writing. Therefore, to explore the category of subjectivity, which depends \"on an individual's perception for its existence,\" I will examine elements of George Byron's Oxford Dictionary of English. 2nd ed., s.v. \"subjectivity.\" Oxford and New York: Oxford University Press, 2003. By entitling the poem A strong, stark entity unto his own, whose unconventional codes of behaviour separate and define him as an overreacher and a liminal figure. Based strongly upon J.W. Von Goethe's Whilst Faust desired 'certainty,' Manfred desires 'self oblivion,' I recall Mary Shelley's Only the death of his sister, Astarte, has brought on Manfred's anagnorisis, which consequently has caused his sense of self to deteriorate into an extremely negative self-image. Taken from overhead projector notes provided in class. Byron, op.cit., I. I. 13, p.718 Ibid., p.719 Ibid., p. 722 Ibid., p. 722 Ibid., p. 
732 This notion of negativity is adopted and used by Keats in his idea of 'negative capability,' which he puts to full use in In a letter to his brothers in 1817, Keats argues that the poets of the day are \"incapable of remaining content with half knowledge.\" He wished to implement \"negative capability, that is when man is capable of making all disagreeableness evaporate, without any irritable reaching after fact and reason.\" He desired the reader to immerse the self in what is perceived, and lose oneself in a dream-like reverie, instead of constantly analysing the text. The ambiguity of the passage depicting Madeline's rape by Porphyro highlights the liminality that exists between consummation and purity. The reader is left in doubt as to whether the rape actually takes place because Keats's description is so poetical: Roe, Nicholas. John Keats and the Culture of Dissent. Oxford: Clarendon Press, 1997. Ibid. Keats, John. 'The Eve of St Agnes (1819)' in ed., Duncan Wu. (Oxford: Blackwell Publishing, 1998.) 318-321, p. 1052. This liminality between the dream world and reality leaves the reader without 'fact and reason' and they are forced to remain 'content with half knowledge.' Liminality also encompasses notions of religion and sexuality. Like Matthew Lewis' creation, Ambrosio, in The Abbot reminds Manfred twice that", "label": 1 }, { "main_document": "India and the UK have long shared a global vision and democratic values, and the relationship between them was further improved as a result of the signing of a Joint Agreement by the two Prime Ministers in September 2004 (British High Commission, 2006). This encouraged the two nations to form strategic partnerships (British High Commission, 2006), hold continuous discussions, and subsequently cooperate on contemporary agendas (see Appendix 2) involving terrorism, nuclear energy, science, technology, security, economic partnerships, culture and education (International High Commission, 2006). 
From a technological perspective, India and the UK are both easily accessible by air and have roughly the same number of international airports (IHC London, 2006, Incredible India, 2006 & Visit Britain, 2006), while domestic flights are more common in India than in the UK due to India's size, serving up to approximately 60 destinations. Additionally, domestic flights within India are very cheap compared to the UK. Nevertheless, the quality of Indian airports has been a particular concern, as indicated by the High Commission of India (2006), which might then affect possible return visits. In addition, the Indian government has made more consistent efforts than the UK in IT education and many kinds of technological development plans over the past years (Asialink, 2006). India has consequently become the IT leader and a centre for outsourced business services worldwide (Mipimasia, 2006), leading to an ongoing growth in the number of domestic and international business travellers (Zavari, 2006) who take part in the start-up period of the entry of new companies and in frequent training (Thaker, 2006). Middleton & Clark emphasise that India's rapid economic growth (see Appendix 2) creates vast demand for domestic holiday and business trips; given the exchange rate (44.10 INR per US$), Indian currency is also cheap compared to the UK's. Consequently, its economic growth helps to attract foreign visitors. In addition, referring to Appendix 3, India's main overseas markets, the UK (15.9%) and US (14.9%), are experiencing strong economic growth, which may further increase tourist travel to India (Travel and Tourism Forecast, 2005). The numbers of tourist arrivals over the past years (see Appendix 3-1) give evidence to place its tourism industry in the "Involvement" stage of the "Tourist Area Life Cycle" (Butler, 1980). 
Among all the international tourists, according to a study by the World Tourism Organization in 2003, there were 2.5 million holidaymakers, whilst only 0.18 million people visited India for business purposes of all kinds (World Tourism Organization, 2006). Yet the continuously growing economy offers the capacity to support tourism growth, as well as being a criterion reflected in the number of trips for business purposes (Middleton & Clark, 2000); therefore, the demand from international business travellers in recent years and in the near future can be estimated optimistically. Aside from the volume of international tourists, domestic tourism also has a great impact on a nation's tourism industry (Bigano Experiencing general development of India ( The purpose of domestic travelling differs depending on regions ( For example, the southern states of India account for the higher share of tourists", "label": 0 }, { "main_document": "excess staff that can occur with a level capacity plan, whilst it also aims to satisfy customer demand. This is particularly suitable to the store because: it has high variations in demand, perishable goods, a flexible workforce (see section 1.3.1.) and a limited storage capacity (section 1.3.2.) - "A pure chase demand plan is more usually adopted by operations which cannot store their output" This section answers: how does the store efficiently allocate its resources over time and change its capacity according to forecasted demand changes? To efficiently address capacity management, the store employs students as temporary staff to work during term time to cope with the extra demand. This flexible workforce also has multiple skills and the ability to switch easily from one task to another. The staff have broad training in job enlargement and rotation so they know how to restock shelves, clean, use the tills, assist in the storage room and do stock checking. The store also controls staff at peak times by using an 'overlap' system. 
Since the store is open 14 hours a day, the employees work shifts instead of full days. If organised correctly, this can mean an overlap of staff at lunchtime, so that the extra staff needed are available solely for the busiest time of the day. When the store is quiet, staff engaging in other activities such as cleaning or maintenance can limit 'idle time'. The limited storage capacity affects inventory management by restricting the quantity of stock held and the quantities ordered. The store manages with such limited space because it outsources its warehousing and a significant part of its sourcing and supply chain management activities to a service provider: Nisa. With almost a delivery every day and managers who are experienced in demand fluctuations, the store has the flexibility to order 'just-in-time,' each evening, to meet its precise needs so it can adjust itself to demand. This system is vital to achieving the important quality objective of stocking fresh food. This means that stocks must be managed according to their perishable nature as well as their demand structure. However, during our research we found that the stock room capacity is not entirely utilised because of its disorder, which causes problems and leads to under-utilisation of resources. We also discovered that the management want a larger stock room, to give a larger safety margin (or buffer stock) for highly demanded goods that sell quickly, so the chance of having stock-outs is reduced. It is possible that the store makes use of a buffer inventory system for certain products because "its purpose is to compensate for unexpected fluctuations in supply and demand." However, a larger stock room would be hard to obtain, expensive, and may lead to more wastage, especially of perishable goods. Layout is a key consideration for any operation to ensure there are not overlong or confused flow patterns, long process times, customer queues or high costs. 
As mentioned in part one, cost is not a key performance objective for Costcutter yet, because it is a supermarket,", "label": 1 }, { "main_document": "expected to be up to 6 hr. A severe constraint on size was required as the slicer was required to go onto a work surface where space was limited. In conclusion, the initial specification could be summarized as follows: Hence the gear ratio U = Since the reduction gear ratio was enormous (20:1), a double reduction gearbox was selected so that a higher reliability could be achieved. The gearbox was connected between the motor and grinder and bearings would be placed at both ends of each shaft. Straight spur gears would be the best combination to employ, as such an arrangement could provide a more convenient procedure in further analysis. Before the commencement of the design analysis process, the initial layout of the product was illustrated (Figure 1), with the gearbox housing represented by the grey rectangle. From the diagram, power was transmitted from the AC motor to gear 2 through the shaft and gear 2 was then connected to gear 3. Again, power was transmitted through the shaft from gear 3 to 4 and gear 4 was connected to gear 5. Finally, power was transmitted from gear 5 to the grinder through the shaft. In order to select the most suitable components, a range of analyses was performed and the limitations were identified so that the gearbox could be operated under safe conditions. 1) First of all, the nominal velocity ratio U was determined: However, a double reduction arrangement was employed, therefore: 2) As the materials used in both the pinion and the gear were identical, the pinion was always weaker, since more undercutting could be found on the smaller gear. In order to avoid any undercutting appearing in the operation, a high pressure angle was selected (25°). To ensure no interference could be found within the gearbox, a minimum number of 12 teeth was required in the pinion (Appendix A). 
Hence the number of teeth could be determined for the gear: If After that, the modulus Since D 3) Moreover, the pitch line speed was computed: Since 4) As power P was provided, the transmitted load F 5) On the other hand, the dynamic factor K 6) In order to avoid torsional distortion, the face width b was assumed to be 7) Since the bending stress within the gear depended on the torque, geometry and the form factor, the expected bending stress The Lewis form factor Y could be identified (Appendix B). For the pinion, Y For the gear, Hence, Therefore the maximum bending stress occurred at the pinion with the value of 20.21MPa. However, in order to achieve a safe and reliable operation, the allowable bending stress S 8) where and Hence assumptions of factors were made for a meat slicer: k Gray cast iron was selected, as its ultimate tensile strength was approximately 160MPa, which covered the bending stress sufficiently. Further analysis would be carried out in order to investigate the contact stress between gears. The contact stress (Modified Hertzian stress) where Similarly, the allowable fatigue strength S where Therefore Since the yield strength of gray cast", "label": 0 }, { "main_document": "Democracy\", p. 198. Andrew Linklater, \"Citizenship and Sovereignty in the Post-Westphalian European State\", in Daniele Archibugi, David Held and Martin Kohler (eds.), However, democracy among nations is not something easily accepted. Cosmopolitan thinkers emphasise that possibilities for democracy should be analysed comparatively. The Ancient \"polis\" is no longer democratic enough for us today, just as Plato or Rousseau could not have imagined democracy in vast regions. Moreover, the construction of identity is a long-term and extremely difficult endeavour, which is now being experienced by the European Union (as a supra-state institution which resembles the model of cosmopolitan democracy). Daniele Archibugi, \"Principles of Cosmopolitan Democracy\". 
Cosmopolitan democracy mainly refers to reforms of institutions like the UN or the creation and consolidation of other global institutions, but it does not give substantial consideration to more fragile and subtle issues like ethics or identity. \"Cosmopolitanism will only become a substantial ethical vision if it is able to interrelate a number of questions related to politics and society, culture and the self. (...) Cosmopolitan moral progress can be accounted for when 'they' become 'us'\" Otherwise it will probably face the same democratic deficits as the EU is facing now, being more of a technocracy than a democracy. How are cosmopolitan feelings going to be nurtured? Nick Stevenson, \"Cosmopolitanism and the Future of Democracy: Politics, Culture and the Self\", 2, 200, pp. 251-256. In conclusion, a cosmopolitan democracy faces many difficulties as well as resistance on the part of political actors. Interestingly, the same states that call themselves the advocates of democracy world-wide are extremely resistant to more democratic decision-making in global regulatory frameworks and choose to act more or less unilaterally. Unfortunately, in reality the realist discourse has not been replaced by a cosmopolitan discourse. Faced with political pressures, a cold realist politician may demolish in one second the whole fragile temple of democracy built over decades. The paper has shown that cosmopolitanism is based on the assumption that globalisation is transforming the system of governance and the political loyalties of people. Sceptics of globalisation can easily reject the cosmopolitan urge for change. Secondly, the moral universalism advocated faces serious controversies, since Western values may not be morally superior. In addition, the paper uses the EU experience to exemplify the difficulties inherent in forging transnational 'imagined' communities. 
Cosmopolitans should focus more on moral cosmopolitanism and not only on institutional cosmopolitanism.", "label": 0 }, { "main_document": "In This contention is supported by the judgment of Blackburne J in The effect of this is to render the trustee entirely culpable even in a case where the gross negligence of a beneficiary had contributed significantly to her own loss. Without a mechanism to effect a just allocation of responsibility, this seems to impose considerable hardship on trustees. [1999] Lloyd's Rep PN 241 (ChD). However, in New Zealand a plea of contributory negligence has been allowed in the context of a trustee's breach of fiduciary duty. Moreover, it has been pointed out that while Blackburne J rejected apportionment in cases of G Watt, 'Contributory Fault and Breach of Trust', (Winter 2005) OUCLJ 5(2), 205-224, at 207-208. In this paper we examine whether there is room for the introduction of contributory negligence on the part of claimants into the realm of liability of trustees for breach of their duty of care. The starting point of our analysis is the issue regarding the fusion of common law and equity. We distinguish breach of fiduciary duty from breach of duty of care and make certain assumptions before proceeding to identify the relevant distinctions between tort principles and trusts law. We then focus our attention on the fault principle by comparing tortious negligence to trustees' breach of their duty of care. After taking a closer look at common law damages and equitable compensation, we observe that the compensatory goals of the two remedies are essentially the same. Based on this remedial congruence, we argue that both should attract the same sort of analysis in determining limitations on the remedies. 
Since we are concerned with introducing a tort concept into trusts law, caution is taken with regard to the issue of the fusion of common law and equity. While some opine that fusion is of crucial importance in introducing contributory fault into trusts, This is clearly out of the scope of a paper merely concerning the issue of contributory fault. Also, it must be pointed out that common law and equity are still held to be separate to this day. However, given the institutional difference between legal and equitable title in modern trusts, See A Burrows, 'We do this at Common Law but that in Equity' (2002) 22 OJLS 1. See J Martin, 'Fusion, Fallacy and Confusion: A Comparative Study' (1994) The Conveyancer and Property Lawyer 13. See G Watt, 'Contributory Fault and Breach of Trust', (Winter 2005) OUCLJ 5(2), 205-224, at 210-211; see also P Cane, G Watt, 'Contributory Fault and Breach of Trust', (Winter 2005) OUCLJ 5(2), 205-224, at 210. It would be appropriate to distinguish breach of fiduciary duty from breach of duty of care at this point. This has been described as a \"dimly lit area\" of law. This observation will be carried on to the next phase of analysis. G Watt, 'Contributory Fault and Breach of Trust', (Winter 2005) OUCLJ 5(2), 205-224, at 208. In [1998] 1 Ch 1 (CA). [1998] 1 Ch 1 (CA)", "label": 0 }, { "main_document": "The two men I will be discussing are Aristotle (384 - 322 BC) and John Locke (1632 - 1704 AD). They were both Empiricists, so they believed that knowledge is gained through experience, and they formulated their ideas from observation. Great thinkers before them influenced them both, yet both constructed new and refreshing ideas on old beliefs and concepts. Many of the beliefs and ideas they both share are very similar, yet there are some distinct differences in the way they think. The main similarity is their focus on sensation being the basis for all knowledge. 
Yet Aristotle furthered this thought by proposing the 'active intellect', suggesting the mind is indeed divine and immortal, whereas Locke believed cognition began solely with sensation. Aristotle was a true empiricist, gaining all his knowledge from observation; he had no time for the use of experimentation. However, he did combine this use of observation with careful reflection and reasoning. When discussing Aristotle it is important to mention that he was the pupil of Plato, which must be considered as a historical and intellectual influence upon his beliefs and ideas. Yet his philosophical position was at odds with his master's. This difference lay in the precise nature of the 'forms' and their relation to the empirical world. For Aristotle, true reality was the perceptible world of concrete objects, not a world of eternal ideas that were imperceptible. This was similar to Locke's view that human ideas of objects and what they resembled were actually incorrect, as there was no guarantee that the human idea of an object genuinely resembled what it was supposed to. So whilst the two views are in a way similar, Locke casts doubt on the human perception of concrete objects. There are, according to Locke, three factors involved in the process of human knowledge. Firstly the mind, then the physical or, in the term of Aristotle, the concrete object. Finally, the perception or idea in the mind that represents that object. For Locke, man only directly knows the idea, not the object; he only gains knowledge of the object through the idea. Outside of man's perception is simply a world of substances in motion. For Aristotle, Plato's theory of Ideas was both empirically unverifiable and full of logical difficulties. Aristotle saw that Plato's confusion lay in the treatment of a quality as a substance. Aristotle instead developed categories, in which the substance is the primary reality and the qualities are an abstraction of the substance. 
Richard Tarnas uses the example of a tall, white horse to attempt to explain this theory. The horse is the substance, while the quality is the whiteness and the quantity is how tall the horse is. Tarnas (1991). Apart from the substance, everything else exists only in relation to an individual substance. Aristotle placed a large emphasis on substances, as in his view they underlie everything. If substances did not exist then nothing would exist. The real world consists of distinct and individual substances. A substance is not simply a unit of matter, but", "label": 1 }, { "main_document": "maintain the structure of that society, with its function being the contribution it made to the \" His theory was labelled 'Structural Functionalism', the logistics of which shall be examined shortly. An example of this exists in Radcliffe-Brown's monograph The Andaman Islanders, where he comments on the weeping rite which exists in this society. It exists in many contexts; however, one is the situation where friends are reunited after a separation. The weeping rite is obligatory, and during this time the emotional tension must be relieved in some way. The two embrace and weep; this ceremony reinstates the condition of intimacy and affection that existed before the separation. Radcliffe-Brown believes that it is \" The overall aims of these two scholars were different, which subsequently led them to produce two different theories around social anthropology. Malinowski was primarily concerned with customs, his aim \" The term 'function' to Malinowski under his headline Functionalism conforms to his focus upon the individual by \" Radcliffe-Brown's aim was somewhat different; he wanted to \" Through this it becomes clear that another very important difference arises: the use of the comparative technique in order to support your given theory. 
Malinowski's theory was unable to explain the diversity of human societies, primarily because \" Radcliffe-Brown regarded function, under his theory of 'Structural Functionalism', as the contribution any institution gave to the maintenance of the society's structure. He often referred to his theory as comparative sociology, with its aim being \" Through his comparison he wanted to show that different cultural areas with no historical contact show similar phenomena in culture, presenting a universal development of the human mind. An extremely scientific aim in comparison to Malinowski's. One of his greatest contributions to anthropology was his classification of non-Western societies into types and sub-types (Layton, 2003). In studying 130 Australian aboriginal tribes, he noticed many correlations between them, one being a totemic religion: a religion whereby different moieties are commonly identified by a specific animal species, two common species being the eaglehawk and the crow. Radcliffe-Brown was curious as to why these moieties were assigned to these particular animals; the answer lay in opposition. In the case of these two species, their opposition comes from many Australian tales in which they come into conflict. Radcliffe-Brown sees opposition as a universal feature in human thinking, for example up and down, black and white. Radcliffe-Brown later wrote that classification was \" As can be seen, Radcliffe-Brown was very specific in his approach to functionalism, looking for specific attributes which could relate different groups of human society together; Malinowski's theory did not work from this perspective. Nadel (1957) commented that Malinowski's thought moved on two levels only, from the Trobriands to general primitive man and society at large. The only time that he did refer to other primitive societies was to gain supporting evidence. 
\"His generalizations jump straight from the Trobrianders to Humanity, as undoubtedly he saw the Trobrianders as a particularly instructive species of humanity\" (Nadel, 1957, cited in Barnard, 2003, p69). This quote shows that Malinowski's theory", "label": 1 }, { "main_document": "It cannot be denied that the European Commission occupies a \"central position\" The Commission has come a long way from its humble beginnings in the 1950s. The Commission's \"influence usually looms large\" It has been at the heart of the Single European Market project and much EU policy emanates from the Commission. However, it does not necessarily follow that the Commission is the government of the EU, even if it views itself thus. The traditional image of the Commission is of a monolithic supranational bureaucracy and for many euro-sceptics \"the European Commission is a natural bogeyman\" Yet, it is far from being omnipotent and lacks some crucial features of a government. Indeed, it can be argued that it is \"a bit of a dog's breakfast\" Furthermore, the Commission has been plagued by allegations of corruption and for it to be a proper government, it needs legitimacy and accountability. Neill Nugent - \"The European Commission\", page 10. Neill Nugent - \"The European Commission\", page 15. John Palmer - \"Brussels Bogeyman is Thatcher's Nightmare\", John Palmer - \"Brussels Bogeyman is Thatcher's Nightmare\", Neill Nugent - \"The European Commission\", page 329. It is firstly appropriate to consider what is meant by the term government. It is important to remember that the EU operates under a \"sui generis\" structure. Therefore, one cannot really compare the Commission to any of the national governments of the member states. Nevertheless, any pseudo-governmental body must have authority and a means of exercising control over the people or states it governs. A government should propose and pass legislation in the interests of the people. 
This should span a wide range of policy areas including the basic welfare of the people, as well as global issues such as defence. In democratic countries, governments are elected by the people, to whom they are accountable. In many systems, the Government is accountable to a Parliament, whose job it is to scrutinise legislative proposals. Governments can be organised in many ways, but it is a common feature nowadays to have a large civil service and bureaucracy. Governments generally have a cabinet of ministers, each responsible for a particular policy area. The government is led by one person, in Britain the Prime Minister or in Germany the Chancellor, who has ultimate authority. The role and functions of the Commission by no means form a perfect parallel with the classic notion of a government. The Commission's official powers set out in Article 211 EC do not do justice to its real job. The Commission, as \"a collegiate, vertically-organised institution\" It has a plethora of functions and is not characterised by \"any rigid doctrine of separation of powers\" Dionyssis Dimitrakopoulos The first contains systemic roles, namely maintaining the EU as a credible system of governance, for instance by monitoring the implementation of the Treaty. The second comprises sub-systemic roles, namely policy initiation and formation. To achieve this, the Commission has a strong underlying philosophy, which was expounded by Jacques Delors during his presidency. The Commission has developed \"a strong sense of Europe's destiny\" Ideologies often exist in a", "label": 1 }, { "main_document": "A protein is a polypeptide, or complex of polypeptides that has attained a stable three-dimensional structure and is biologically active. A polypeptide is a sequence of amino acids linked by peptide bonds between the amino and carbonyl groups. Therefore, amino acids are sub-units that are joined into condensation polymers which make up proteins. 
Glycine (aminoethanoic acid) This normally exists as a zwitterion A condensation reaction occurs between the carboxyl group of one molecule of gylcine and the amino group of the other molecule of glycine to form a peptide bond (-CONH-) and a molecule of water is given off. Gylcine + Dilute Hydrochloric Acid Glycine + Pure Water = Zwitterion The hydrogen atom from the carboxylic acid group protonates the basic amine group Glycine + Dilute aqueous sodium hydroxide Using the valence-shell electron-pair repulsion (VSEPR) theory it can be seen that in Glycine the side groups are arranged in a tetrahedral shape. There are no lone electrons on the -carbon. There are four covalent bonds, one N-H, one C-C and two C-H bonds, which are arranged evenly around the central -carbon. There is a separate region of high electron density corresponding to each bond. The bonds are spread out as much as possible due to the repulsion between them. This arrangement is taken up because it gives the lowest potential energy due to electrostatic repulsion and yields a bond angle of 109.5 When Glycine is placed into a weak acid the amino group acts as a acceptor and the chloride ion attaches to the amino group. The amino group is normally arranged in a tetrahedral shape but with bond angles of 109.5 However with the addition of the Cl- the bond angles will be reduced to approximately 104 This means that this group within itself will be more compacted but it will not affect the overall shape of the molecule and it does not affect the bond with the -carbon. Globular proteins are three dimensional structures that contain both secondary and tertiary structures. There are two main types of secondary structure, the -helix and the The -helix is a rigid formation of the polypeptide chain produced by the hydrogen bonds forming between peptide groups, C=O and N-H. The bond forms with the peptide group three residues ahead leading to a helical uniform structure. 
The hydrogen bonds are able to form due to the two permanent dipoles on the carboxyl and amine groups and the lone pair of electrons present on the nitrogen atom which acts as an electron donor. The hydrogen bonds are at their optimum length leading to strong bonds. The -helix also leads to a tightly packed core maximising the association energies of the atoms and enabling Van der Waals forces to further strengthen the structure. Hydrogen bonds occur between neighbouring polypeptide chains to form These can be parallel or anti-parallel. In order to obtain the optimum hydrogen bonding conformation the sheets are pleated. Parallel sheets are less common and hence rarer than anti-parallel In globular proteins such as carbonic anhydrase Carbonic Anhydrase, illustrated here contains both -helices and -pleated", "label": 1 }, { "main_document": "table 2 that the project will require Therefore, the company needs appropriate What's more, the pay back period will be near 2.1 - 3.5 years and the accounting rate of return is between 0.28 - 0.47 (see Appendix 3). This profit could be an underestimate number because the revenue (such as the sales) is estimated in a conservative manner. The revenue will be increasing due to the growing webpage impression and online visit quantity. My Mp3 will have a small inventory for the products, and the e-business not does require an expense office in a good position, therefore, together with the lease policy, the fix cost does not tie up a lot of cash. What is more, the risk of a dead debit or bankrupt is quite low in My Mp3, because all most of the sales are received by cash and the outflow of cash is quite low because of the credit payment strategy. Furthermore, it is more than 90 percent possibility that the variable cost of per product will be decreasing due to a growing bargaining power from a larger amount of orders raised with increasing webpage impression and online quantity visit. 
Furthermore, there are a lot ways to cut the cost and raise the revenues. A lot of online communication and marketing techniques can be used to increase the sales online in a cost-efficiency way. For example, in the future, auction model could be a new important income streams as My Mp3 develops. Or ideally, after a strong cooperating relationship having been built between My Mp3 and its suppliers, the products can be delivered from the suppliers to customers directly in order to cut the transportation cost for customer and company, as well as building another competitive advantage over other competitors. My Mp3 will be willing to establish sound debt to equity relationships which contribute to the successful of the business through sufficient equity or excessive leverage (Nevitt & Fabozzi, 2004). My Mp3 is planning to obtain funds for business through a number of different sources. Firstly, because My Mp3 is a small e-company with a manageable business risk, long-term loans on a annual payment basis - 5 years loan-- are an excellent and main source of finance when the interest rates are relative low at current (Green, 2006). My Mp3 will give a loan guarantee scheme to require financial supports from financial institutions (such as banks), individuals or business angels (which will provide 3-5 years within small investment size). Loans, being engaged about 50 percentage of the working capital ( The advantage of is long-term loans that the interest payments will be lower than short-term, and brings advantage of tax deductions (Banks. 2001). My Mp3 will quest additional short-term financing to manage the uncertainty affects the cash budget (Broyles, 2003). 
It will be probably 10 percentage of the total working capital ( The main source is the flexible rated overdraft bank loan in the bank account for a short period, which will help My Mp3 to overcome the cash shortage (often is used for buying supplies) during the", "label": 0 }, { "main_document": "400 radiocarbon years gap due to the absence of applying \"marine reservoir effect\" (Richards, Price and Koch 2003:288-293). The results of Richards and Hedges article indicate the complete dietary shift in Neolithic Britain, because the results of 78 radiocarbon-dated human remains suggest the marine base diets. Moreover, the structure of article is well planned, and also main arguments and isotope results are highlighted clearly. However, it might be preferred that the application of scattergram with valuable isotope results. Furthermore, this article is quoted by the numbers of authors, therefore it is clear this article have been considered as making large contribution to the study of subsistence change at Mesolithic-Neolithic transition. However limitations for isotope measurements are introduced only a small number, there are, in particular, \"preliminary models\" which show that a diet dominated by terrestrial food might include up to 20 per cent of marine protein without raising the stable isotope values (Barbereba and Borrero 2004:191-195). Moreover, this article has not been supported by the cultural continuity, which tends to be observed from faunal assemblages. In general, faunal assemblage refers to the generalised dietary tendency over long period, whereas stable isotope data refers to the dominant protein intake by an individual during the last 10 years before death (Barbereba and Borrero 2004:191-195). Thus, the term \"stable isotope analysis as a direct evidence of human diet\" which is used in this article, is considered as not suitable expression, because the dietary reconstruction needs to consider with combination of other evidences (Milneral. 
2004:9-22). Furthermore, this result did not clearly represent subsistence change from hunter-gatherer to agriculturalists at Mesolithic-Neolithic transition, because generally stable isotope analysis cannot distinguish between wild and domesticated resources (Thomas 2003:67-73). This conclusion attempts to summarise the brief critique of \"A Neolithic revolution? New evidence of diet in the British Neolithic.\" written by Richards and Hedges in 1999. According to this critique, a limited representation of isotope analysis drawback could be identified as weakness of this valuable article, especially for people who are studying stable isotope analysis. On the other hand, the important results are represented well, and this isotope results are frequently referred by other isotope studies. Thus it can say the usefulness and values of this article is relatively high. Therefore this article would be playing important role in the study of investigating nature of the transition between Mesolithic and Neolithic. RICHARDS, M.P., and HEDGES, R.E. 1999. A Neolithic revolution? New evidence of diet in the British Neolithic,", "label": 0 }, { "main_document": "The \"first generation\" model is first pointed out by Krugman in 1979. It is a model about the timing of the balance of payments crisis and how the country's currency is being attacked. When a central bank's foreign currency reserves are limited but the bank is committed to tolerate the persistence of government budget deficits, it will find itself unable to maintain the fixed exchange rate and eventually leads to the collapse of the exchange rate regime. This essay sets out to look at the macroeconomic model within which the \"first generation\" speculative attack model is conducted. The next section looks at the empirical evidence in support of this model and finally, the essay examines the limitations and extensions of the model. 
The model used in this essay is a simple linear example developed by Flood and Garber (1982). The basic setup of the model includes three important relations: The LM equation is a money demand equation. This means that the demand of money depends positively on price level and output and negatively on interest rates. The purchasing power parity condition (PPP) requires that the prices of goods, The uncovered interest parity (UIP) is an arbitrage relation stating that domestic and foreign bonds must have the same expected rate of return, expressed in terms of the domestic currency. All starred (*) variables are assumed to be given exogenously. In this model, the country is having a fixed exchange rate regime. The supply of money in the country is where Domestic credit grows at a constant rate of In this economy, the government budget constraint is where If the public is already holding the maximum level of bonds issued by the government, the only way for the government to finance its budget deficit would be through issuing bonds to the central bank. Thus, the central bank has two roles in this model. It has to monetize the government budget deficit by buying government bonds and it also has to defend the exchange rate of the country by buying and selling foreign reserves. Since the requirement to monetize government budget deficit is given priority, the inconsistency of the central bank's two objectives will cause the exchange rate to float ultimately. Under the fixed exchange rate, When all exogenous variable are normalized, the exchange rate will be equal to money demand. This show that under fixed exchange rate regime, the monetary policy is endogenous since it must be consistent with the level of exchange rate chosen by the government. 
Knowing this, the supply of money in the economy, which is made up of domestic credit and foreign currency reserves will now be The inverse relationship between domestic credit and reserves indicates that as domestic credit is growing constantly at the rate of When government spending increases and leads to increased domestic money supply, the central bank has to sell foreign reserves in order to buy domestic money such that total money in the economy remains the same. The central bank stops intervening in the foreign exchange markets when it has no more foreign reserves to", "label": 0 }, { "main_document": "a production rate 8700units/week (6000units/week for XL model and 2700units/week for XL model) will be adopted, with day shift plus night shift. Safety stock is set by 700units. Owing to the constraints of initial inventory and lead time of accessories, we have to do a little modification in week 25. Available (t) = MPS + available (t-1) - demand Adjusted for SS = available - safety stock It can be seen that all the demand can be satisfied, but the indicator of adjusted for SS sometime is negative which can be changed by raising capacity of production. Actually, we should monitor the performance of MPS and modify it weekly. The bill of material specifies the makeup for each part in the end item; it answers the question that what ingredients do we need to make a finished good [6]. In this case, the BOM is simple and displayed in figure 1-6 and 1-7 Before producing a material ordering plan, we should consider the initial inventory, EOQ and safety stock. s/lsc Staring inventory levels: Main Bodies =9100; Aerials =9900; Accessories =13400 e/lsc Due to the constraint of frequency of delivery, EOQ can not used in this situation. Because the suppliers are not reliable, so, we have to hold a reasonable safety stock Main bodies, aerials and accessories are dependent demand in this game which will be determined by quantities of finished goods which will produce weekly. 
Combining the MPS, BOM and initial inventory level, we can get the material ordering plans. Forecasting aims to improve quality of decisions. Generally, a forecasting approach involve in several steps as follow [6]: In the case, there are two kinds of products for different customers. On the one hand, the OEM's demand for completed units is fairly steady, on the other hand, the demand of other customers for individual components vary in time and quantity. So, two forecasting systems are required for different market demand. The objective of this forecasting is to respond the change of OEM's demand as quickly as possible. The demand is fairly stable, but it is possible that there is some slight change during each order quantities. It is appropriate to choose exponential smoothing method to forecast the OEM's demand, because exponential smoothing can arrange a weight for actual data and old forecast to chase the error of the forecast just made. In the general case, the formula of exponential smoothing method is given by: In fact, the value The increase of value Once the customers' demand fluctuates, accordingly the value of forecasting will respond quickly to match actual demand. When the value So, if we maintain original OEMs' demand, a small value of As to other customers, the historical demand pattern is random without any trend and seasonality. Randomicity is a highlight characteristic. The forecasting objective is to filter the effect of random fluctuation and meet major customers' requirement. Moving Average could be adopted in this situation, because the average can remove the random effect over a great deal of observations. The formula of moving average generally looks like [4]: In the", "label": 0 }, { "main_document": "'Clinical linguistics is a core element at the centre of the interdisciplinary education of students training for a professional qualification in speech and language therapy.' 
Speech and language therapists work with individuals with language and communication problems. The problems they deal with can be extremely varied: they range from language delay in children, stammers and hearing difficulties, to communicative disorders such as aphasia which result from viral infections, trauma to the head or strokes. This essay aims to examine the importance of the study of linguistics to a speech therapist, looking specifically at its use in the assessment and treatment of two well known types of aphasia, Broca's Aphasia and Wernicke's Aphasia. The ultimate aim of a speech and language therapist is to assess the severity of a patient's communicative disorder and to offer appropriate clinical management and treatment to that patient. Whilst a speech therapist will need a knowledge of psychology, medicine, biology and anatomy, together with clinical skills and counselling skills, it is their linguistic training that will enable them to make accurate and detailed linguistic descriptions of the patient's communicative abilities and enable them to measure the effectiveness of the treatment. Since communicative impairments can affect all aspects of language, it is essential that the speech therapist has knowledge of all the different levels of language (these include phonetics, phonology, morphology, syntax, and semantics) as well as other key areas of linguistics such as sociolinguistics, psycholinguistics and pragmatics. The latter two have become extremely important to speech therapy in recent years and will be discussed in detail later in the essay. A speech and language therapist can only begin to analyse impaired language once they have gained an understanding of normal language development. Therefore knowledge of all linguistic areas and how they function in normal communication is very important. 
It allows the speech therapist to recognise when a patient is producing atypical language structures, whether their problem is extremely obvious or much more subtle. Aphasia is a good example to refer to when looking at how linguistics is used in speech therapy, as it encompasses a variety of language problems and can range in severity from very mild to extremely severe. It is a language disorder caused by damage to the brain either by a viral infection, a stroke or head trauma. It affects the production and/or comprehension of speech and the ability to read and write. ( Aphasia may affect mainly a single aspect of language use, for example the ability to put words into a sentence or to retrieve the names of objects, or it may impair multiple aspects of communication, leaving few channels open for a limited exchange of information. ( The latter is unfortunately more often the case, and it is the job of the speech therapist to decide the degree to which each of these channels are able to function for communicative purposes and to design treatment which might improve their use. For this, the speech therapist will have to use their knowledge of linguistics firstly to assess and then to interpret the aphasic's", "label": 1 }, { "main_document": "factor present in EMS and P It specifies the fate of the former and is prevented from acting on the latter by PIE-1 (Seydoux However, SKN-1 is probably not the only transcription factor whose activity is repressed by PIE-1. Nevertheless PIE-1 appears to have another important role in germline lineage along with another protein. As stated earlier on, PIE-1 is also present in the cytoplasm of germline cells. In the cytoplasm, it is associated with P granules (Tenehaus et al., 2001). These are germline associated RNA-rich organelles believed to be involved in germ cell development (Guedes and Priess, 1997). 
In conjunction with its role in the nucleus, PIE-1's cytoplasmic function promotes the expression of factors that are maternally encoded and that are crucial for primordial germ cell development such as the maternally encoded Nanos homolog NOS-2 (Tenenhaus , 2001). The cytoplasm found in germ cells is called germ plasm. In the zygote, by an MT- dependent manner P granules are carried to the posterior end, where they associate with the cortex (Spike and Strome, 2003). This results in most P granules being segregated to P P granules also turn up to be unstable in the cytoplasm fated for the somatic blastomeres. Experiments by Caroline Spike and Susan Strome described this phenomenon as: \"P granules being trapped near the anterior cortex of a P In addition to this, it has been found that the germ plasm also contains the MEX-1 protein. MEX-1 is very similar to the PIE-1 protein (Guedes and Priess, 1997). MEX-1 is encoded by the It also has the same 2 copies of the unusual finger domain found in the PIE-1 protein (Guedes and Priess, 1997). Although MEX-1 is only present in the cytoplasm, it is a granule component which appears to be required to restrict PIE-1 expression and activity to the germline blastomere (Guedes and Priess, 1997). Figure 6 shows the effect of the absence of the 2 proteins on the germ lineage. The specification of the germline therefore involves 2 major proteins, PIE-1 and MEX-1. PIE-1 blocks zygotic programs that drive somatic development and activates maternal programs that drive germ cell development. As for MEX-1 it regulates PIE-1. Figure 7 summarizes the role of the 2 proteins. The anterior blastomeres produced at the 2 cell stage have the equal developmental potential (referred to briefly in section II). They are eventually committed to different fates: ABa progeny give rise to anterior pharyngeal cells and ABp descendants contribute to the formation of the anus and rectum. 
Specification of ABp fate requires cell-cell interaction between ABp and P This was demonstrated by Shelton and Bowerman (1996) by incubating intact embryos overnight and using antibodies to distinguish the cell types normally generated by ABp. In addition, when AB and P It follows that ABp-P2 interaction engages the maternal effect gene This gene encodes the protein GLP-1, which is a membrane bound receptor and a homolog of the P Two line s of evidence suggest this: in At the 2 cell stage, APX-1 is found in the anterior of the P Moreover, it", "label": 0 }, { "main_document": "any remaining trade would still follow the pattern described by the theorem, and hence it would still be applicable. An interesting extension of the Heckscher-Ohlin with regard to policymaking is discussed by Grossman & Rogoff (1995) Policymakers may also have different preferences, such as transferring resources to favoured groups, being re-elected or maximising social welfare. The assumption that income distribution is irrelevant in the Hecskcher-model may also mean that policy will be chosen in order to maximise the aggregate real income of the economy. Grossman, G.M. & Rogoff, K. (1995), Policymakers' interventions can be in the form of price policies, such as tariffs or export subsidies, or quantitative restrictions such as quotas. Grossman & Rogoff (1995) also explain that the Heckscher-Ohlin model cannot account for the large share of world trade that takes place between industries. This means that the relevance of the model to policymaking may be depleted in terms of policies that may be undertaken in order to redistribute welfare between industries. Magel, Brock & Young (1989) developed ideas relating to the role that political contributions play in conjunction with the Heckscher-Ohlin model with regard to policymaking. 
They hypothesise that political lobbying groups interact with political parties by making campaign contributions in order to increase the probability that their favoured party is re-elected. The two lobbying groups are those of labour capital owners; price and quantitative policies tend to redistribute welfare unevenly between these two groups. In general, political economy models often work within a Heckscher-Ohlin setting, and modify the objective function that is maximised by policymakers in order to illustrate preferences for particular distributional outcomes.", "label": 1 }, { "main_document": "and institutionalised As legally recognised entities or 'insiders' to the political process, NGOs have acquired considerable influence and autonomy in decision-making processes. Transparency International, for instance, builds public pressure against corruption, lobbies individual governments to advocate policy reform, and monitors the compliance of states to multilateral conventions Submitting NGOs to a state's domestic regulations may consequently be difficult, leading governments to extreme measures: the United States government, for instance, recently launched an NGO-monitoring project threatening certain organisations with loss of funding if they did not adhere to its policies Such antagonism suggests that today's transnational civil society can limit the sovereignty of even the most dominant states. Kaldor p583 Kaldor p585 Florini p72 Kaldor p585 Florini p72 Transparency International Florini p73 A similar power struggle pits states against a third form of transnational actor: International Organisations (IOs). The majority of international organisations - such as the United Nations or the International Monetary Fund - have states as their constituent membership and operate within formal procedures. Nonetheless, these bodies can act with a degree of autonomy from states. 
According to Abbott and Snidal, this independence gives IOs considerable power to \"modify the political, normative, and intellectual context of state interactions\" By shaping the norms, values and preferences of member states, International Organisations acquire significant power over the decision-making processes in the international system. Abbot and Snidal p17 Abbot and Snidal p17 Indeed, the influence that IOs exert on state policy can directly \"challenge the state's monopoly on decision-making authority at the global level\" IOs regularly serve as \"managers of enforcement\" in ensuring that governments comply with international commitments; by threatening to withhold their benefits or to apply sanctions if states renege from these obligations, IOs truly diminish the extent to which states are sovereign to choose their own courses of action. Moreover, certain political figures propose to strengthen the role of IOs in areas linked still more closely to state sovereignty, such as security. The previous UN Secretary-General Boutros Boutros-Ghali, for example, advocated the creation of a UN standing army, arguing that issues of domestic security were a \"legitimate concern of the United Nations because [they] ha[d] the potential to undermine regional security.\" Likewise, the 1993 Australian foreign minister Gareth Evans called for a greater use of sanctions by the UN, which would enable this organisation to establish international regimes for peace enforcement. Abbott and Snidal p18 Boutros-Boutros Ghali (1995), Gareth Evans (1993), Not all political theorists see transnational actors strictly as threats to state power, however. Alternative interpretations suggest that NGOs, IOs and even TNCs could be tools by which dominant states of the international system would reflect and promote their personal interests. 
Some thus suspect certain TNCs of promoting the hegemonic ideology of dominant countries - in this fashion, both Coca-Cola and MacDonald's can be perceived as part of an apparatus by which the United States disperses capitalism to the developing world. Similarly, NGOs could also be used by states to voice their own concerns through a smokescreen of democratic consultation; as critics have expressed it, \"these unelected guardians of the", "label": 0 }, { "main_document": "time and cost absorbed in research and development. The researchers might find they don't have to reinvent already existing wheels. They will admit the advantages that openness brings in: it encourages innovation and fosters bold new products to market which build on proven technology at a markedly lower cost. Then they may find adoption of an open-source design with large base of customers as a win-win deal. Companies can refine the open-source design with affordable prices and make use of bug fixing provided from the community, which has no bad effect on their patents. The end result is cutting-edge reliable products with affordable prices. Another key issue for open source is that it has to build confidence and credibility. Good confidence and credibility among huge of users is the perfect \"patent\" for open sources. The suggested solution is that designers produce high quality and completely documented designs. It will be only a matter of time to convince the user community of the credibility of open designs. For instance, the Linux operating system has become reliable and competitive due to efforts exerted to enhance quality and performance from the developing community. For me, it is also more preferable operating system compared with Windows. What measures we should take to reach the better co-existing of openness and patents? 
First is for government authorities to create a legal circumstance to make a balance between trade and competitiveness; second is for SDO to regulate the disclosure of IPR information and ensure the openness, fairness and balance during the process of information obtaining and standard setting and protect the rights both for users and holders; and the third is for IP owners and companies to understand and use the \"RAND\"(reasonable and no discrimination) rules to implement the process. In the long run, as better solutions emerge through time, we may look back on this point as the time when control over our lives as passive consumers began to be replaced by creation of our lives as active participants. We can efficiently and effectively get what we need on the shoulders of already-existed. We also respect and protect their fruits and are willing to share our harvests with others anytime. In a society that seems determined to force unwanted pay-per-view, ridiculous encryption systems, and privatized knowledge on us in order to maintain profits, there is a chance to build an alternative technology which is truly convivial. We are just trying to find the best position to lean on.", "label": 0 }, { "main_document": "that though the \"war on terror\" has enabled the US to gain control in the above-mentioned countries (as explained in section one), it has failed to secure alongside for the US the \" 'spontaneous consent\" of the peoples and governments of the above-mentioned countries, which is essential for the US to gain hegemonic status in the region. On the contrary in fact, the war has led to the refutation of the legitimacy of the US' new role in the international arena, by highlighting the true intentions and tendencies of the US; thereby, dashing the American dream to become a hegemon in the region, for years to come. 
Taking into account the Gramscian concept of hegemony, this paper argued that \"war on terror\" has prevented the US from establishing its hegemony in three countries of East Asia, namely Indonesia, Malaysia and the Philippines. According to Gramsci hegemony refers to a state of absolute control that is established and maintained through the \" 'spontaneous' consent\" of the controlled. Such a conception of hegemony stresses on the need for the controlled to be convinced as to the legitimacy of the control which thereby leads the controlled into \" 'spontaneous[ly]' consenting\" to the control. This element of \" 'spontaneous' consent\" is what distinguishes hegemony from power or control. It is thus that I argued that the \"war on terror\" has not enabled the US to establish hegemony in the region but only control. The security cooperation arrangements that the war necessitated have indeed led the US to becoming indispensable to the above-mentioned countries through the support the former (US) provides to the military and counter-terrorism machinery of Indonesia, Malaysia and the Philippines; thereby, enabling the US to gain control through the support provided. The control gained thus established, I moved on to point out the sites of resistance at the level of the government and the people, to prove that the control is not absolute (hegemonic). Interestingly even the governments of these countries, through political manoeuvrings, try to resist more US intervention than suits the country's needs (and their own). Thus, while the governments seem to be cooperating with the US in the \"war on terror\" it is because of bilateral benefits (as explained in section one) , and not faith in the legitimacy of US power. The resistance of the people on the other hand is less ambiguous, though it not only manifests itself directly but also through its influence on government decision-making. 
In this paper, I identified two main reasons for this missing \" 'spontaneous' consent\" amongst the peoples and governments of these countries: the hypocrisy and unilateralist tendencies of the Bush II administration. I argued that by revealing these characteristics of US foreign policy the \"war on terror\" has dealt a severe blow to the reputation of the US and its growing power in the international arena; thereby preventing it from gaining \" 'spontaneous' consent\" in these countries. Thus, by laying out both sides I have argued that the \"war on terror\" has certainly enabled the US to gain control in Indonesia, Malaysia", "label": 0 }, { "main_document": "samples, hence the use of bog oaks by the Belfast chronology. And of course, if the species of wood being sampled doesn't have an existing chronology to be compared to, all that archaeologists can judge is whether it is contemporary with any other ring sequences in the area; no calendar date can be ascertained. Aitken cites a 'second radiocarbon revolution', the first being its invention, the second being its use in conjunction with tree ring analysis. The aforementioned first calibration curve by Suess in the 1960s was a significant development in chronometric dating, which revealed 'major discrepancies between radiocarbon age and calendar age' (Bowman, S. 1990. p17). The older a sample is, the more inaccurate the radiocarbon dates are likely to be. One of Libby's initial assumptions on which he based radiocarbon dating was that the percentage of carbon 14 in the earth's atmosphere has always remained the same - this turned out to be untrue. As a result of this, radiocarbon dates from before 1000BC are erroneously young and need to be calibrated using a curve. These curves are based on radiocarbon dates taken from individual tree rings, the dates of which have already been calculated. There was a confusing 'proliferation prior to 1985' (Bowman, S. 1990. p17) of more such curves.
Bryony Orme's case study of radiocarbon dates in the Somerset Levels demonstrates the importance of using two methods of chronometric dating when analysing a site. It looks at Garvin's Track, from which conflicting sets of radiocarbon dates were obtained from brushwood samples, reproduced below. The track forks off into two sections, West and East, which the archaeologists had reason to believe were built around the same time, but the radiocarbon dates are confusingly spread over a long period: By performing a tree-ring analysis of the samples, it was ascertained that the tracks were in fact contemporary; 'sometimes virtually identical patterns of growth' (Orme, B. 1982. p18) were found. The construction of Garvin's Track was subsequently placed within a narrower period, between 2470-2330bc. Orme later notes that 'the two approaches to dating are therefore very complementary' , with radiocarbon placing the context in a large timescale, and tree samples identifying it within a shorter period. She also states that young tree samples are especially useful in proving 'exact contemporaneity of material' (Orme, B. 1982. p20) in a way that radiocarbon is simply incapable of doing. The use of both methods is also wise considering the limitations of dendrochronology. Even after a sample of wood has been linked up to a master chronology and its date of felling ascertained, there is still no certainty that this is when it became part of the archaeological context. As with the issue of heirloom artefacts giving a site or context an inaccurate early date, so a piece of wood may have been left to mature after felling, or have been reused in several buildings. For this reason, brushwood rather than timber was used to date Garvin's track way. It would not be of much use to label one of the discussed methods of chronometric dating as", "label": 1 }, { "main_document": "and rapetum (6). 
In terms of evolution, Primary Specificity is suggested to have arisen in the progenitors of anemophilous gymnosperms for ovule protection, and Secondary Specificity to have derived from the progenitors of angiosperms as a promoter of cross-fertilisation within species. It is also proposed that the latter is a duplicated form of the Primary Specificity complex (6). Furthermore, the S-gene complex may inherit the pollen growth capacity of a species, which is involved in overcoming incompatibility through the 'switch-on' effect of a 'regulator' carried by the incompatible pollen into the recipient's S-genetic element; the capacity to produce pollen growth substance is transferred to the incompatible pollen, hence allowing the mimicry (6). This study indicates the importance of genetic transformation via irradiated pollen and grafting in producing new recombinations of genes. The role of genetic control over a plant is further applied to the study of the sex determination mechanism, here in maize (7). A unisexual flower is the result of selective arrest and abortion of organ primordia within a bisexual floral meristem. Two genes have been identified as involved in this process: masculinising genes, which regulate gynoecial abortion, and feminising genes, which inhibit stamen development (7). The research found that failure of pistil abortion and induction of stamen abortion result in the staminate-to-pistillate conversion in masculinising genes, and that the reverse feminisation process can also be triggered by environmental factors. Gibberellins, a plant growth substance, were concluded to be an important element in the regulation of the stamen abortion procedure and of floral tissue feminisation, while it is also noted that dwarf mutants of feminising genes showed inhibition of the biosynthesis pathway induced by gibberellins (7).
The classes of DWARF genes, TASSELSEEDS2 (Ts2) and SILKLESS1 (Sk1), are also involved in the process and need further investigation. This study shows the significance of identifying the factors responsible for sex determination and revealing the overall process, for agricultural purposes in particular. A study on fern species The aim of the study was to investigate whether these pinnule morphologies correlate with genetic variation by means of AFLP (Amplified Fragment Length Polymorphism) DNA-fingerprinting techniques and DNA sequencing (8). The findings from the constructed phylogenetic trees were that there was no significant evidence for a separation into two groups corresponding to morphology, suggesting instead a strong influence on genetic variation from geographical distribution rather than from morphology. Therefore, in terms of morphological expression, the polymorphism of the species is supported (8). However, the research failed to detect the genes which control pinnule morphology, suggesting only a higher likelihood of the key genes being in the chloroplast rather than in the nuclear genome. The comparison with another study on one European species of The significance of identifying genes which correlate with the morphological features of a plant, and the link with evolution, is also highlighted by Irish and Benfey (9) with particular focus on the comparison of studies on The study on homologous genes with the establishment of knowledge on the developmental programmes including the controlling mechanism", "label": 0 }, { "main_document": "violently, entirely futile. He could never share Ulrich's passionate interest in morality, being rather something of a disturbed animal - it is never clear when he could be set into a frenzy.
In the case of Clarisse, the conviction that something must be The danger in this kind of attitude is made clear when, in response to Walter's jealous complaining about Ulrich, Clarisse orders Walter to kill Ulrich, without ever having thought this before, simply so that an idea would be put into action. This kind of impulsive energy to act is condemned by Ulrich when speaking to Agathe: \"it's so easy to have the energy to act and so difficult to find a meaning for action!\" As Jonsson notes, the Other Condition functions as something of an agent of undoing, in that it offers not a singular and thus necessarily exclusive and arbitrary entity as reality, but instead \"negates ideological appellations and is thus forever deterritorialized\". Payne, Philip. Jonsson, Stefan. Jonsson, Stefan. The contrast of the characters Moosbrugger and Clarisse brings the interesting complexity of Musil's contemplative mysticism into focus. Although Pike claims that \"the figure of Moosbrugger is so ambiguously presented that [...] his importance lies rather in what he means to the other individual characters in the novel,\" Of course, despite the clarifications we have seen, significant question marks remain about what, with the presentation of the Other Condition, Musil is recommending. If \"faith mustn't be even an hour old,\" However, although it is true that Musil's descriptions of the Other Condition do not amount to a positive picture of Utopia, I agree with Jonsson that rather what is being offered is the \"Only a \"being with possibility\" is endowed with the senses needed to discover the right place to be zoned for the construction of a new world. [...] But if he is not to lose his unsettling ability, that highly prized sense of possibility, he must also renounce the decisiveness and resolve that at the same time are needed in order to bring forth even a vague blueprint of Utopia. [...] a guide showing the way to Utopia, and yet unable to settle there itself\".
For as Italo Calvino wrote, there is no better place to keep a secret than in an unfinished novel. Pike, Burton. Jonsson, Stefan.", "label": 1 }, { "main_document": "the Web site, allowing the text to 'breathe'. Navigating the Web site is extremely easy, navigation bar is provided and there are no dead links. Unique features increase the overall appeal of the Web site; however, there is still room for improvement. In terms of personalization, there is a wide variety of linguistic options available when booking is made; however, no personalization is achieved after registration. Online transactions are available but there is no feature 'Help with booking' and no offline payment option offered. The reservation system is secured with the use of Secure Sockets Layer. According to TrustGauge the Web site is unrated. The overall mark for Fosshotel Web site is average (3). Avis Web site can be fairly easily accessed as the URL address is user friendly and the site can be quickly found in a search engine. The content is relevant, useful and written in a professional way. In terms of search capability, there is still room for improvement and a search engine could possibly be provided. Considering value adding features, the Web site can be rated as average and some additional features could be offered. The Web site is well organized and structured and navigation through the Web pages can be achieved with ease. There are no bad links. In terms of accessibility, the Web site can be rated as average. Registration process is quick and easy, adequate guidance is offered and customer can manage his/her bookings online at any time. Offline payment option, however, is not available. Encryption is high and security certificate, which ensures customer that the Web site is authentic and secure, is provided. On TrustGauge Scoring Chart the Web site is rated with number 3, meaning that the site is recognized as somewhat trustworthy by others. 
From the above discussion it can be concluded that the Avis Web site can be rated as good (4). Icelandair Web site can be accessed easily and quickly. Information provided is relevant, accurate and written in a professional way. In terms of graphics, there is a good balance between text and images on the Web site; however, the images are mostly generic photographs of nature etc., with only one of them being a photograph of Icelandair aeroplanes. Navigating through the Web pages is straightforward and no dead links were found through NetMechanic. An advantage of the Web site is that customer has an option of choosing bigger fonts while browsing the Web site. Overall, a good level of personalization is achieved. Nevertheless, the customer can only choose his/her preferences regarding seats, but not regarding meals. Online booking assistance is provided with every step of the booking process clearly explained. However, offline payment option is not available. The Web site is protected by SSL encryption. According to TrustGauge the Web site is not particularly trustworthy. Overall, Icelandair Web site can be rated as 4 (good). The main disadvantage of the Web site is that it does not have any search capabilities available. In terms of personalization, the Web site can be rated as poor (2), as it does not", "label": 0 }, { "main_document": "to optimize its function, balance it between individual attractiveness and Oticon's identity. As extrinsic reward system is the main part of the whole reward systems, and is the first element employees consider when they choose to attend or retain in Oticon, so it should be optimized and enriched as much as possible. Basic Salary: The basic salary is confirmed in the formal employment contract in order to guarantee staffs necessary life level. Generally speaking, it's based on time attended and different to individuals. 
When deciding it in Oticon, a staff member's main competence, special skills or experience, length of service, market average level of pay and other relevant elements should all be considered. Incentive Schemes: Winning incentives may be the core motivation for individuals to improve their performance. As for Oticon's employees, the basic salary is similar for people with the same main competence, but their total pay may differ widely. Why? The main difference comes from here. When deciding the amount of an individual's incentive scheme in Oticon, we mainly use individual PBR (Payment By Results) combined with group PBR, while also considering PRP (Performance Related Pay): (1) Individual PBR: Because of Oticon's identity, people working here receive high risk-high reward, which means low base pay and high incentive pay. We could adapt the Manchester Plan for use in Oticon. Since people are required to work on at least two projects at the same time, and are encouraged to take on as many projects as they can, the first two projects could be calculated as basic earnings on a time basis, and the extra projects by a rate per piece: Earnings = (hours worked × hourly rate) + (number of extra projects × rate per piece) (2) Group PBR: As for group PBR, profit sharing (gain-sharing) is a good choice. Oticon could refine The Scanlon Plan to suit its companies all over the world, and The Rucker Share of Production Plan to be compatible with its subsidiary factories. (3) PRP: PRP is also one of the elements considered. When adjusting individual incentive schemes, Oticon refers to the evaluations of relevant project leaders and technical specialists to achieve a fair assessment. Competence-Based Pay: As Oticon requires multi-functional working, competence-based pay offers a more flexible reward system whereby people are paid for learning and changing in line with the business objectives. The more skills an employee develops and uses, the higher the pay he/she receives.
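The piece-rate earnings formula given above can be sketched in code. This is a minimal illustration only: the hourly rate, piece rate and project counts are hypothetical numbers, not figures from Oticon's actual scheme.

```python
def earnings(hours_worked, hourly_rate, extra_projects, rate_per_piece):
    """Individual PBR pay under the adapted Manchester Plan sketched above:
    time-based pay covering the two required projects, plus a piece rate
    for each extra project taken on."""
    return hours_worked * hourly_rate + extra_projects * rate_per_piece

# Hypothetical month: 160 hours at 12 per hour, plus 3 extra projects at 500 each
print(earnings(160, 12, 3, 500))  # 1920 + 1500 = 3420
```

Note that with zero extra projects the formula collapses to plain time-based pay, which matches the low-base-pay, high-incentive-pay intent described above.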
By rewarding people who multi-skill themselves, Oticon could save money: since people now cover more work, it has almost no need for external recruiting. Share Options: As with gain-sharing, instead of giving monetary rewards, Oticon uses share options to attract and retain employees. In the share option plan, employees are given the option to buy Oticon shares at a preferential rate. This strategy is being demonstrated successfully, for employees will pay more attention to corporate performance than before and will work harder to achieve corporate goals. Cafeteria Style Fringe Benefits (Flexible benefits): This strategy is now very popular all over the world, for it maximizes the value of a limited monetary amount of fringe benefits and gives", "label": 0 }, { "main_document": "disc has to be well lubricated to avoid errors due to friction. Choose a fairly inextensible string that doesn't stretch a lot when pulled. The weights holder should be as light as possible but should still be strong and durable enough to perform its functions. An automated sensor that senses the start and fall of the weights will reduce the error caused by human reaction time. This instrument can be much more effective if other existing but inevitable sources of error, like friction, tension of the string, reaction speed of the person measuring the time, etc., can be known so that we get a much more accurate value. The values obtained in this method seem to have a high percentage error and are inconsistent. The accuracy also depends a lot on the instrument being used. Various specimens of this same type of instrument can produce results with a strikingly high variation in them. Hence I think the moment of inertia is best measured using theoretical approaches.", "label": 0 }, { "main_document": "taking liberties with the accounts. Apart from this the depreciation adjustments are reliable. Another accounting adjustment was made to the stocks.
On page 26 of the annual report it states that 'stocks are valued at the lower of cost and net realisable value'. The Inland Revenue website states 'One of the acceptable basis of stock valuation is the lower of cost and net realisable value'. According to the report, 'cost is calculated on an average or 'first in, first out' basis.' Both of these methods are valid; however, it may have been more useful had they stated which costs were calculated via a particular method. When making accounting adjustments Kidde plc have used well-known adjustment methods for providing a true and fair view of the accounts. Inspecting the figures themselves, there seem to be no discrepancies. Kidde plc has made reliable accounting adjustments. It is important to note that PricewaterhouseCoopers LLP have audited the accounts and found that 'the financial statements give a true and fair view of the state of affairs of the Company and the Group at 31 December 2003...' Ernst & Young, a separate accountancy firm, worked with Kidde plc to make better financial decisions. It is clear that the auditors were completely independent from any other interaction with Kidde plc. There is a slightly puzzling figure in section 5 of the accounts when mentioning the disposal of fixed assets. Kidde plc made a However Baxi, an associated company of Kidde plc managed to make a These values match up all too conveniently and perhaps further investigation could be made. However, it is noted in the exceptional items section, which would suggest it is a one-off and would not be happening too frequently. Apart from this, the Kidde plc annual report and accounts 2003 can be deemed reliable, and represent a true and fair view of the financial health of the company. As stated in the conclusion to the performance section, Kidde plc is a successful business. The growth rate has not been too explosive, suggesting there is more growth yet to come.
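The stock valuation rule quoted earlier ('the lower of cost and net realisable value') amounts to taking, per line of stock, the smaller of the two figures. A minimal sketch follows; the stock lines and amounts are invented for illustration and are not taken from Kidde plc's accounts.

```python
def stock_value(stock_lines):
    """Value stock at the lower of cost and net realisable value (NRV),
    applied line by line as the quoted rule requires."""
    return sum(min(cost, nrv) for cost, nrv in stock_lines)

# Hypothetical stock lines as (cost, NRV) pairs
lines = [(100, 120),  # NRV above cost: carried at cost, 100
         (80, 60),    # NRV below cost: written down to 60
         (50, 50)]    # equal: carried at 50
print(stock_value(lines))  # 210
```

Applying the rule line by line, rather than to the aggregate totals, is what makes the write-down on the second line stick even though the other lines have surplus NRV.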
Kidde plc has a good relationship with investors, a strengthening market position and reputation, and a promising business strategy. For these reasons and those suggested in the conclusion of section 2, I would recommend buying more shares in Kidde plc. Kidde plc is a growing business. Every year it implements new strategies that help the business to thrive. The approach employed in 2004 was to concentrate on core business while still gaining extra income from acquisitions. Early in 2004 Kidde plc sold their shares in associated companies Baxi and Robbialac. In the previous report Baxi and Robbialac were regarded as non-core activities. Kidde plc also sold their investment in professional paint. These sales provided a small, one-off income for the business at the cost of losing the steady income obtained from acquisitions. To overcome this Kidde plc made new acquisitions, such as Harden SA, a provider of fire equipment and services. This strategy is very beneficial to the company as these acquisitions relate", "label": 1 }, { "main_document": "done. In the test cases the children themselves differ in value from each other. Therefore the parent and the two children (if they exist) have distinct values. Only single 2-level subtrees (i.e. parent, left node and right node) were tested, except where more than two levels were required (such as height testing). The other subtrees, whether they are above, below, to the right or left, were ignored. It is assumed that once the program gets a single 2-level subtree correct it can get the whole structure correct after checking all the nodes. The min and max values of the three nodes (parent, left node and right node) were -6 and 98 respectively. Maximum depth (height) was 6 (levels). These were the original ranges in the program, so only trees meeting those conditions were tested. Not all combinations of negative and positive values within a subtree were tested, to keep the volume of test cases low.
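The 2-level subtree check described above can be sketched as follows. It is an assumption on my part that the program under test enforces binary-search-tree ordering (left child smaller than the parent, right child larger); the function name and the sample values, drawn from the stated -6 to 98 range, are illustrative only.

```python
def valid_subtree(parent, left=None, right=None):
    """Check one 2-level subtree in isolation, mirroring the test design
    above: parent and children hold distinct values, with the left child
    below the parent and the right child above it (assumed BST ordering).
    Missing children are permitted."""
    if left is not None and not left < parent:
        return False
    if right is not None and not right > parent:
        return False
    return True

# Sample cases using distinct values from the stated range (-6 to 98)
print(valid_subtree(42, 7, 98))    # True: correctly ordered subtree
print(valid_subtree(42, 98, 7))    # False: children swapped
print(valid_subtree(-6, None, 0))  # True: a missing child is ignored
```

Checking each 2-level subtree in isolation matches the stated assumption that if every parent-children triple is correct, the whole structure is correct once all nodes have been visited.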
The objective is to generate combinations such that they are sufficient to test the correctness of the program without a large volume of test data or test cases.", "label": 0 }, { "main_document": "P and Q exist, so p and q share a nonconstant factor. We can think of R(p, q) as a formula applying to all polynomials of degree n and m. Calculating detM is obviously a long string of calculation but we can use a much simpler way. The following theorem gives us a simple method. Theorem 9.8 The resultant of In their splitting field over We write This is a homogeneous polynomial of degree mn which is zero if and only if p and q share a root, so dividing the resultant. For p, each Then we see that the (i, j)th entry of the corresponding Sylvester matrix has degree j - i for We then see that any nonzero term in the determinant So Finding the constant is a simple matter of matching coefficients. Going back to the Tschirnhaus transformation again we note that since we know that This is explained below, Example 9.9 We know So We know that We know Two examples interest us, a Tschirnhaus transformation that reduces the general polynomial to a form with a zero coefficient for the term Example 9.10 To find the depressed form we need to choose the transformation We know So The principal form is a more complicated reduction since We need some new mathematics to ease us through the calculations. The power sums of Let The derivative If we derive lnp we get So Equating coefficients gives Newton's identities Theorem 9.11 The first j coefficients Outline of Proof: This follows from the fact that for any We see that for some j = n we have When j = n - 1 we have Then so This works the other way in a similar manner. Take Replace the last column by Then the Vandermonde matrix is Take the transpose of the Vandermonde matrix If we left multiply the Vandermonde matrix by its transpose we get We can now transform the general polynomial p of degree n into principal form.
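Since the displayed formulas in the passage above were lost in extraction, it may help to restate the standard form of the Newton's identities that Theorem 9.11 refers to. The symbols follow the usual convention (e_j for the elementary symmetric polynomials, p_k for the power sums of the roots), which I am assuming matches the author's notation.

```latex
% Newton's identities for a monic polynomial of degree n with roots
% x_1, \dots, x_n: let e_j denote the elementary symmetric polynomials
% and p_k = \sum_{i=1}^{n} x_i^k the power sums. Then for 1 \le k \le n:
p_k - e_1 p_{k-1} + e_2 p_{k-2} - \cdots + (-1)^{k-1} e_{k-1}\, p_1 + (-1)^k\, k\, e_k = 0 .
```

These identities let either the coefficients be recovered from the first n power sums or vice versa, which is exactly the conversion the reduction to principal form relies on.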
Example 9.12 To find the principal form we need to choose the transformation We know Then We know from Newton's identities that This means that So From Theorem 9.11 we set So we solve for T the quadratic Solving this quadratic for We now know that we can write the general quintic in principal form. This is displayed as This has three parameters To do this we must first explore some new ideas. Consider the Tschirnhaus transformation geometrically. This is a fascinating subject and it is intuitive that the transformation moves the so-called root vector Without getting distracted, we can make a few observations. We define n-dimensional projective space We see An algebraic set in We see that these sets are structures in By the transformation Affine Tschirnhaus transformations Considering what this implies for the operations on root vectors we can consider a Tschirnhaus transformation as acting on This surface is known as a hyperplane. A hypersurface A(F) for a single form, a hyperplane is a hypersurface for a linear form. We see from Theorem 9.11 that reduction to principal form can", "label": 0 }, { "main_document": "into 3 processes - Controlling a stage, Managing product delivery and Managing stage boundaries. However, there are 2 processes that extend beyond phases: the process Directing a project applies for the length of the project, while Planning applies for all phases apart from the final one - Closing a project. To understand how PRINCE 2 works, it is essential to understand each process element. This is a pre-project process that evaluates whether the project would be worthwhile before going further. Project Mandate identifies the major project stakeholders and the composition of the project group. The project aim, objectives and scope are also decided at this stage and presented in a document called the Project Brief. The project approach and risk analysis (implementation strategy) are usually decided and done at this stage, as a blueprint for the initiation stage.
This process defines the powers of the Project Board, who are responsible for the project. Their function includes authorising appraisals and execution. The project manager keeps the Project Board informed with regular reports and reports progress in scheduled review meetings. This process is similar to control, which is carried out throughout the project life cycle. However, one should note that PRINCE 2 has a special characteristic feature in terms of the role of the project group. PRINCE 2 is a process management approach by exception, which in other words means the project group only makes corrective decisions when the project is predicted to fail to achieve its targets and goals. Proper planning of the project is needed to persuade the project group to authorise the project's go-ahead. It contains details of how the project would be done, how much it would cost (i.e. budgeting), how to minimise risks, etc. It should also contain basic information on project progress and financial control. All this planning should be presented in a proposal called the Project Initiation Document, which must be approved by the Project Board before implementation can start. PRINCE 2 encourages breaking projects down into manageable stages so they can be managed and controlled more easily. Each stage has its own detailed plan and a plan for the upcoming stage. The next stage cannot be started until the current stage has been completed. PRINCE2 is a product-based system. A product can be a physical thing like a book, or it could be a more intangible thing like a service agreement. In fact everything created by PRINCE2, including documents, is a product. Products can be created by anyone, including external suppliers. This process creates the products of the project and is where most of its resources are used. As PRINCE 2 encourages division of the project into stages, each stage has to be clearly defined. Each stage has to be completed and approved by the project group before the next stage is started.
Project implementation must be closed down in a controlled and orderly way, according to PRINCE 2 principles. This involves evaluating the project's result in a post-project review. Experience learnt is then recorded. PRINCE 2, unlike the traditional project life cycle, does not include the operation part of projects and it takes", "label": 0 }, { "main_document": "ethnocentric focus normally should have a standardized approach to managing diversity (Harris et al, 2003:166). There are however large differences between labour markets in Kenya and the UK (Appendix F and Carpenter et al, 2004). The Group's strategy reflects these differences as stated in its mission statement (Appendix A). While in the UK and Europe it adopted a diversity management (DM) approach based on the MOSAIC method (Kandola and Fullerton, 1994) in order to reflect cultural, ethnic, gender and demographic differences and to attract the best workers (HCIMA, 1999), in Kenya the general policy has to be adapted for several reasons. The young labour market consists mainly of unskilled workers; hence managerial positions have to be filled with expatriates first in order to ensure success. Although there is not much equal opportunity (EO) legislation in Kenya yet (Appendix F), the company implements an EO approach with an aim to adopt practices of the MOSAIC approach to diversity management in future with the change to a geocentric approach. Black identified that economic development is linked to demand for higher labour standards (1999:593), hence with an improvement of the Kenyan business environment (increase of skill level and diversity) the Group will adjust its strategy accordingly. Training nevertheless will be available to all employees equally and adjusted according to their backgrounds, because of the understanding that people are the main assets (Storey and Sisson, 1993:1) and hence have to be valued and cared for.
Recruitment for non-managerial jobs will reflect an EO approach in order to form a diverse workforce, which is especially important in sub-Saharan countries representing multi-ethnic groups (Nyambegera et al., 2002), although they show a discernible national pattern (Nyambegera, 2000:640). The former authors emphasize that neglecting ethnic diversity will have severe impacts on organisational success as well as on a country itself (2002:1087), especially in conjunction with corruption, as can be seen in neighbouring countries that have had civil wars. The Lakeside Group hence aims to be proactive, utilizing and maximizing employees' potential (Kandola and Fullerton, 1994) in order to gain a competitive edge (Cassell, 1996:55), while at the same time not discriminating against any group. Despite its importance for organisational success in an industry heavily relying on human assets, training is not yet integrated into organisations' strategies (Worsfold & Jameson, 1991:114) and is often neglected (Cannell 2003:20). Organisations learn through managers, whose task is to contribute to incremental and discontinuous organisational learning (Pedler et al, 1994:4). The HR literature further suggests that training is a vital component to unlock the underlying potential of employees (Santos & Stuart, 2003:27). Boella (2000:117) defines three main components of training: knowledge, skills and attitude. The Lakeside Group however will differentiate between PCNs and HCNs, as these two groups need different approaches to training due to their differing culture and educational skill level. Expatriates will be trained before their move to Kenya, throughout their stay, and through a reflective seminar in order to recall problems and difficulties for future learning (see also Appendix H).
Areas of training include the most important, although not widely implemented (Dowling et al.,", "label": 0 }, { "main_document": "more discretion and responsibility with regard to individual jobs (especially if the forms introduced promote the delegative and not only the consultative element of direct involvement), there are also more negative consequences that should not be underestimated. Last but not least, the perceptions of the managers who actually implement and lead the direct EI schemes should not be overlooked either. The experience of senior managers that introduce the systems and middle managers and supervisors who usually implement them could limit the gains from them if this experience is connected with functional rivalry, control anxiety, insufficient training or fears of job or status loss (see also Marchington et al. 1993). Evidently, direct EI initiatives could benefit all the parties involved in the process, so gains could be felt at all organizational levels but, equally, if not handled properly, they may not only fail to offer any gains but also bring new challenges with them. This essay has tried to explore the objectives behind the increased or consistent utilization of direct employee involvement techniques in the UK, as demonstrated by empirical survey data. It has also attempted to note the gains that are derived from such schemes in practice and their dependence on the various factors and actors that influence them. Clearly, direct involvement schemes are being introduced in the workplace in a management-driven effort to increase employee motivation and commitment to their work, thus leading to enhanced organizational performance and flexibility (see also Cunningham et al. 1996). The real gains are more difficult to detect as materialized in practice, not least because as IRS points out 'most organisations do not have any formal mechanisms for monitoring the effectiveness of their EI strategies' (1999:8). 
In addition, the strong contextual dependence of EI schemes should be noted. They are introduced in different ways, at different organizational levels and by different functions as their champions (Ackers et al. 1992). Moreover, their success is largely shaped by the business environment of the company (e.g. more prominent utilization in manufacturing firms with consultation traditions, IRS 1999), the different agendas actors involved in their implementation have (Marchington and Wilkinson 2005, Fenton-O'Creevy 2001) and their overall fit with the remainder of employee policies and business strategy (Marchington et al. 1994). In sum, although the managerial logic behind using direct EI schemes is manifest, the real gains from them are more problematic to demonstrate in practice.", "label": 0 }, { "main_document": "little when my desire to speak standard English became strong; then I would have more concerns with pronunciation and tried to standardize the sound before pronouncing it. But if I lost the concentration on pronunciation, the foreign accent would feel more distinct to me. However, it is not a totally effective factor. Firstly, some studies like Oyama (1976) and Thompson (1991) did not find relevant support from their research. Secondly, motivation is an internal, psychological activity and is difficult to measure. In other words, it is not clear to what extent the individual subjects differ in their motivation to pronounce an L2 and it is not certain whether the same degree of motivation can have the same effects on pronunciation. Language use means the use of languages in different situations, including home, work and social use, of both L1 and L2. Language use is reckoned as an important factor in some research. It accounted for a 15% influence on non-native speakers' foreign accent for both males and females in Flege et al (1995). 
In the following table, which is from the same article, the ratings of factors affecting foreign accent were estimated by Italian subjects of both genders. It can be easily seen that language use was identified as a very important factor and the difference is that males thought the language used at work was the second most important factor, just behind the influence of age, while females valued overall language use highly. Other studies from Piske's article also agreed with this point, such as Tahta et al. (1981 in Piske 2001) who said that the home use of English as L2 accounted for 9% of the variance in degree of foreign accent but this percentage rose to 26% for early bilinguals with AOLs of 7-12 years. Divergences about this factor exist in different research. Thompson (1991 in Piske et al. 2001) found that language use was simply related to the degree of foreign accent but this factor was confounded with AOL as well as with gender, as mentioned before. Moreover, Flege & Fletcher (1992 in Piske et al. 2001) and Elliott (1995 in Piske et al. 2001) discovered little or no influence on subjects' foreign accent. In my experience and observation, language use does not automatically lead to accent-free speech, especially for adult learners, but it is quite useful for improving fluency in speech. Sometimes people may have a wrong impression that their pronunciation has become more standard when they can talk fluently with native speakers. Moreover, if the subjects speak to non-native speakers in L2 for a long time, it is not difficult to speculate that their foreign accent will be more distinct than the accent of those who practice with native speakers. The use of the mother tongue is also taken into account by some linguists, but as a negative factor. It is said that the proficiency and frequency of L1 use is in inverse proportion to the degree of L2 foreign accent. Thompson (1990 in Piske et al. 2001), as already mentioned, did not find the", "label": 0 }, { "main_document": "1. 
The data set consists of three dependent variables measuring some form of cognitive or motor development: IQ, reading comprehension score (Rcomp) and test of motor impairment score (ToMI). Figure 1, a histogram of the dependent variables, gives an indication of their distributions. IQ appears to be fairly well Normally distributed, with the majority of values clustered around the mid-90 level as would be expected. There appear to be a few outlying observations of IQ but this is most likely to be the result of random variation. On the other hand, reading comprehension score and test of motor impairment score seem to fit a Normal distribution poorly. Reading comprehension score appears as though it might be Normally distributed with a significant positive skewness. Test of motor impairment score also has a significant positive skew, but does not seem to fit the Normal distribution well at all. Most of the values are found at zero and one, with the bulk of the data falling into this region with a long positive tail gradually tapering out. The data set also includes three quantitative birth characteristics: birthweight (bw), birthweight ratio (rbw) and gestational age (ga). Birthweight seems to adhere fairly well to a Normal distribution, with a clear peak in the centre of the data. However, there are some outliers in both tails, although these seem to be well within the limits of random variation. Neither birthweight ratio nor gestational age seems to fit a Normal distribution well. Birthweight ratio has a broad peak containing the majority of the data, with a few extreme values falling outside this region. Gestational age might be considered fairly Uniformly distributed, excepting a few extreme premature births occurring before week 28. There are no extreme long values of gestational age since the study covers only preterm births (those with gestational age 32 weeks or less). 
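The skew judgements made above by eye (symmetric IQ, positively skewed Rcomp and ToMI) can also be checked numerically with a skewness statistic. The sketch below is a minimal illustration only: the study's actual data are not reproduced here, so the sample values are invented stand-ins.

```python
from math import sqrt

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness: 0 for symmetric data,
    positive when the distribution has a long right tail."""
    n = len(xs)
    mean = sum(xs) / n
    sd = sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3 for x in xs)

# Illustrative stand-ins (not the study's data): a roughly symmetric set,
# like the IQ scores, and a right-tailed set, like the ToMI scores.
symmetric = [88, 92, 95, 95, 98, 102]
right_tailed = [0, 0, 0, 1, 1, 2, 3, 8]

print(round(sample_skewness(symmetric), 3))   # exactly 0 for this symmetric set
print(sample_skewness(right_tailed) > 1)      # strong positive skew
```

A value near zero supports the Normality judgement for IQ, while a large positive value matches the long right tails described for Rcomp and ToMI.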
Finally, the data set includes five qualitative variables: the sex of the baby, whether the mother left education before or at the age of 16 (ed16), whether the mother lives in owner-occupied housing (owner), whether the family receives social service benefits (benefs) and whether the mother is a non-smoker (cig). Table 1 indicates that sex is fairly evenly distributed between male and female, although a slightly higher proportion of male babies were included in the study. Table 2 provides some interesting results about the distribution of the social characteristics. A very high proportion of the preterm births included in the study had mothers who left education at or before the age of 16. The babies in the study are fairly evenly distributed in terms of having a mother living in owner-occupied housing and having a family receiving social service benefits. Finally, a fairly low proportion of babies included in the study had mothers who were non-smokers. It would be interesting to compare this data to the general proportions found in the population at large, as these results do not seem to be representative of the general population. A basic indication of the relationships between the variables can be found with a", "label": 1 }, { "main_document": "The right to silence is a prominent feature of adversarial justice systems and is often erroneously viewed as a single entitlement. The right to silence is in reality a collection of privileges, with the most important being the right of a suspect to remain silent under accusation or interrogation and the right of a suspect not to have to testify at his own trial. R v Director of Serious Fraud Office ex parte Smith [1992] 3 All E.R. 
456, at 463-64, six different meanings are identified by Lord Mustill. The right to silence derives from the privilege against self-incrimination. The judiciary have recognised the privilege as being \" However, as Lord Mustill indicated, \" described in the Latin maxim: \"nemo debet prodere se ipsum\" - \"no one should be obliged to give himself away\" Lam Chi-ming and Others v Reginam (Privy Council) [1991] 93 Cr. App. R. 358, per Lord Griffiths, at 364 see: R v Sang [1979] 2 All ER 1222, at 1230; Lam Chi-ming and Others v Reginam (Privy Council) [1991] 93 Cr. App. R. 358, at 360 R v Director of Serious Fraud Office ex parte Smith [1992] 95 Cr. App. R. 191, per Lord Mustill at 206 see: Jackson, J., The Right of Silence: Judicial Responses to Parliamentary Encroachment, Vol.57, No.2, 1994, Modern Law Review, 270, pp.270; R v Gilbert [1977] 66 Cr. App. R. 237; R v Alladice [1988] 87 Cr. App. R. 380 There is a clear tension between protecting the defendant's established privilege against self-incrimination and the need for the courts to obtain the necessary information in order to convict criminals. Prior to 1994, the judiciary became increasingly reluctant to allow the defendant to thwart the prosecution case through silence, especially in civil matters. The legislature reacted to this and other stimuli, especially the prevalent use of silence by suspected terrorists in Northern Ireland see: Re London United Investments plc. [1991] B.C.C. 760; AT & T Istel Ltd v Tully [1993] A.C. 45 see: Jackson, J., Recent Developments in Criminal Evidence, 1989, 40, Northern Ireland Law Quarterly, 105 The CJPOA disposed of the absolute right to silence and allowed, for the first time, courts to draw adverse inferences from a defendant's silence. 
The most important limitation of the right to silence, and the focus of this essay, is section 34 of the CJPOA, which allows an inference to be drawn if the defendant, \" The rationale behind section 34 is that it is expected that an innocent individual would wish to provide his exculpatory explanation at the first opportunity. If he fails to do this, the jury should be allowed to draw adverse inferences from his later fabrication, which is little more than an attempt to ambush the prosecution and ensure an unmerited acquittal. Criminal Justice and Public Order Act 1994 section 34(1)(a) Cooper, S., Legal Advice and Pre-trial Silence - Unreasonable Developments, 2006, 10 International Journal of Evidence and Proof, 60, pp.61 However, even prior to the CJPOA the \" For example, in cases where a person is accused of", "label": 1 }, { "main_document": "in some languages. The word order we use to convey a past event is different from that of other languages. British Sign Language adds a time marker to the end of a sentence. For example, to say \"I ate\" you would sign \"I eat\" and then add the sign for \"finished.\" In many pidgin languages, particles replace tenses as time markers. In some languages, the word order and grammar for each tense may be the same but the phonology may change. For example, in the West African tone language Bini, the present tense is indicated by a low tone and the past tense by a high or high-low tone. Other languages also show a difference in tense system from speech to writing. In French, the simple past tense does not occur in speech, only in writing. However, this does not mean that speakers can only convey this concept if they write it down. These are just some of the differences in tense marking across languages. This shows the variation in how time relations are conveyed. Syntax is just one of the factors affecting tense. In this essay, I have discussed the three main approaches to tense. 
Firstly, the slightly archaic traditionalist view that there are three tenses (past, present and future). Linguists generally reject this view. Secondly, the structuralist view that there are two tenses (past and present) and the future tense is made up of combinations of these with auxiliaries. Finally, the functionalist view, which is dominated by Reichenbach's theory of speech time, event time and reference time. These three make up several combinations to give the tense of a sentence. The functionalist view concentrates on tense as a matter of syntax and sees tense as having a deeper structure than the surface grammar shows. It is all a matter of where these three points are located in the sentence, which shows how they are related to each other in time. I have also discussed the weak relationship between tense and time. This has led to the discussion of other factors affecting tense and how tense in turn affects these factors. For example, how other languages cope with or without different tense systems but all maintain the same concept of time and how context affects how tense is used to convey a different time than expected (for example when the present tense is used to express a past event in narratives). The functionalist view seems to be the most widely accepted throughout the literature. It is generally agreed amongst linguists that tense and time have a weaker relationship than many people think. The structuralist view goes deeper into the structure of tense and suggests that it is not necessarily just grammar that creates tense, but syntax and meaning as well.", "label": 1 }, { "main_document": "note that those five years have passed for the Sheffield and Midland providers and for the majority of the Manchester system. In reference to the life cycle of the product, light rail systems are somewhere between rising star and maturity. Barriers to entry are considerable. 
Three key barriers are the vast amount of capital required for business establishment, legal and government restrictions and, to a lesser extent, switching costs. These barriers are explored below. Capital intensity required for business setup The table below shows capital investment in some light rail systems shown on the map on the first page. The figures show that, realistically, a capital investment in the order of According to the National Audit Office's report already referred to, the cost of construction of light rail systems is rising and although most of the schemes have been kept within their budgets, this has been achieved by excluding some of the design features that were incorporated in the original plans. Legal or government enforced restrictions \"Local authorities decide whether a new light rail line or system is appropriate for their area and usually have to seek funds from the Department [for transport] and be granted legal powers by the Secretary of State for Transport before their schemes can proceed.\" This has threefold implications for prospective market entrants: the local authority must be satisfied that a new light rail system is appropriate; funding must be accumulated; the Secretary of State for Transport must grant the scheme permission to be constructed. Likely costs incurred overcoming these problems would be: Switching costs Customers who already use the Midland Metro will have passes which are only valid on Metro trams. Users may not be willing to purchase a second pass for a new operator's trams, which would affect passenger numbers adversely. A market collusion with With an understanding of these barriers to entry the argument for acquisition is stronger than that for setting up a new firm. Acquisition is a more attractive business venture because it removes two of the barriers to entry described above, those being government restrictions and switching costs. 
These do not apply since the acquiring firm would not need to construct a new line and would not need to gain local or central government backing or financial support. Furthermore, provided With reference to Michael Porter's generic strategies for business placement within a given industry, a A low-cost and industry-wide strategy (the cost leadership strategy) would maintain the widest possible customer base and is the only strategy that would be supported by government since a fundamental principle of public transport is service for all. Investment required for a new light rail system would be a high-risk venture and, in contrast to some high-risk business opportunities, there does not appear to be a lucrative reward for those who succeed. The acquisition of Midland Metro would not be as risky a business manoeuvre for reasons detailed in the previous section. My recommendation is that, if you are to either set up a new business in competition with the Midland Metro", "label": 1 }, { "main_document": "make more money. Unfortunately, often \"commercial donors\" \"Psst, wanna buy a kidney? Organ transplants\"; 18 November 2006; The Economist Newspaper Limited, London 2006. \"Commerce in transplantation: how does it affect European legislation?\"; Bernard Cohen, Guido G Persijn & Yves Vanrenterghem; Clinical Transplantation; Volume 14 Page 28 - February 2000 They are not able to think prospectively over a 10-20 year period. People who arrange this \"commercial transplantation\" abuse this lack of knowledge in a very cruel way. From time to time newspaper headlines \"scream\" about another scandal over the sale of organs, as for example was discovered in one of the private hospitals of London, which hurried the passing of the Human Organ Transplants Act 1989. Legislating in reaction to such cases is not a desirable way of setting any regulation; that is why it is so important to look carefully at this issue before cases such as those mentioned above occur. 
It might bring benefits to both parties to such agreements, first by providing better care for live donors after organ removal. At present it very often looks like a journey \"into the unknown\": black-market donors are transported to hospital, where their organs are transferred to \"paying customers\". There is no evidence of how long they live after removal, but in addition to the problems which occur in cases of legal live donation (for example between family members) it is quite obvious that they suffer from health problems. Setting regulation would probably decrease the number of abuses in this matter. That is why it is vital to draft reasonable regulation on this subject, also to be sure that live donors would receive a sufficient explanation of the organ removal procedure, its consequences and risks, and, what is more, really and fully understand it. Furthermore, maybe the reasonable solution is to establish organ transplantation as a very last way of saving life, and to define it as a normal contractual agreement with a large part concerning the donor's consent and the health consequences for both sides. In addition, besides money, maybe live donors should obtain lifelong health care or other privileges from recipients? \"Ottawa urged to run legal kidney trade: Bioethicist's proposal: Would discourage black market, end waiting list\"; Tom Blackwell; 3 November 2006; National Post Doctors transplanted damaged kidneys ; 4 November 2006; Daily Yomiuri Newspaper For some of the reasons mentioned above, organ sales would be unethical, but on the other hand, should letting people die be treated as moral behaviour? The problem of commercial trade in organs will not disappear by itself. Looking at Iran's example, it should be admitted that the sale of organs is not very common there. It is treated more as an exceptional solution, used only where other possibilities have failed. In addition it has to be said that Iran has completely eliminated its waiting list of potential recipients. 
Possibly, drawing conclusions from Iran's experience, in comparison with the British one, would be the most effective approach to this matter. Probably a better way of solving this problem would be to establish one institution as a \"legal buyer\", created by the National Health Service? Thus, it would take organs from donors and then, within life insurance, provide them to potential recipients. Probably", "label": 0 }, { "main_document": "way between them. The first way is by the complex vector bundle of rank 1 - complex line bundle - which is determined up to isomorphism by We can also use smoothly embedded, two-dimensional, oriented surfaces in X. A surface of this type carries a fundamental homology class in Then, last of all, we have de Rham cohomology - a representation of real cohomology classes by differential forms. Let X be a compact, oriented, simply connected four-manifold. The Poincaré duality isomorphism is equivalent to a bilinear form It is a symmetric form which induces an isomorphism between the groups We call the latter property For two oriented surfaces We can, of course, move this over to homology to give the form F. Then the cup product is In terms of de Rham cohomology we are given the intersection number by the integral We know that the Betti number of the manifold For the second Betti number The signature of an oriented four-manifold is then defined to be The intersection form on the real vector space An n-dimensional A A hermitian metric where In this case, Hodge theory involves a study of the algebraic topology of a smooth manifold X via the partial differential equation theory of generalised Laplacian operators associated to a Riemannian metric on X. In other words, Hodge theory gives representatives for cohomology classes on X, a compact Riemannian manifold. The theory was developed by W. V. D. 
Hodge in the thirties - as an extension of de Rham cohomology - and it has major applications in the discussion of Riemannian manifolds, Kähler manifolds and the algebraic geometry of complex projective varieties. The The Hodge theorem gives Given a Riemannian metric we have the decomposition For an oriented four-manifold the characteristic classes to hand are the familiar Euler and Pontryagin classes along with the The second Stiefel-Whitney class w2 is from the mod 2 reduction of the intersection form We finish with a few useful definitions. Given two disjoint connected n-manifolds Define a complex surface as a complex manifold of dimension 2 - which is a smooth 4-manifold X. Suppose that From Hodge theory we have the decomposition of 2-forms on This extends to the curvature form Write A connection is anti-self-dual if On the space for ASD connections are solutions of the Take a twice differentiable function from An integral is defined on the first function which we allow to vary while fixing the second. The first function must satisfy a certain differential equation when we try to find the Lagrangian. Then this equation is known as the Euler-Lagrange differential equation or the Euler-Lagrange condition. Take l a Hermitian line bundle over some manifold The curvature form Take a new connection This is the Chern class c Let The 4-form These forms represent a characteristic class of for a complex vector bundle The Pontryagin class discussed earlier is the standard four-dimensional class for a real orthogonal bundle given This bundle also has a Stiefel-Whitney class, satisfying First we define the The gauge group acts The The moduli space C has", "label": 0 }, { "main_document": "The affluent society, a consumer-based culture of comfort and conformity, emerged after World War II, and this essay discusses the reasons for that emergence. 
To achieve these ends, the main body of the study will be split into two sections, detailing economic factors, and factors involving the very desires of the consumers that were an integral part of this affluent society. Each of these two sections will further be split into sub-sections which individually look at one potential facet of the emergence of this society, and attempt to account for it. The essay will discuss the war-time economy as well as the broader post-war economic status of the country, before discussing government intervention, looking at the GI Bill and the FHA, and then looking briefly at the 'other America' and accounting for their lack of government funds and resultant white flight. In addition, the second section discusses the idea of the family nucleus, its consumer desires and their abundant availability, especially in the housing industry and other off-shoot mass-produced industries. The conclusion draws upon these sections in order to evaluate the importance of all factors, paying strict attention to means, supply and demand. It is important, at the outset, to account for the increase in personal savings and the general post-war overturn of the Depression-era economy. As both a factor accounting for the affluent society, and as an impetus for it, the availability of finance is important. During the years 1939-44 for example, salaries doubled, as a result of overtime pay and an expansion of jobs. Liquid savings sky-rocketed and this improvement affected corporations as well, which benefited from wartime prosperity. William H Chafe, The Unfinished Journey (4th ed. Oxford University Press NYC 1999) p8-10 These factors are certainly important, but certain texts, such as ' Economist Harold G. Vatter produces a more comprehensive view of the post-war economy, however, which suggests this essay will have to look at more complex factors in order to fully account for the emergence of an affluent society. 
Economic history is an important component of any study of affluence, abundance and increased consumer spending if one wishes to discuss these matters in sufficient depth. Harold G Vatter's ' In terms of statistics, this essay does not attempt to challenge Vatter's findings, but rather uses his study to understand why, despite such a moderate economy, certain factors, both at the government and consumer level, enabled the affluent society to emerge after the war. Vatter's statistics, in regard to the US economy as a whole, clearly show that, in terms of GNP, the economic growth in output was lagging behind that of the period 1921-29, at a rate of 2.9% (1951-59) compared to 4.6% three decades earlier. According to Vatter, \"the 1950's seemed to extend a...long-run downward drift in the growth rate of total output.\" Vatter even goes as far as to say that \"the rate of economic growth in the late 1950's was insufficient to stimulate population growth rates as high as those obtaining in the early 1950's.\" Despite these conclusions, however, statistics are", "label": 1 }, { "main_document": "deposed king of Alba Longa was imprisoned by her uncle and became the first Vestal Virgin. However, the god Mars entered her prison cell and impregnated her. She gave birth to the twins Romulus and Remus. However, her uncle discovered her children and tried to dispose of them by putting them in the river Tiber during a flood. They were washed downstream where they were found and cared for by a wolf. Some stories say a shepherd also raised them. Eventually the twins were recognised by their grandfather and they decided to found a city. After much disagreement Remus was killed and Romulus built the city of Rome. His descendants were said to have become the kings and emperors of Rome, which is how Augustus could claim he was related to Mars and the god that Romulus became, Quirinus. There is another, less well-known myth about the foundation of Rome. 
It is documented in Virgil's poem the Aeneid. Aeneas was the son of a mortal called Anchises and the goddess Venus. After the Trojan War, Aeneas leads the surviving Trojans to Italy. The gods make plans for him and he ends up in Carthage where he begins an affair with Queen Dido. However, Jupiter prophesied that he would one day found a great city. The Aeneid describes Aeneas being shown the future history of Rome, including the battle of Actium, which was won by Augustus. (This was Virgil's way of honouring Augustus, as he was asked to, without the poem becoming just another piece of Augustan propaganda.) Aeneas leaves Dido and founds the town Lavinium. His son goes on to found the town of Alba Longa, which was said to be the birthplace of Romulus and Remus. Augustus was believed to be the son of a god through adoption but it was also rumoured that his biological father was a god as well. Augustus always showed favouritism to the god Apollo, who was a sort of patron god to him. The temple of Actian Apollo was dedicated in 28 BC and was built on the Palatine hill, the place that Romulus is said to have taken for his augural quarter when deciding who should name the twins' new city. Augustus' relationship with the god is explained by Suetonius, who argues that Apollo was in fact Augustus' father. Augustus' mother Atia was asleep in the temple of Apollo when a serpent entered her and a strange mark appeared on her body. He goes on to say \"the birth of Augustus nine months later suggested a divine paternity\" (Suetonius). Augustus was able to spread his propaganda about his divinity through the building of monuments, such as the Ara Pacis and his statue, the Primaporta. The Ara Pacis was an altar of peace but was also a memorial to Augustus. It was situated in the Campus Martius along with Augustus' mausoleum and an obelisk that acted as a sundial. The friezes on the outside wall of the Ara Pacis depict a procession headed by Augustus. 
He is dressed in robes and is leading", "label": 1 }, { "main_document": "of Lords did not take account of the realities of modern work because they chose to ignore the individual responsibility of 'a professional' (who in this case was a teacher) to take care of themselves for the benefit of the organisation they work for. 14 Ibid Kelly, P, Colquhoun, D. (2003) In addition to individual responsibility, there is a strong argument for adherence to contractual responsibility. Lord Rodger suggests that it would be unfair on the employer to expect anything less than that which the employee has promised to carry out. However, it is very difficult to see how an employer could prevent psychiatric harm without actually interfering with contractual obligations. In the past, when the industrial sector was developing, it was quite straightforward to impose a duty to try and prevent physical harm as this simply meant ensuring that reasonable precautions were taken, without having to interfere with the contract. However, over the past thirty years the service sector has expanded quite significantly and thus the emphasis of health and safety has now shifted to psychiatric harm. Thus it has been argued that this shift has placed an unfair burden on the employer who cannot now depend on the rights of his contractual agreement. Furthermore, Lord Scott recognises that it is very difficult for an employer to predict who among their workforce will be able to cope with the stresses of a job, especially if the employer is not fully informed of the problems that their employees are encountering. Furthermore, lowering the standard of care from that prescribed by the House of Lords judgement to that presented by the Court of Appeal may give rise to an argument in favour of interfering with contractual obligations as this would therefore balance the employer's rights, by taking account of the modern realities of work, with those of the employee. Ibid. per Lord Scott at para. 
14 However, Lord Walker did not accept that the standard of care offered to Mr Barber was satisfactory. He believed it was reasonable for Mr Barber not to inform the head teacher of his circumstances as frequently or in as much detail as was set out in Lady Hale's guidelines. This is because, as Lord Walker explained. Furthermore, in light of what was known by the head teacher and the two deputy head teachers, that is, that Mr Barber had taken three weeks off school for stress and depression and had approached senior management with his concerns, he argues that positive steps should have been taken to limit the possible damage and improve the situation, so as not to lose such a conscientious and dedicated teacher. He did not accept that the strains of modern work or changes in the workplace placed too high a burden on the employer to do so. However, the House of Lords decision did not seem to look at much more than the interaction between the employer and employee, singled out from the rest of the world. The majority decision looks at the employer's duty of care without taking into account the general circumstances
One main aspect of the role of these half-human, half-animal creatures in myth is that they are always portrayed as evil or ghastly. "The centaur is the only one of the fancied monsters of antiquity to which any good traits are assigned." This in turn leads to the argument that the role of these creatures was to provide literary heroes with a conquest and 'capture' to add to their tale. This can be seen in the majority of heroic stories, such as those of Odysseus, Aeneas, Perseus, and Jason and the Argonauts. Odysseus has to defeat many different creatures while struggling to get home to Ithaca. On his journey he had to sail past Scylla and Charybdis, and as a result of some help from Circe, Odysseus and most of his men survive. In Book 12, Circe gives a description of the creature: "she has 12 feet, 6 long scrawny necks, each ending in a grisly head with triple rows of fangs, set thick and close...up to her waist she is sunk in the depths of the cave" (Bulfinch, T.; Homer). Additionally, on his voyage home Odysseus had to encounter the Sirens, who were known to lure ships to their rocky shores. They were "bird like": "They would lure passing sailors onto rocks; all around them were the whitened bones of their victims." Odysseus was one of the few who survived the passing of these bird-creatures. The cause of this was the help of Circe; she warns Odysseus of "the sirens, who bewitch everybody who approaches them. There is no homecoming for the man who draws near them unawares and hears the siren's voices." She advised him to have his men put wax in their ears to prevent them hearing the songs and being 'lured' to their shores. These encounters arguably make the tale of the hero more interesting and exciting (Grant, M. and Hazel, J.; Morford and Lenardon; Homer). One other creature that was fought and slain was Medusa, the Gorgon. 
"The Gorgons, whose home was somewhere on the edge of the world, usually situated in North Africa." These three sisters were said to look so horrific that anyone who looked upon them would turn to stone. Medusa was the only mortal Gorgon, which makes her important as she was the
The senior managers making the selection decisions tend to be male, and it is therefore not surprising that the majority of expatriate assignments are largely distributed to male candidates (Harris 1995). Female expatriates, on the other hand, have to constantly attract the attention of senior managers in order to be even considered for the selection process. Research conducted by Guerrier (1986) highlights an issue that has, until recently, not been investigated thoroughly in the literature. The author has argued that women themselves are responsible for their lack of success in multinational companies. She found that women in the hospitality industry generally tend to have less self-confidence and self-esteem than men. This means that women were prepared to settle for lower-level jobs in the organisation, and some even supported the same stereotypical views of women as managers as men did (Guerrier 1986, p237). Moreover, recent research conducted by Fischlmayr (2002) shows that these assumptions are persistent and remain a significant factor in the under-representation of female expatriates on international assignments. The author's research was based on interviews with 21 Austrian expatriates from different industry sectors, and found that female expatriates in particular are often a barrier to their own career, as they behave according to gender-based role models. These self-inflicted barriers are perpetuated by behaving according to stereotypical expectations, for example, female team members serving coffee when no one else in the group stands up to do so. The author argues that this behaviour is part of a gender-based education where, from infancy on,
Not only does its cheap labour force mean that the prices of our clothes, computers and cars are less expensive than they might otherwise be, but "it is revolutionising the relative prices of labour, capital, goods and assets in a way that has never happened so quickly before". It could be argued that the importance of cheap labour to the 'western lifestyle' is not a new phenomenon and that countries such as Bangladesh, India or the Philippines have the same importance attached to them. However, there is a distinguishing factor between China and these other countries: the vast size of its population and therefore its potential workforce. China's encouragement of foreign investment has "effectively doubled the global labour force, hugely boosting the world's potential output". Its effects do not go unnoticed in other developing countries, and even in Japan, which feels economic pressure "as production moves to China, where wages are just 4 percent of comparable Japanese manufacturing wages". The opening of China's workforce to foreign companies, and its resulting role in the current nature of capitalism, thus clearly plays an important role in the world today, from the cheap products available to consumers throughout the developed world to the increase in competition among developing countries, particularly in Asia. Breslin, S. (2004) (eds. Buzan, B. and Foot, R.); (2005) "How China runs the world economy". The immediate importance of China as described above will now be placed within the larger context of global capitalism, in order to highlight the importance of China's role in reconciling the contradictions of capitalism. Many commentators point to China's economic dependence on the US, while others argue that the US could not sustain its current hegemony without China's exports. 
An appreciation of these two opinions as evidence of an interdependent relationship demonstrates the force of global capitalism, which "is characterised by profound frictions and conflicts" (Gamble, A. (1999) London: Macmillan Press Ltd, p.103). China has an economic history as an isolationist and largely autarkic state. This obviously includes the post-revolutionary Maoist years of the twentieth century, but also dates back hundreds of years prior to communist China. The motivations behind opening the doors of the Chinese market in the 1970s must be addressed to establish why such a revolutionary change in policy took place. Two factors can explain this decision: firstly, an awareness on the part of the Chinese government that China was "being outperformed by its neighbours, most irksomely by Taiwan". Evidently, the spread of global capitalism had shown Xiaoping that "No country (could) now develop by closing its door..." Further evidence of China's dependence on the benefits of global capitalism can be exposed by observing China's performance in international institutions such as the UN Security Council and, more significantly, the WTO, where it "has been acting for the most part as a system-maintainer, not a system-reformer or system-transforming revolutionary". While some may argue this is simply indicative of an apathy
It is also known that a new batch of materials was used in the experiment and that some materials are therefore expected to behave in a more ductile and softer manner (?). The graph shows that the maximum hardness under precipitation hardening (artificial ageing) is greater than the maximum hardness achieved under natural hardening. By accelerating the rate of diffusion, precipitation hardening increases the amount of 'intermediate coherent' precipitate, thus raising the strength and hardness of the alloy. As the graph shows, there is usually an optimum point, since as the material gets hotter the grains grow, increasing ductility, and the hardness readings will therefore drop. At the level of the microstructure, this occurs when the composition reaches its saturated normal state, where the material reaches maximum hardness. However, the precipitate continues to grow at the expense of the finer precipitates. As a result of this disappearance of the fine precipitates, the strength and hardness decrease. For this reason, perhaps, the phenomenon is called "over-ageing". The following conclusions are reached from this investigation: Alloys are heat-treated in order to vary their mechanical properties. Heat treatment can be applied to steel to harden it as well as to improve its strength, toughness or ductility. Quenching, depending on the cooling rate, aims to avoid the critical cooling rate (C-curve) in order to produce a hard (but brittle) martensitic structure. Fully hardened carbon steel is brittle, and the stresses resulting from the quench would make it especially unsuitable for most applications. It is for this reason that the steel is tempered, to release stresses while exchanging brittleness for toughness, as hardness drops with tempering temperature. The aluminium alloy derives its strength from solid solution and precipitation hardening. 
The properties of strength under precipitation hardening were found to improve over those obtained by natural age hardening, when samples were tempered at 270", "label": 0 }, { "main_document": "and grow polymer chains of different lengths, depending on the rate of propagation for each initiator. However, if crossover reactivity does occur then the polymers produced will depend on the rates of exchange as well as propagation. Two initiators of each type were used in the paper to polymerise styrene (Figure 1). The polymerisation conditions satisfied both NMRP and ATRP functionality. The experiments were carried out at 135 Control experiments and 3 combination experiments were carried out (Table 1). Experiment PS-4 produced a monomodal polymer by gel permeation chromatography (GPC), which presented three mechanisms for the propagation; either To determine if the first mechanistic scenario occurred and if the ATRP initiator forms polymer initiator Again the experiment led to a unimodal polymer by GPC, with the traces at 254nm and 320nm being superimposable. This showed that the polymer produced contained ATRP initiator. To determine if the NMRP initiator also formed polymer 0.5 eq. of difunctional initiator This experiment had three potential outcomes. If only one of the initiators initiated and propagated, the polymer would be unimodal, but would only have a trace at 350nm for initiator If both initiated, the polymer would be bimodal at 254nm, but unimodal at 320nm. However, if both initiated and the end groups exchanged more rapidly than propagation the polymer would have a similar GPC trace to the previous scenario, except hydrolysis of the polymer would produce a unimodal polymer at 254nm that is super-imposable with the trace at 320nm. 
The polymer produced fulfilled the last scenario, producing a polymer which was bimodal at 254 nm, with peaks at ~30,000 g mol⁻¹. On hydrolysis the polymer became unimodal at 254 nm at ~15,000 g mol⁻¹. From the results of the experiments it is clear that either both types of initiator initiate and propagate independently but with a similar rate of propagation, or the end groups exchange rapidly relative to propagation, causing a single rate of propagation. However, the control experiments show that the reaction conditions affect the ATRP and NMRP processes. Therefore, it can be concluded that both types of initiator work through a common propagation mechanism. For this to occur the end groups must be exchanging at a faster rate than propagation. Therefore, the final conclusion of the paper was that the NMRP and ATRP mechanisms do not operate independently, due to end-group crossover, which leads to polymers that grow at a common rate and have a low polydispersity index. However, these experiments were unable to determine whether ATRP, NMRP, or both mechanisms caused chain growth.
Conventionally seen as repressing our natural sexuality through guilt and fear of God, Foucault believed it instead helped in the "production of discourses of sexuality" (Haug, 1987:191). The church decreed that sexual actions should be endlessly talked about and spoken aloud. Every act was considered in relation to sex and whether it was worthy of confession; sex became a secret to be uncovered. Rules and boundaries created a sexuality that was meant to be detached from lust and temptation, with the confessor turning back to God (Foucault, 1976:23). Additionally, Foucault has influenced our understanding of sexuality and discourse through its relation to 'power'. Power is conventionally seen as a right, "which one is able to possess like a commodity" (Gordon cited in Smart, 1989:6). Power (and repression) is a movement from 'the top' downwards, exercised through the law and the state (Foucault, 1976:85-88, Haug, 1987:193, and Smart, 1989:6-7). Foucault rejected this, believing in power's 'omnipresence' (Dews, 1984:92, Foucault, 1976:93-98). Power is not owned by any particular source, nor does it result from any one decision, but is exercised and produced at every point. Power is extended throughout (sexual) discourses other than the state, and "comes from below" (Foucault, 1976:94). Moreover, power is not simply a negative sanction but can be positive; it is productive and does not simply say 'no' but can say 'yes' (Smart, 1989:7). However, Smart is not entirely convinced by Foucault's notion of power, and therefore questions his contribution to our understanding of sexuality (as power underpinned his history of sexuality) (Smart, 1989). Smart stated that Foucault believed the old power (discourse of rights) was diminishing, and being taken over during the onset of modernity by new knowledges and discourses surrounding sexuality, such as medicine and science (discourse of normalization). 
Foucault looked at how certain discourses claim to speak the truth and are therefore able to exercise power. For example, simply claiming to be a science is an exercise of power, as "other knowledges are accorded less status" (9). One of Foucault's flaws is that he never compares the scientist's claim to truth with the lawyer's, which, according to Smart, runs in parallel (9). Law has its own claim to truth, its own systemized language and methods. It sets itself above other discourses in the same way that science does; Smart is therefore "doubtful that law is simply being superseded" (14) by science and
As a kind of autosegmental approach to the intonation system, ToBI (Tone and Break Index) originated with Pierrehumbert (1980: 87 in Cruttenden 1997) and is used mostly in American academic research. It applies only two tones, High (H) and Low (L), to the analysis and presents a quite new view of the intonation system. It is necessary to introduce five types of symbols used in ToBI (Cruttenden 1997: 60): Other symbols such as ' Therefore, if tone unit (2) is analysed in ToBI, it may look like this. (2) (10) After this rough introduction to both systems, the advantages and disadvantages of each can be noticed through a comparison between them. 1. The application of different notations usually conveys a direct impression to readers. Undeniably, symbols such as ' ', ' ' and ' ' do provide simple and visual guidance for readers, with appropriate explanation, but are quite difficult to type on computers. By contrast, ' * ', ' ! ', ' % ', etc. are quite meaningless and abstract codes which are difficult, especially for learners, to remember, a point also made by Roach (2000) and Cruttenden (1997); however, the system is easy to use on computers. 2. In attempting to describe natural speech, Roach's system seems to have more limitations than ToBI. One is the analysis of more than one tone movement in a tone unit. Roach generally summarizes five situations as mentioned above; nevertheless, the natural intonation system is not as simple as Roach's generalization. Considering the following example (11) indicated in ToBI
The 'current' view of Trinkaus and Shipman in their 1993 publication was that Neanderthal characteristics were indeed evidence of a harsh environment: that their compact bodies, limbs and digits were adapted to conserve heat in near-arctic conditions, and that their large noses had evolved to prevent moisture evaporation in the cold, dry conditions while also allowing heat dispersal during exercise. They identified the fact that although the Neanderthals possessed the ability to clothe themselves, find shelter and create fire, their subsistence strategies were unequal to those of later humans (Trinkaus & Shipman 1993). During the process of research for this paper, it has been observed that a majority of Neanderthal fossils derive from Europe and few from the Near, Middle and Far East. A comparison was proposed of contemporary fossils from both the east and the west, in an attempt to prove or disprove the climatic adaptation hypothesis; however, difficulty arose in identifying two fossils of the same date. The original fossil from the Neander Valley, dating to 40-50 kyrs, was chosen as the basis of comparison, due to its 'classic' Neanderthal skeletal traits. As Tattersall (1999) suggests, faunal evidence proves that the Neander specimen and others, including Kebara, are contemporaneous; the Kebara 2 fossil, found on Mount Carmel, Israel, was therefore introduced as the Eastern example. The Neander Valley specimen (fig 2) includes bones representing almost the entire skeleton. A long, low upper cranium displayed the prominent brow ridges of a 'classic' Neanderthal; however, the lower cranium and mandible were missing. The robust skeleton possessed bowed long bones with large joint surfaces and well-developed muscle attachments (Johanson & Edgar 1996), possibly relating to a strenuous lifestyle (Tattersall 1999). 
The Kebara 2 fossil (fig 3), missing the cranium, right leg and lower left leg, was found in association with tools of the Levallois technique and burnt flint, allowing dating using thermoluminescence (Johanson & Edgar 1996). There is a contradiction between Johanson & Edgar (1996), who report that this individual was less robust than other examples from the Near and Middle East, and Tattersall (1999), who suggests Kebara 2 is the most heavily built Neanderthal fossil. At 1.7 m, Kebara 2 is certainly one of the tallest Neanderthal fossils, taller than the average European Neanderthal (Johanson & Edgar 1996); the 'classic' retromolar gap was, in the absence of the cranium, evidence of a forward movement of the mid face; and the chin was minimal. Unusually, the hyoid bone, used for the muscle control required for speech, was recovered, leading to further discussion of the communication skills of Neanderthals. Both fossils display a common skeletal trait of the 'classic' Neanderthal: the heavy bones. At this point, partially due to the absence of many bones, the similarity diminishes. The Kebara male was tall, the Neander specimen shorter; the Eastern fossil's long bones were straighter than the bowed examples of the German individual, who also possessed larger joint surfaces; and observation of the long bones identifies the Neander example to
For what it wants to express is a particular, a singularity that presents itself in sensuous immediacy, such as. Through the very way sense-certainty attempts to remain intimate with the full reality of its object, by maintaining itself as a thoughtless pointing, it effectively frustrates the possibility of grasping the object at all; for it cannot (PS, p. 59). So, from the assumption that immediacy is the most adequate way for consciousness to grasp its object, what emerged was the realization that this kind of immediacy is entirely indeterminate. The dialectical movement is "at once a" (PS, p. 67; PS, p. 68). The object of perception is no longer the immediate 'this-here-now' of sense-certainty, presented as unrelated sense-data, but is rather grasped as a 'thing' that is mediated by numerous universal properties. The object is no longer the way it is because of some sort of intending or pointing on behalf of the subject, but is seen as manifesting a certain number of common properties which serve to determine it (e.g. the thing is spherical, white, salty, etc.). The test at this level is the same as at the previous one: we take a long hard look at the form consciousness has taken and wait to see if, through the unfolding of its own position, it can maintain its integrity. The more particular question in front of consciousness is now: can perception individualize its object? Also, it is only the 'thing-ness' of the thing that is responsible for the gathering together of what are in themselves independent and indifferent properties; the thing is 'this' and Perception's way of grasping its object, however, leads it into contradiction. Consciousness perceives an object as a singular thing distinct from other things. However, the properties which make the thing what it is are universal, and so are properties of many things; hence, the thing's supposed singularity comes into conflict with the universality of its constituting properties. 
Further, although these properties are understood to be determinate, i.e. they in themselves exclude their opposite, there are many properties Perception's object is at once exclusive and not-exclusive. What perception lacks is an ability to grasp a thing with universal properties that are nonetheless inextricably As long as the properties of a thing are thought of as independent of one another, consciousness will never be able to account for how universal properties can determine a thing in its unique identity. Lauer, Quentin. The dialectic continues: perception's inability to determine its object points us forward. Its object manifests itself to the senses as a group of properties, but perception's status as a mere \"spectator consciousness\" Only if the interrelationship of these universal properties can be grasped will the object be determined as individual; only if the thing can be understood", "label": 1 }, { "main_document": "again for 30 seconds. The organic (upper) and aqueous (lower) were separated. The lower layer (aqueous) were ran off and the upper (organic) layer collected in a conical flask. The aqueous layer were transferred back into the separating funnel and re-extracted with 15ml of petroleum ether. Again the (aqueous) into a conical flask. The two organic extracts were combined in the separating funnel and washed with 10 ml of water. The water layer ran off and the organic layer was washed with 25ml of salt solution. The lower (aqueous) layer ran off. The organic solution were transferred into a conical flask ad it was shaken with 5g of anhydrous sodium sulphate to dry it. The organic extract was filtered through a fluted filter paper into a weighed round-bottomed flask. The solvent was evaporated by using a distillation unit with \"tap-off\" and finally the residual solvent was evaporated by using a rotary evaporator until it reached constant weight. The flask was weighed and the percentage of the fat was calculated. 
From the results we see that the net weight of fat in our meat sample is 0.668 g. We have already mentioned that the sample we weighed is 3.006 g. As a result, we can conclude that 100 g of the sample contains approximately 22 g of fat, so the fat content of our sample is 22%. 10 ml of sulfuric acid were put into a milk butyrometer, 10.94 ml of milk were added, and also 1 ml of isoamyl alcohol. The butyrometer was stoppered using the keys. It was placed in the protected stand and inverted. The butyrometer was centrifuged in a Gerber centrifuge at speed 8 (1000 rpm) for 5 minutes. After that the butyrometer was placed in a water bath at 65 The percentage of fat was read directly off the scale. We did not do any calculations, since the result was obtained simply by reading the percentage from the scale. Readings to an accuracy of 0.05 per cent are usually adequate for routine purposes. As it is difficult to separate the small fat globules in homogenised (e.g. sterilised) milk, it is advisable to re-centrifuge after warming in the 65
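As a quick check of the arithmetic above, the reported fat percentage can be recomputed from the two measured masses (0.668 g of fat recovered from a 3.006 g sample); the snippet below is only an illustrative check, not part of the experimental procedure:

```python
# Check of the fat-content calculation from the measured values in this report.
fat_mass_g = 0.668     # net weight of extracted fat
sample_mass_g = 3.006  # weight of the meat sample

fat_percent = fat_mass_g / sample_mass_g * 100
print(round(fat_percent, 1))  # ~22.2, i.e. about 22 g of fat per 100 g of sample
```

This confirms that rounding to the nearest whole percent gives the 22% quoted in the results.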
In expansion of this, Expectancy theory has been developed by Vroom, whose theory is based on the belief that employee effort will lead to performance and performance will lead to rewards, either positive or negative. The more positive the reward, the more likely the employee will be highly motivated. Conversely, the more negative the reward, the less motivated the employee is likely to be (Wilson 2005, p48). Vroom (Buchanan and Huczynski 2001, p79) highlights the importance of valence, instrumentality and expectancy as key issues in effective motivation, where Motivation = Expectancy * Valence E = subjective probability (expectation that the behaviour will lead to a particular outcome) V = Valence (strength of preference for the outcome) If performance and rewards are not strongly correlated, and the balance of effort to the level of performance required is seen as excessive or unrealistic, the process of motivation will be compromised (Maund 1999, p120). As an example, the Wallace Company, winner of the prestigious Malcolm Baldrige National Quality Award in 1990, can be used to exemplify the expectancy theory (Halepota 2005, p14). At Wallace, employees had sufficient authority to work on their own and they were trained to improve quality. The reward for improvement was the Baldrige Award, and the workers were aware of the value of the reward, which kept them motivated to continue working hard. However, this theory is based on the assumption that we are consciously aware of our goals and motives and that individuals are rational and objective, whereas they are more likely to be irrational and lack objectivity (Maund 1999, p109). According to Expectancy Theory, a manager would need to (Maund 1999, p109): determine the primary outcomes each employee wants, decide what levels and kinds of performance are needed to meet organizational goals, make sure that desired levels of performance are possible, and link desired outcomes and desired performance. 
Different types of cultures were developed by Charles Handy (1986) which are Role, Task and Power Culture and moreover there are two conflicting views of the culture, analytical and applicable, developed by Wilson and Rosenfield.( Fincham and Rhodes 2005) Analytical school argues that the Culture is a socializing force, which controls behavior of members, where as applicable school argues that the culture should be viewed in terms of commitment to central goals and as a means of managing successful", "label": 0 }, { "main_document": "Elective Surgical Referral following outpatient visit to cardiology. Admitted on the Came into hospital on the bus. The patient was seen post operatively, but was able to give me a history of her condition before she had her operation and treatment. Shortness of breath and a cough. Presented to cardiology outpatients in December 2003, with a persistent dry cough and shortness of breath. An ECHO (echocardiogram) was performed. It was discovered that the patient had aortic regurgitation, with an early diastolic murmur, radiating to the neck. The left ventricular systolic function was good and an operation to insert a prosthetic valve was planned. The shortness of breath and wheeze present for most of the time, waking the patient approximately twice every night. To sleep the patient needed four pillows and was unable to lie flat without getting breathless. Patient also experienced some night sweats. The patient reported that her ankles were never swollen. Cough was persistent, phlegm was white and frothy mainly brought up at night. Exercise tolerance was 50 yards on the flat. No chest Pain, palpitations, intermittent claudication, cold/blue hands or feet and no syncope. Hypertension (unknown when this was diagnosed) Hypothyroidism (Diagnosed and treated since 1991) Appendicectomy (1960) Voice Problems (2002), Received speech therapy for 7 months. 
No previous Myocardial Infarction, Ischaemic Heart Disease, Cerebrovascular accident, Diabetes, Rheumatic Fever, Pneumonia, Asthma or Tuberculosis, Penicillin Mother died when patient was 20 months old from ruptured aortic aneurysm, also had rheumatic fever Mothers Sister died from heart problems and had rheumatic fever. Father died aged 73 from a stroke Lives alone in a flat, no stairs Smoking - 0 pack years Alcohol - 0 units/week Retired aged 72 previously a nurse. The patient has a good social network and normally manages to do everything for herself. Recently has had some help with cleaning and shopping due to the shortness of breath. RS and CVS, see history of presenting complaint GIT No indigestion, abdominal pain, changes in bowel habit, problems with swallowing, heartburn or nausea and vomiting. Weight loss of approximately 7lbs since being in hospital and the patient's appetite has been reduced since being admitted. GUT Some stress incontinence when coughing but no other urinary symptoms CNS Occasional migraine, Some dizziness since admission, No faints, fits or blackout. The cause for the presenting complaint is already known as the patient is an elective admission and was seen post operatively. The incompetent aortic valve allows regurgitation of blood from the aorta to the left ventricle during diastole. This produced the early diastolic murmur seen at the cardiology clinic. The patient was experiencing shortness of breath, orthopnoea and paroxysmal nocturnal dyspnoea due to oedema. Oedema arises because the incompetence of the aortic valve leads to an increased workload for the left ventricle. In response to this the cardiac cells hypertrophy and the hydrostatic pressure increases in the pulmonary circulation resulting in fluid accumulation in the lungs. As the operation was successful and a soft prosthetic aortic valve fitted upon examination I would expect to find that the patients symptoms had been relieved. 
I", "label": 1 }, { "main_document": "fastest growing industries all over the world. The faster it grows, the more the destructive power it possesses and the life is become shortened. If it is well-planned and well-managed, tourism can be a positive force, bringing benefits to destinations around the world; in the reverse, if it is poorly planned and managed, tourism can be an engine for degradation. It is clearly in the interest of the tourism sector to maintain and sustain the basis for its prosperity, thus sustainable tourism should be promoted in every corner. According to the definition by WTO, it is defined as \"Sustainable tourism development meets the needs of present tourists and host regions while protecting and enhancing opportunities for the future. It is envisaged as leading to management of all resources in such a way that economic, social and aesthetic needs can be fulfilled while maintaining cultural integrity, essential ecological processes, biological diversity and life support systems.\" (UNEPa, 2002) Sustainable tourism business is the business operating within the tourism industry, which takes a responsibility on the environmental concern. Take hotel as an example, it is one of the sector in the tourism industry. As the issue on environmental concern is arising recently, hotel managers try to find out lots of way to reduce the energy consumption. Under this circumstance, the hotel is a sustainable tourism business because it strives for protecting the environment. In this report, one stakeholder/business is included for analysing its operating environment in the Liverpool tourism industry by applying some theoretical models of strategic management and then finding out how the stakeholder/business do to maintain the sustainability. As an up-scale 4-star hotel, its primary role is to offering food, beverage and sleeping accommodation with a superior quality to its customers and tourists. 
Through the name of European Capital of Culture, Marriott Liverpool City Centre Hotel is certainly able to get benefits from the \"08 project\". However, in order to access the opportunities for Marriott Liverpool City Centre Hotel to expand their achievement to get more chances offered by the event. According to the Hill & Jones (2004), SWOT analysis is to identify the strategies that will create a firm-specific business model that will best align, fit, or mach a company's resources and capabilities to the demands of the environment in which it operates. Here, SWOT analysis is used to recognize the resources and capabilities of Marriott Liverpool City Centre Hotel with the opportunities together to gain the advantages through the \"08 project\" , and also to identify its weaknesses and the threats it will face so as to find means to minimize their influences. Marriott Liverpool City Centre Hotel is owned by a listed company called Whitbread plc and its mother company is Marriott International Inc., which is a leading worldwide hospitality company. Thus, it has a well-known brand name. Marriott started operating since 1927, it has rich experience and knowledge in providing excellent services to its customers so that it was always given lots of awards (Marriott International Inc., 2004). As Marriott has a good reputation in the accommodation sectors, it", "label": 0 }, { "main_document": "services, and a possible slowdown in innovation. Nonetheless, with the sector's ecocycle in account, innovation can also be viewed as a process by which small improvements in diverse areas of functioning-\"functional differentiation\" (Ohmae, 1982; p. 97)-can indeed assist hotels to maximize their strengths, minimise weaknesses, improve their competence and potential as well as create new opportunities for growth (see CSF). 
Most well-established hotel companies' have vast portfolios, with significant presence in key locations, considerable and comparable economies of scale as well as substantial congruence and profitable management of critical success factors (Appendix 1 and CSF). Those who pioneer the developments tend to be the main players themselves and this may again be justified by their significant power generated from economies of knowledge and scale. The combination of the stage of the ecocycle, economies of knowledge and the source of the main developments being from other industries, all contribute to the availability of innovation to all, and a significant degree of imitation. Also, given that the innovations are mainly from other industries, and therefore extensively accessible to all companies, consumers are not as impressed as they would be if a development stemmed from within the sector and an organization in particular. The high level of similarity between brands, their products and services indeed confuses guests (Brandimension, 2006; Ohmae, 1982), especially those who are not loyal to any specific hotel organization, and should be a cause for companies to dare to go beyond improvement, development and imitation and challenge existing practices, products and services (Brandimension, 2006). A reflection of critical success factors and their importance is the development of technology and its diffusion in the sector. Innovations in this area revolve mainly around deals between global hotel brands and development companies, focused primarily on improving internet access for guests, the introduction of VoIP in rooms, and enhanced in-room facilities (Appendix 3). Technology has spread into the different departments and innovations such as hand held POS systems for food and beverage operations and mobile wireless check-in systems for the convenience of guests. 
In addition, kiosks with boarding pass printing, the guarantee of the cheapest online booking rates through constant monitoring of internet prices and the improvement of central reservations' systems which have enabled cutting out intermediaries, are becoming the norm (Appendix 3). Technology has also found its way into intensely personalising the guest experience, through the emergence of an interactive suite selection tool, where extended-stay guests can choose their suites, upon reservation, according to their preferences. The iPod innovation has been integrated in staff training, where mobile learning will be provided through specifically developed programs for the software and be made widely available (Appendix 3). The emergence of new hotel concepts mirrors not only trends in various industries but also changes in society. Designer, lifestyle and extended stay hotels, serviced apartments, vacation ownership or partly owned residences, though a potential substitute threat, represent inevitable adaptations to ever-changing needs. They may therefore be regarded as innovations, though their impact on the sector and society is highly restricted, as these transformations show high levels of adaptation, limited", "label": 0 }, { "main_document": "and I have assed the goal using the SMART system and have shown how the goal will be implemented. This is linked to communication, which is explored in the next section, as the goal will have to be communicated to the workforce. There are three functions of leadership; these are motivation, communication and encouraging teamwork. Communication is a vital part leadership as the ability to communicate will affect motivation and the success of the business. A manager will spend 80% of their time communicating this can be done through a variety of channels such as face to face conversations, over the telephone, via letters and emails, or through items on newsletters or on notice boards. 
Communication is a part of every function of management and without communication the manager would not be able to explain plans, share visions or implement changes. Communication as defined by Daft (2002) is \"the process by which information is exchanged and understood by two or more people, usually with the intention to motivate or influence behaviour\". This means that communication is more than just a manager talking to employees and telling them what is expected, it is about interaction and the sharing of information and ideas. Managers need to interact with the employees as this will give them a better idea of the state of the business and how it can be improved. The ability to affectively communicate is a difficult skill to master, many times people end up conveying the wrong message to people, whether this be at work or in our every day lives. The communication process is made of five parts: encode, message, channel, decode, and feedback. The sender of a message encodes a messages sends it via a channel such as letter or email, the receiver then decodes the message and then delivers feedback. It is during encoding and decoding that mistakes in what is being communicated can occur, the knowledge, attitude and background of the receiver maybe different to that of the sender so they may interpret the message in a different way. There are many ways of communicating within an organisation and each one is used to convey different information, it is important that the manger selects the right type to use in the situation. The commutations can be found on a scale that runs from low media richness an example of this is reports to high media richness which is face to face talks. Low media rich channels should be used for information that is already agreed or understood, or when a record of what is said is needed. 
Media rich sources should be used when the message is likely to be misunderstood, or when a quick response is needed, media rich sources are also more personal. Communication is not solely about talking, listening is also a key part and without it communication is not taking place as 75% of effective communication is listening, but most are unable to does this, so mistakes in communication occur. There are 10 key skills that are essential to listening, these are: A good", "label": 1 }, { "main_document": "between fully skilled machinists and semi-skilled machine minders. It is inevitable to downsize the numbers of workforce during the changing of machine shop, but it is not a pretext to fire employees. In general, the ones who are cut down during the process of changing are excellent employees; they will form core power for the implementation of new project and new product line. So, Sun Company will not lay off any employees. In functional layout, organization usually has developed through function department, such as test department, maintenance department, material department and quality control department. A cellular layout emphasizes product-based, and the organization should be built based on products or product group. Cell organization simplifies the difficulty of management. Each cell has its own team leader who is in charge of both workers and machines. Cell manager should be responsible for daily operations such as forecasting, scheduling, inspection and maintenance. General Manager could be easily control the whole manufacture and operations through managing cell managers. A strategic view of quality is one which sees quality as a long-term competitive requirement, an ongoing, unending means of outperforming competitors. With the globalization, companies have to confront with the competition from all over the world, and customers also have great amount of choice for one product, these make quality become more important in strategy aspect. 
The sun company often pulls products from finished goods and rebuilds them for custom orders that, further, lead to overdue of order and increase customers complaints and dissatisfactions. In order to change this situation, Six Sigma will be applied to improve our quality. Harris [7] claims that Six sigma is a disciplined, data driven approach, it concentrate on continually improving process quality and productivity. Although the process quality is not focus on reducing cost, cost will undoubtedly reduce in successful quality drives because of the eliminating of waste, excess and other non value-added activities. GE is a good example. Six sigma improvement projects include five steps: Define, Measure, Analyze, Improve, and Control (DMAIC). A set of tools (charter, TMAP, process map, measurement system evaluation, VOP, etc.) can be used to guarantee the successful implementation of DMAIC. We still need some employees as black belt and green belt who are trained individuals responsible for leading six sigma teams through the DMAIC process. Project team will be responsible for the improvement of quality and control. Production planning and control aims to assure efficient and effective process of operation and manufacture of product as required by customers. Nigel [8] claims that how to reconcile supply and demand in terms of volumes, timing and quality, is the most significant purpose of planning and control. There are four over-lapping activities which should be considered during the process of reconciling volume and timing. Source: operation management In order to make production planning and control system run effectively, computers have been used. 
According to Vollmann [9], planning and control system based on computer system can help manager to efficiently manage the flow of materials, effectively utilise people and equipment, co-ordinate internal activities with those of suppliers, and communicate", "label": 0 }, { "main_document": "of the breast, often in the absence of a mass, may indicate inflammatory breast cancer, which is associated with a poor prognosis (8). The nipple should also be briefly examined for signs of any obvious lumps or evidence of asymmetry, inversion or discharge. The tail of the breast, the axilla and the neck are then assessed for lymphadenopathy. If any abnormalities are found it may also be necessary to examine the abdomen and spine for evidence of hepatic or bone involvement respectively (8). The second stage of the triple assessment investigation usually involves either mammography or ultrasound imaging techniques. Mammography is extremely effective at showing lumps and areas of calcification within the breast, therefore in most circumstances it is possible to predict the nature of a lesion solely by its appearance on the mammogram (30). For example, benign lumps such as fibroadenomas are usually very discrete with well-defined margins on the X-ray, whilst the invasion of malignant lesions into surrounding structures results in poorly defined, irregular margins. Mammography does have its limitations though, particularly in association with nulliparous women or in those less than 35 years of age. This is due to the high percentage of connective tissue they exhibit, which promotes X-ray absorption and therefore results in a mostly opaque film. Large regions of opacities may also be seen in women over 35. These opacities appear as compression of normal breast tissue creates the appearance of apparently dense regions on the film. However as two X-ray planes are now assessed (i.e. 
cranio-caudal and mediolateral oblique views), any abnormalities can usually be ruled out by examining the tissue in the opposite direction. Women with large regions of opacities must therefore be reassured that they do not always mean bad news. Furthermore, the patient's age, parity and circulating hormone levels can all influence the appearance of the mammogram. In either circumstance, it can be difficult to detect abnormalities when dense tissue is present, because they do not stand out from the surrounding tissue. In such patients where mammography is of limited value ultrasound imaging may be more beneficial. Another finding often seen on mammograms is micro-calcification. Tiny speckles of calcium can be seen clearly, because they do not allow X-rays to pass, so appear white on the film. It is known that a number of breast problems can cause microcalcification to occur, most of which are usually benign. However, it is important to note that calcification of the ducts is an early indicator of DCIS, which may be observed before they are clinically palpable (5). Ultrasound scans (USS) are of great value in the investigation of breast lumps and cysts particularly in younger women who have dense breasts, as cysts and solid lumps have characteristic appearances which allow them to be easily identified (Figure 5.1) (31). In addition, the use of USS prevents unnecessary exposure to radiation in these populations. USS is however less useful for areas of general \"lumpiness\" where there is no discrete abnormality to feel, as the quality of the image is restricted by the resolution of the scanner and", "label": 1 }, { "main_document": "tango only gained marginal recognition as a truly Argentine metropolitan dance, as in the poem 'Milonga' by Oliverio Girondo. Rowe and Schelling, Oliverio Girondo, 'Poems to Be Read on a Trolley Car' in A second exponent of tango, nevertheless, soon found through tango a new formula for political criticism, woven into poetical lyrics. 
The song 'Cambalache' by Enrique Santos Disc 'Those that don't cry don't feed', Santos sang, 'and those that don't steal are fools'. Enrique Santos Disc Despite this loathing with politics, the middle class adopted the tango as an aspect of the new, dynamic and modern culture that Argentina was now adopting. In this, the cinema played a big, yet not unambiguous role. Argentina was steadily adopting the Northern American cinema culture. In 1930, around a thousand film screens could be found throughout the country. By the late thirties, fifty such films were produced a year, also serving as a charming and successful export product for other Latin American countries. John King, Ibid, p. 11. Ibid, p. 38. While the metropolitan immigrant gained commercial ground and mass appeal, other cultural forces were working from above, posing questions at a more political and intellectual level. In the essay, Ezequiel Mart Jews, who were often particularly victim of racism, attempted, with relative success, to bring the issue to attention in the theatre. A number of Jewish playwrights wrote plays on Argentine Jewish families and their cultural problems, with strategies varying from the complete abandonment of any distinctive Jewish identity', Ezequiel Mart David William Foster, Foster, Ibid, p. 111. The literature of Jorge Luis Borges forms a totally separate category in Argentina's intellectual life of the thirties. Whilst his outlook towards nationalism was complicated and peculiar, both the political left and right came to agree 'that what the fictions display is mastery'. By the 1930s, however, Borges came to reject Mart Instead, he proposed that Jean Franco, Martin, Franco, Martin, Jorge Luis Borges, cited in Martin, This deconstructionist view of Franco, Ibid, pp. 330-31. 
Williamson, Martin, The 1930s as a definable decade for Brazil started in October 1930, when a group of young army officers overthrew the long-standing S Gradually, art forms and cultural expressions of the lowest classes (in Brazil this mainly meant the former slaves) became accepted by a wider public, and eventually even representative as national symbols. The film genre of the Already in 1931, many of the writers in the Modernist movement were aware of the ideological clash that the Vargas government was driving the cultural community into. Williamson, Robert Stam, Ibid, p. 82. Mike Gonzales and David Treece, Martin, While in the early decades of the twentieth century public concerts of Whereas previously lovers of Amidst the still prevalent racism in Brazil, Gilberto Freyes presented his theory of 'racial democracy' in 1933 with the publishing of his book In it, Freyes argued that the contribution of Afro Brazilians added to the rich cultural mix in which Brazilian tradition was embedded. However, now supported by the state, who saw in the lower classes an important element for the maintenance", "label": 0 }, { "main_document": "directory for example): The simple schema below illustrate this structure, Where A and B are two Link_T. Directory_S represents the content of a directory. Actually \"files\" represents a list of the files contained in the directory and \"dirs\" a list of the files contained in the same directory. \"files\" and \"dirs\" are both FileArray_T which is explain in the next step, FileArray_S is a type which represents a list (queue) of files or directories. It contains 3 attributes: Entry_S is a type which represents an entity in the queue. An entity has a Of course, there is some others data structures but we define here only the most important for the futures modifications. We can simplify these structure by the below schema. 
The content of a directory is represented like that, This program contains a Then, the synchronisation is executed by a function called This function will analyse the contents of the directories (source and dest), then process to the synchronisation. In this part, we will explain how the synchronisation is done. First of all, this function initialize some queue: excludedDirs, excludedFiles and excludedRegex calling the function QueueInit(). These queues are used to store the excluded Dirs, Files and Regular Expression which are specified in the command line. Then, a loop is used in order to analyse the command line and read the options and argument (calling the function getopt()). This part will be useful when we will modify the program. The buffer initialisation is then done (we can define the buffer size in the command line). The buffer is the size used to copy the file (this mechanism is explained below). Finally, This function is very important. We have to understand how it works in order to process to the modification. This function receives two arguments (two char*): source and destination which are the source directory and the destination directory. First of all, In order to fill it, the method \"insert in head of queue\" is used. These structures are Some basics functions are written to access, write, read etc...elements in this structure (for example ). Once these structures filled, the synchronisation contains 4 steps. In the next part, we will explain these steps and some functions calls. This step is done iteratively by reading every file name contain the queue filled before. Of course, the file if deleted if and only if the option '-r' didn't specify in the command line (this option does not allow the elimination of the missed file. This function calls some simple others functions contained in the same file ( This step is similar than the previous step. However this step read every directory name contained in the source directory. 
This step will copy the files from source to destination when it's necessary. If the file is not in the destination directory, then it will be copied. If the file is already in the destination directory, then this function will analyse the file stat (date of last modification and file size) contained in the source directory and determine if it has to be copied (\"yes\"", "label": 0 }, { "main_document": "importance of relations between the visitor, the resident community and the place itself. Not a particularly groundbreaking study, but builds on past research. There are many previous studies of Waikiki. As well as a well recognised destination, it provides 13% of jobs in the state of Hawaii and 10% of gross state product. Respondents to the research survey tended to be long-standing residents of Waikiki, with an average age of 56.5. It can be assumed that they responded because, having lived there a significant time, they feel a commitment to the area that more recent arrivals may not. Older residents were found to be less likely to accept change. Natural and park space were central to respondents' concerns, reflecting their choices of leisure activities. 92% desired improvements to be made to the Ala Wai canal; Waikiki beach was the feature most liked by local people, followed by parks. Respondents desired even more parks and open spaces, suggesting that they want an 'enhanced outdoor recreation experience' (p.440). Some concerns about maintenance of facilities. Despite tourism and other commercial activity, 58% did not believe the area was overcrowded. The major concern was traffic congestion. Specific recommendations include: Revitalising Waikiki through identifying Waikikian culture and involving local people in it. Creating a 'sense of place'. Minimise the commercialisation of culture. Involvement with locals is preferable to created-for-tourists representations. 
Stimulate partnerships between local residential, business and Government to broaden citizen input. Despite the initial aim of applying the findings to other mature destinations, all recommendations made are very specific to Waikiki. Initial research questions are somewhat vague, e.g. 'what are the resident's opinions of destination features?' For example, questions and responses could be grouped specifically into natural and built environment, or opinions on facilities designed primarily for resident use, and those tailored towards tourists. The study considers only very few elements that residents may like or dislike ( Many others may also be relevant. Own research in Waikiki. Previous academic papers on Waikiki as a tourism destination and on tourism development. Dinnel, T. (1997), Cooke, K. (1982), 'Guideline for socially appropriate tourism development in British Colombia', Need for research identified: Absence of research into host families and their relationships with guests. Significant industry interest in 'small end' accommodation. Concern over quality management. It is generally accepted that the quality of the bed and breakfast/farmhouse stay sector needs improving. Guests demand higher standards. Ambiguity regarding host-guest relationships: Are hosts' interests and behaviour predominantly commercial or domestic? Aims to describe the main characteristics of the homestay sector and 'test the hypothesis that host attitudes differ on the basis of their commercial orientation' (p.123). Homestay tourism results in high intensity host-guest relations and relatively large economic benefits. 43 businesses identified as falling within the homestay sector. 206 interviews conducted with host families. Analysis of hosts using variables in family life cycle and commercial orientation. Hosts generally female, between 31 and 60 years of age. Motivated by a 'feel good factor' (social and psychological wellbeing), self-education and financial benefit. 
They feel responsible for guests and desire their integration, single hosts", "label": 1 }, { "main_document": "are stopped so as to focus on adjustment at the expense of other causes and to their detriment So too adjustment has often been charged with being culturally insensitive, failing to recognise the diversity and culture of individual countries with their prescriptions and instead streamlining countries economies and disregarding \"that some individuals may not be trying to maximise their own economic self interest\" The increasing migration of workers bankrupt by the adjustment's agricultural reforms led to an insurgence in coca production and the illegal drug trade in Peru became rampant and all invading. Structural adjustment \"undermined the legal economy, reinforced illicit trade and contributed to the recycling of \"dirty money\" towards Peru's official and commercial creditors\" In effect drug profits were the way Peru managed to pay off its escalating debts, the total debt had doubled as a result of adjustment and debt repayments increased from $60,000,000 to $150,000,000 a month OXFAM UNRISD (1995) London: UNRISD. p.43 Chossudovsky, M. (1997) Impacts of IMF and World Bank Reforms. Zed Books Ltd; London & New Jersey. p.211 ibid. p.204 Bracking charges adjustment with being a form of \"political domination\" Essentially the argument is that structural adjustment undermines the sovereignty of nation states and their \"sacrosanct right\" Indeed Adedeji writes that \"independent policy making and national economic management were considerably diminished and narrowed in Africa\" So the implication is that non-adjustment motivated priorities have to fall by the wayside in favour of structural adjustment reforms. There is no arena for genuinely domestic concerns and thus there is a demise of politics replaced by a mimicked version of neo-liberal economic ideals. 
Does the Peruvian illegal drug industry prosper because of the strict adjustment conditions that demand government monies focus, not on internal policing, but on economic growth, national expenditure and income? Bracking, S (June 1999) Structural Adjustment: Why it wasn't necessary & why it didn't work ibid. Simon, D 'Neo-Liberal, Structural Adjustment, Poverty Reduction Strategies' in Desai V & Potter R. (Eds.) (2002) Adedeji, A in Kanbur, R Structural adjustment reforms often spark "unrest and violence" In Peru the 'Fujishock' of the 1990s left the government having to choose how to react to the inevitable social protest and general discontent which could potentially have a detrimental effect on their rule as well as an unsettling effect on the country in general. Faced with what they perceived as a potential "civil dissent" During his reign Fujimori continued with militaristic intervention, seriously undermining any concept of a democracy, and the resulting "curtailment of civil liberties" This could lead one to suggest that structural adjustment leads to a demise of proper politics and democracy. Has adjustment led Mugabe to adopt the strong, militaristic style he displays today, twenty-five years after he declared independence for Zimbabwe, became their president and formed the country's relationship with the World Bank? UNRISD (1995) London: UNRISD. p.42 Chossudovsky, M. (1997) Impacts of IMF and World Bank Reforms. Zed Books Ltd; London & New Jersey p205 ibid. Increasingly, where governments initiating structural adjustment have evolved into authoritarian states, there is an implication that this form
along with family affections" While this shift must be seen in part as due to the broader development of the category of 'middle class' over time, as can be seen, the 1832 Act was a turning point in its definition and thus its emergence. In addressing the issue of what were 'middle class' values, then, it appears that the 1832 Act throws up yet another ambiguity in that they were not static and ascribed but altered over time, as, inherently, did any 'class' in itself Wahrman, Dror, '"Middle Class" Domesticity Goes Public: Gender, Class and Politics from Queen Caroline to Queen Victoria', Seed, 'From 'middling sort' to middle class, pp.124 It appears therefore that the emergence of the category of 'middle class' was an extremely complex and, to some extent, unaccountable process. The evidence I have presented certainly implies the dangers of the 'weak notion of "the rise of the middle class"' as part of the industrialisation of Britain in obscuring its diversity and constantly changing nature. As Seed points out, there was 'no central institution where the middle class was embodied and its inner structure revealed' The ways in which the category 'middle class' emerged during the age in question returns, it seems, to the issue of demarcating the term 'class' itself. What constitutes or causes any such social category to come into being resists explanation exactly because it is a historical process and thus changes over time and with reference to personal representation because it is not an objective reality, as is clear in this case with the transformation from a public to a domestic type. Seed, 'From 'middling sort' to middle class, pp.125 Seed, 'From 'middling sort' to middle class, pp.134
On the other hand, large and medium-sized capitalist producers had 6.4%, and the individual peasants enjoyed the property rights to as much as 48.7% (Ryan, 1993). The "higher forms of production" that the leaders wanted to impose failed to bring success to the Sandinista-designed mixed economy. The ten-year rule was marked by both successes and failures, due to misjudgements of the actual situation and the occasional implementation of conflicting policies concerning agriculture. It seems fair to say that the agrarian reform left a major part of the peasantry disillusioned and dissatisfied (Martinez, 1993). Also the economy as a whole did not thrive, in spite of the original growth from 1980 through 1983, as such problems as sharp war-driven inflation and increasing foreign debt emerged. The Sandinistas could be considered as quite flexible and rather pragmatic, as throughout their rule they applied a series of adjustments or even complete changes of strategies (e.g. significant reduction of public employment, elimination of subsidies for basic foods). The economy was in crisis. Still they were convinced that they would manage to improve things, and in fact the FSLN 1990 election slogan was worded: "Everything will be better". But they did not have a chance to put it into practice as they lost this election.
The offset in distance will be equal to the distance the wave travelled inside the transducers. The time will be measured on the scope and the distance by a Vernier calliper. Theoretically, a good approximation to the speed of a surface wave is: Where For various materials the time for the Rayleigh wave to travel between the transducers was plotted against the separation of the transducers. Some materials did not show signs of having a surface wave. The ones that did are shown below along with their Rayleigh wave speed: The measured velocities compare well with the expected values and also the predictions of the theory. The uncertainty in the experiment covers the actual value, which gives confidence in the accuracy of the measurements. The offsets for the linear relation between distance and time were all similar, as expected, since the waves travelled through the same pair of transducers. The materials in which surface waves were not detected had differing densities. In the case of the Perspex block there was no density change and thus no interface for the mode conversion to make surface waves. The denser objects like Copper would have made the actual angle less than what is necessary for the second critical angle. A variable angle transducer would have allowed the incidence angle to be set so as to maximise the amplitude of the surface wave, and would also have extended the test range over a wider range of material densities. The couplant was an issue with this experiment, as different thicknesses yielded different amplitudes, and it was not experimentally clear whether the lack of a surface wave was due to the material being of unsuitable density or to the couplant being incorrectly applied. A non-contact transducer would supply a more repeatable experimental result, or perhaps the experiment could be performed in a tank of couplant so that it was constant.
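The speed-extraction procedure described above (a linear fit of arrival time against transducer separation, whose slope is the reciprocal of the wave speed and whose intercept is the transit delay inside the transducers) can be sketched as follows. The separations, delay and the 2.9 km/s aluminium-like speed are illustrative assumptions, not the measured data:

```python
import numpy as np

def rayleigh_speed(separations_mm, times_us):
    """Fit arrival time against transducer separation.

    The slope of the degree-1 fit is 1/velocity; the intercept is the
    extra transit time inside the transducers (the distance offset
    described in the text).
    """
    slope, intercept = np.polyfit(np.asarray(separations_mm, dtype=float),
                                  np.asarray(times_us, dtype=float), 1)
    velocity = 1.0 / slope  # mm/us, numerically equal to km/s
    return velocity, intercept

# Illustrative readings only: a 2.9 km/s surface wave with a 4 us
# delay accumulated inside the transducer wedges.
seps = [20.0, 40.0, 60.0, 80.0]            # separation in mm
times = [4.0 + s / 2.9 for s in seps]      # arrival time in us
v, t0 = rayleigh_speed(seps, times)
```

Plotting time against separation for each material and comparing the fitted intercepts, as the text does, is then just a matter of repeating the fit per data set.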
As the Rayleigh wave generated will only have an effective depth equal to its wavelength, it will be possible to slide the transducers along the Aluminium object, either side of the groove, until the amplitude of the Rayleigh wave detected becomes insignificantly small. By measuring the depth of the groove at both ends and the width of the object, and knowing the groove is linear, a function of groove depth can be found. Once the Rayleigh wave amplitude becomes imperceptible, the distance of the transducers from the start of the groove can be found using Vernier callipers. Then the groove depth is calculated and the wavelength
demonstrated how grammaticality judgements can be primed by recent exposure to grammatically 'borderline' structures, suggesting that grammar is less a fixed absolute, more a collection of memories and probability judgments. The Competition Model (MacWhinney, 2001), meanwhile, models the relationship between form and meaning as multiple cues in the input competing to determine comprehension, and characterises learning as the modification of the weighting between these cues in the light of experience, and as such provides a complementary framework of the functional application of connectionist theories. To sum up, the advantages of a connectionist model of language are that it suggests how neuroscience can relate to linguistics; it describes language representation, processing, variability and learning in one system; it is tangible and testable, unlike many other theories, generating solid data and predictions that can be compared to human data; computer models have produced some surprising evidence that has challenged linguists' assumptions about the emergence of linguistic competence; finally, it concords with the instincts of many teachers, learners and the linguistically naïve. As a very young theory, it is not clear whether connectionism will replace, absorb or merely accompany other theories. Strong judgements must be withheld, and in many aspects of language acquisition it still has a lot to prove, but there is clearly considerable potential for fruitful future exploration. These reasons, combined with the fact that it is a radical new theory with relatively unexplored pedagogical implications that may confirm or challenge the classroom practice informed by earlier theories, form the rationale of this assignment. As explained above, connectionism represents a considerable departure from previously held theories of second language acquisition. How, then, can a teacher apply these principles to the classroom?
I will focus on what I consider the three most important aspects of teaching that connectionism draws attention to. Firstly, connectionism puts input This may come as no surprise to", "label": 1 }, { "main_document": "case so convincingly for Ancient Greek cosmology when he knows it to be false. After all, if Parmenides truly wishes to prove the mortal way of viewing the world wrong, \"then it seems an unlikely and unwise procedure to construct the most persuasive account possible.\" The 'traditional' interpretation also fails to address the possible reason why Parmenides arranged his Ways in that particular order. Why, after revealing the ultimate truth about the world, should he choose to follow it by discussing what mortals erroneously believe to be true? The impact of the brilliant logic of the goddess' argument in the Way of Truth appears somewhat lessened when followed by the patently false Way of Seeming. W. K. C. Guthrie, 1980, p71 In his 1989 essay, Karl Popper suggests that the reason for Parmenides' arrangement of the Ways is that he wished to break from the traditional model of presenting one's theories. In Ancient Greece, as now, the usual method is to systematically refute an accepted belief before presenting the reader, or listener, with the new one, which avoids all those obstacles which proved to be the downfall of the first theory. Popper suggests that Parmenides' choice to put forward the Way of Truth before the Way of Seeming is a deliberate \"inversion of the 'traditional style.'\" Though this attempt to place Parmenides in context is laudable, the inclusion of the Way of Seeming still appears superfluous. The Way of Truth was, in fact, so groundbreaking that it could easily have stood alone, and this would also break with traditional style Popper implies Parmenides was so eager to avoid. K. Popper, 1998, p90 John Burnet's attempt to reconcile the two halves of Parmenides' work also appeals to the context in which Parmenides was writing. 
He claims that the Way of Seeming is, in fact, written as though it were from the point of view of a Pythagorean philosopher. The purpose of its following on from the Way of Truth is to emphasise its falsity, throwing its wrongness into stark relief. Burnet gives several reasons for this suggestion, the central one being that Parmenides was himself part of the Pythagorean School of philosophy until he divined the Way of Truth. Burnet speculates that the use of the plural here indicates that this Way is not limited only to the Pythagorean philosopher. J. Burnet, 1920, p184-5 To my mind, the most satisfactory explanation of the link between the first and second halves of Parmenides' poem is that put forward by Karl Popper in his 1988 essay 'Can the Moon Throw Light on Parmenides' Ways?' He points out that Parmenides was the first to discover that the waxing and waning of the moon was in fact a trick of the light, and as a consequence the moon remains a spherical solid no matter what stage it has reached in the lunar cycle. So, that which appeared to change with the passage of time was, in fact, not changing at all. Popper speculates that this ground-breaking discovery is likely to have had
The Reserve Bank of India uses different indicators of forex reserve adequacy and tries to arrive at an optimal level through an observation of all the indicators taken together and not merely one. The import-cover measure determines the adequacy of reserves as the number of months of imports that can be financed by the country's existing reserves, with three to four months considered adequate. This is also known as the 'rule of thumb', as it is propounded by the IMF and is most commonly accepted. The ratio of reserves to short-term external debt is viewed as a useful indicator to judge the level at which investors lose confidence in the economy. It has been recognized as a key indicator in determining the extent of reserve holdings. It aims to capture the risk of flight of capital from the country in the event of a financial crisis. Apart from the above methods, any new method encountered during the study would be incorporated. Of particular interest is the Guidotti Rule, which postulates that the ratio of reserves to short-term debt augmented with a projected current account deficit (or another measure of expected borrowing) could serve as a useful indicator of how long a country can sustain external imbalance without resorting to foreign borrowing. As a matter of practice, the Guidotti Rule suggests that countries should hold external assets sufficient to ensure that they could live without access to new foreign borrowings for up to twelve months. This implies that the usable foreign exchange reserves should exceed scheduled amortization of foreign currency debts (assuming no rollover during the following year). NULL HYPOTHESIS: H0: The Foreign Exchange Reserves in India are not in excess of the adequate amount. ALTERNATIVE HYPOTHESIS: H1: The Foreign Exchange Reserves in India are in excess of the adequate amount. The duration of my study would be from the 1991 crisis to July 2007.
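As a rough numerical sketch of the two adequacy indicators discussed above (import cover and the Guidotti Rule); the figures used are purely illustrative, not actual RBI data:

```python
def import_cover_months(reserves, annual_imports):
    """Months of imports the reserves can finance; the IMF 'rule of
    thumb' treats three to four months as adequate."""
    return reserves / (annual_imports / 12.0)

def guidotti_ratio(reserves, short_term_debt, projected_ca_deficit=0.0):
    """Guidotti Rule: reserves over short-term debt falling due within
    twelve months, optionally augmented by a projected current-account
    deficit; a ratio of at least 1.0 is the suggested floor."""
    return reserves / (short_term_debt + projected_ca_deficit)

# Illustrative figures in USD billion (hypothetical, not RBI data):
cover = import_cover_months(reserves=200.0, annual_imports=300.0)
g = guidotti_ratio(reserves=200.0, short_term_debt=80.0,
                   projected_ca_deficit=20.0)
```

On these invented numbers the economy holds eight months of import cover and twice the Guidotti floor, i.e. both indicators would point to reserves above the adequate level, which is the kind of comparison the null hypothesis above would test.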
1) The IMF (International Monetary Fund) Reports 2) Reserve Bank of India Bulletins (1990-2006), Reserve Bank of India publications and the official website of the RBI: 1) Dr. Charan Singh's Report on Foreign Exchange Reserve Adequacy, SIEPR Policy Brief (2005), Stanford Institute of Economic Policy Research, and Singh, C. (2005): "Should India use Foreign Exchange Reserves for financing Infrastructure?" SCID working paper no. 256, Stanford University. His findings are as follows: The rising levels of FER (Foreign Exchange Reserves) have succeeded in infusing necessary confidence both in the market and among policy makers. However, neither the capital inflow to India nor the size of FER is disproportionately large when compared to some
The \"combination of inexperience and arrogance that it said to have characterized the leadership of the Chinese Communist Party\" This may have led the leadership into falling into Mao's train of thought where they let \"their own revolutionary aspirations overwhelm their appreciation of the material and ideological constraints on what was possible to do at that point in time\" Joseph, \"A Tragedy of Good Intentions.\" p. 424. Joseph, \"A Tragedy of Good Intentions.\" p. 433. The leadership then, Mao included, were far too slow in realising how much the population were suffering from the GLF, predominantly because of their na In fact late in 1958 \"the leaders did acknowledge that the GL had produced some disorganisation and problems; but they remained blind to the magnitude and nature of the difficulties that were developing\" This may not have been true of the entire leadership however, there is evidence that some leaders were just too scared to stand up to Mao. Many were desperate to keep hold of their positions of power and knew that Mao would not react kindly to any criticism that was put his way. Lieberthal, K. (1995) New York; London: W.W. Norton. p. 105. One glaring example of this is Mao's treatment of the Minister of Defence, Marshall Peng Duhuai, who was one of the few people to report back to the Party that things were not actually as good as they appeared to be when he toured parts of the country in the autumn of 1958. 
This one incident had a dramatic effect on the success of the GLF as it meant that a new upsurge in radicalism took place in 1959 and 1960 as \"everyone was eager to demonstrate that they leaned to the 'left' instead of to the 'right'\" This has led to the argument that if enough senior leaders had dared to stand up to Mao \"the famine could easily have been arrested after the first year of the GLF\" Becker, Lieberthal, Becker, This fear of voicing criticism was not just a problem at the senior leadership level but all the way down the ranks of the Party. There is evidence", "label": 1 }, { "main_document": "and is being arranged by the family. However the care home itself needs consideration e.g. access to church on Sundays. All the theory's and examples included in this essay explain how sociology and psychology can help the healthcare professional to understand the effects of illness and service delivery on the individual within the context of their family, society or culture. The scientific studies of both sociologists and psychologists provide the health care professional with evidence to support their understanding of how the individual is affected by illness and service delivery. They do this by looking at what aspects of the individuals life is affected for example their role and explain why this occurs by theories such as classical conditioning, social construction and common sense models of health and illness. Without consideration of these theories, interventions would be based purely upon the biological effects of illness on the individual. The influence of sociology and psychology have allowed the health care professional to work in a more client centred and holistic way by explaining the possible effects of illness on the individual. This is crucial if health care is to meet all the needs of the individual and not just the biological ones.", "label": 1 }, { "main_document": "of the company if it does really well than he is now owning all of it. 
For the owner's concerns over loss of control of the company I can only say this: if the company does do well and more people are helped as a result of it, then surely that is reason enough for the owner to go for expansion through a Venture Capitalist. The reason why Venture Capital is the most likely source of finance is that, to a Venture Capitalist company, PCD Maltron is an attractive company to work with. Venture Capitalist companies are looking to work with companies with the potential to be leaders in their market sectors. PCD Maltron has the potential to do this, if they are backed financially and gain some management experience. Also, the lead time should be relatively short (less than two years) since the company already exists but just isn't very well developed; this makes PCD Maltron attractive to Venture Capitalists who will be looking for a return on their investment. There is also a clear exit route available, which the Venture Capitalist company will need to be able to see: a merger (trade sale) with the low-cost assembler. The market strategy needs to exploit the awards the company has won for its products and the recognition it has achieved. Backing up claims of reducing RSI, with associated cost reductions for particular cases, will attract businesses to invest in PCD Maltron's products. Also, the web site idea is very good; the site should definitely be updated and extended to offer on-line buying. To get the right kind of people to the web site there needs to be more advertising on other web sites that have content on healthcare, RSI and disability communities, for example. Distribution is a big part of marketing, as this is how the product will actually reach the end-user. As described earlier, it is desirable to get the company's products onto businesses' preferred vendor or approved ergonomic products lists. There is, however, another way to reach the masses, and it is through the OEMs.
For example a Computer Manufacturer (Dell, IBM etc) could incorporate a Maltron keyboard as part of a standard PC package. Similarly with Office Furniture Manufacturer (for example, Herman Miller), the Maltron keyboard could be sold as part of a whole ergonomic workstation solution. The main advantage of selling through an OEM like this is that a far greater number of both customers (businesses) and end-users are reached. The greater the number of customers reached, the greater the chance the product ends up on the approved product list or preferred vendor lists of other businesses. The more end-users who use the product, the greater the awareness for the product will be generally and hence the more likely employees will be to choose it from those lists. The products need more incremental innovations/improvements to stay as market leaders once entry to the market on a more global scale has been achieved. One thing that struck me as alarming was", "label": 1 }, { "main_document": "The objective of this study was to analyse chocolate purchases with respect to individual supermarkets, types of chocolate and customer characteristics. The independent sample t-test was chosen to test the null hypothesis that 'customers with kids spend the same amount on chocolate as those without' because it compares two means within the same variable. It tests whether the two means are equal or alternatively if they show a significant difference. The T-test is based on the assumption that the sample mean is normally distributed across the sample. The ANOVA F test was chosen to find out whether the amount spent on chocolate is significantly different across 3 categories of TV watchers (light, medium and heavy). This test is appropriate because it can compare several means simultaneously while avoiding the error that may occur in performing multiple t-tests. The null hypothesis is that all the means are equal and it is rejected if one of them differs. 
The F (Fisher) ratio compares the variance within sample groups with the variance between groups and is the basis for ANOVA. If the calculated F value is more than the critical F value then H0 (the means are equal) is rejected and H1 (there is a significant difference between the means) is accepted. This test assumes that the sample is normally distributed and has equal variances. The P value is used to test the statistical significance of the T and F values. If the P value is less than alpha (the chosen significance level) then the results are statistically significant and can therefore be used to draw conclusions. The table below shows the average expenditure for 3 different chocolate types across 6 supermarkets by age group. The following information is represented in a series of graphs and discussed below. The graph to the left shows that for every type of chocolate, expenditure is highest for the 56-70 age group and lowest for those aged 25-40. This however varies within each supermarket, as demonstrated in the rest of the graphs on the page. Focusing on Sainsbury, organic chocolate expenditure follows the same pattern as above, whereas fair trade and standard chocolate vary. Fair trade chocolate has the highest expenditure within the 41-55 age bracket. Standard chocolate expenditure is also highest within the 41-55 age range. The 71-85 age range does not appear on the graph at all because they do not shop at Sainsbury. The standard deviation for fair-trade and standard chocolate across all age groups is considerably higher than the standard deviation for organic chocolate, indicating a larger variation from the sample mean for these two types of chocolate. A large variation suggests that the precision of the sample, in terms of being able to replicate the values if the research was conducted again, is low. The width of the confidence intervals and the standard error values also reflect the reliability of the mean.
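The F ratio described above, between-group variance over within-group variance, can be computed by hand as a check on what the test actually does. The chocolate-spend figures for the three TV-watcher groups below are hypothetical; `scipy.stats.f_oneway` would return the same statistic together with its p-value:

```python
import numpy as np

def one_way_f(groups):
    """One-way ANOVA F ratio: mean square between groups divided by
    mean square within groups (a hand-rolled sketch of the statistic
    that scipy.stats.f_oneway computes)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    # Sum of squares between groups (weighted by group size)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # Sum of squares within groups
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical weekly chocolate spend for light/medium/heavy TV watchers:
light = [2.0, 2.5, 3.0]
medium = [3.5, 4.0, 4.5]
heavy = [5.0, 5.5, 6.0]
f_stat = one_way_f([light, medium, heavy])
```

A large F, as here, means the group means spread out far more than the scatter within each group would explain, so H0 (all means equal) would be rejected once F exceeds the critical value for the relevant degrees of freedom.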
The wider confidence intervals and higher standard error of mean values indicate a less reliable mean. The widest confidence interval is for fair trade chocolate within the 56-70", "label": 1 }, { "main_document": "million pounds annually by merging with Warner However, the creation of a monopoly may also be a danger for the industry. In a monopoly, the price-making firm chooses to produce at the profit-maximising output where its marginal cost equals its marginal revenue. In Fig2, prices are therefore pushed up to Pm, far above the productively efficient price Pc, and quantity falls from Xc to Xm. Price is yet higher than in an oligopolistic structure, with a lower level of output, and the monopolist enjoys abnormal profits . The quality of the product could also drop: unmotivated by competition, the monopolist need not invest in finding new artists, and could offer consumers a reduced choice of music. \"Industry brief: Music recording\", Oligopoly Watch \"Music's Brighter Future\", The Economist, 30 October 2004, p.92 Matheson, Clare, \"Mixing Up the Music Industry\". BBC. Internet. Accessed at: Matheson, Clare, \"Mixing Up...\" In addition, the monopoly causes a netba welfare loss. In perfect competition, consumer surplus In a monopoly, however, consumer surplus shrinks to the area , and producer surplus is the abnormal profits . In addition to this welfare transfer from consumer to monopolist, there is a total welfare loss of . The development of a monopoly in the music industry could benefit producers to the expense of consumers, and considerably reduce the market's efficiency and creativity. The difference between the price one is prepared to pay for a good and the market price DEFINE!!! 
However, the attempts of the \"big four\" to join forces may be \"desperation merger[s]\" With independent labels \"fighting to stop further consolidation among the majors\" Instead, a legal digital music market is developing via the internet: over 180 legal download services were launched globally in 2004 The digital market is eroding barriers to entry into the music industry. While the major record labels have controlled CD distribution in the past, this distribution can now occur freely online; manufacturing and production costs are also lowered by internet technology The internet is also an effective marketing tool that has allowed artists to bypass the major record labels despite these firms' tight control of the media The legal digital market is consequently easy to enter, and should foster more competition in this industry. \"A Desperation Merger in a Fading Industry,\" The Economist. Internet. Accessed at: \"Music's Brighter Future\", The Economist International Record Industry Organisation, \"IFPI Releases Definitive Statistics on Global Market for Recorded Music\". IFPI. Internet. London, August 2, 2005. Bockstedt et al, \"The Move to Artist-Led...\" Bockstedt, Jesse C., Kauffman, Robert J., and Riggins, Frederick J., \"The Move To Artist-Led Online Music Distribution: Explaining Structural Changes In The Digital Music Market\". Internet. Accessed at: This new market with numerous buyers and sellers and low barriers to entry hence presents traits of a perfectly competitive industry. Indeed, prices are standardised throughout the market, with all firms selling songs at 99 US cents each The benefits of perfect competition are substantial. The internet facilitates the flow of information between producers and consumers, fostering perfect knowledge and allowing consumers to enjoy a more elastic PED Bockstedt et", "label": 0 }, { "main_document": "Hotel. 
This demonstrates that the Wychwood Forest Hotel isn't producing excessive amounts of waste, but more could be done to recycle. After implementing a strict recycling programme, the Hyatt Regency Chicago now recycles 70% of their waste, and has reduced waste collection charges by 50% (Enz and Siguaw, 1999). There are also some poor systems practiced by the Wychwood Forest Hotel. Although all glass and cardboard from public areas is recycled, the waste produced in guest rooms is all sent straight to landfill, as the separation is seen as a poor use of a chambermaid's time. However, guests could be encouraged to recycle themselves in the rooms. As long as initiatives are explained, recycling bins could be placed in rooms (Green Globe, 1994). The menu in the restaurant at the Wychwood Forest Hotel is extensive and varied. Although this is advantageous to the guests, it could be creating large amounts of waste in the kitchen, both from unused food that is no longer fit for consumption, and from meat trimmings and vegetable peelings. Another major concern for the hotel must be their energy and water consumption. It is possible for them to benchmark their usage against other hotels of a similar type. The Wychwood Forest Hotel uses 120 kWh of electricity per square metre annually, which is an excessive level of usage, with 70-80 kWh/m² being a satisfactory level. The hotel also uses 232 kWh per square metre of gas. This is also an excessive level of usage, with a satisfactory level being 190-200 kWh/m². It's predicted that 20% of energy used in the hospitality industry could be saved (Webster, 2000). The water usage is also high, but not excessive, with consumption per guest at 0.56 m³. The calculations for the above figures can be found in appendix 1. Only 5% of water used in hotels is actually consumed by people, while the rest is mainly used in cleaning and the preparation and cooking of food (Webster, 2000).
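The benchmarking arithmetic above, annual consumption normalised by floor area and compared against a satisfactory band, can be sketched as follows; the function names are my own, and the benchmark upper bounds are taken from the figures quoted in the text:

```python
def usage_per_m2(annual_usage_kwh, floor_area_m2):
    """Annual energy use normalised by floor area, the intensity
    figure used for benchmarking against similar hotels."""
    return annual_usage_kwh / floor_area_m2

def excess_over_benchmark(actual_kwh_m2, benchmark_kwh_m2):
    """Percentage by which the actual intensity exceeds the upper
    bound of the satisfactory band (0 if within or below it)."""
    return max(0.0, (actual_kwh_m2 - benchmark_kwh_m2)
               / benchmark_kwh_m2 * 100.0)

# Figures from the text: electricity 120 kWh/m2 vs an 80 kWh/m2 upper
# bound; gas 232 kWh/m2 vs 200 kWh/m2.
elec_excess = excess_over_benchmark(120.0, 80.0)
gas_excess = excess_over_benchmark(232.0, 200.0)
```

On the quoted figures the hotel's electricity intensity is 50% over the satisfactory band and its gas intensity 16% over, which is why the text singles out energy as the main area for improvement.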
Also, only 10% of the water used in rooms is used by the guest; the rest is used for cleaning by chambermaids. An important measure that can be taken is to check for, and fix, all leaks. It is estimated that 25% of the water in the mains system is lost through leakages (Webster, 2000). Although there is some effort being made by the Wychwood Forest Hotel to act in an environmentally friendly manner, there is considerable room for improvement, particularly in the areas of energy and water consumption. Another area that needs consideration is the expansion of their recycling practices. Small changes to their systems will help to reduce the waste produced, and the energy and water consumed. Not only will these changes benefit the environment, but they will also result in reduced costs for the hotel. As previously mentioned, the Wychwood Forest Hotel has some poor environmental practices that need to be improved. They are problems that have been overcome by other organisations within the industry. Guest waste is not recycled at the Wychwood Forest Hotel, as it's considered a poor use of chambermaids' time. However, the Colony Hotel in Maine has
The Icelandic Government banned whaling in its waters to allow time for population recovery. The law was repealed in 1928. By 1935 Icelanders had set up their own commercial whaling operation for the first time. They hunted mostly Sei, Fin and Minke Whales. In the early years of this operation Blue, Sperm and Humpback Whales were also hunted, but this was soon prohibited due to decimated numbers. Between 1935 and 1985 Icelandic whalers killed around 20,000 animals in total. Unlike Norway, Iceland did not protest against the IWC moratorium and was therefore limited to whaling conducted under the name of scientific research. Between 1986 and 1989 around 60 animals per year were taken. However, under strong pressure from an international community not convinced that the kills were truly for scientific purposes (particularly because the meat was sold to Japan), Iceland ceased whaling altogether in 1989. Following the 1991 refusal of the IWC to accept its Scientific Committee's recommendation to allow limited whaling, Iceland left the IWC. With significant support from its people, Iceland rejoined the IWC in 2002. This allowed it to restart a programme of whaling in the summer of 2003. Iceland presented a feasibility study to the 2003 IWC meeting to take 100 Minke Whales, 100 Fin Whales and 50 Sei Whales in each of 2003 and 2004. The primary aim of the study was to deepen the understanding of fish-whale interactions - the strongest advocates for a resumed hunt are fishermen concerned that whales are taking too many fish. The hunt was supported by three-quarters of the Icelandic population. However, under the terms of the convention, the Icelandic government issued permits for a scientific catch. In 2003 Iceland took 36 Minke Whales from a quota of 38. In 2004 it took 25 whales (the full quota). In 2005, the government issued a permit for a third successive year - allowing whalers to take up to 39 whales. Japan is the third country that continues to whale.
Harpooning of whales by hand began in Japan in the 12th century, but it was not until the 1670s, when a new method of catching whales using nets was developed, that whaling really began to spread throughout Japan. In the 1890s Japan followed international trends, first switching to modern harpoon whaling techniques, and eventually to factory ships for mass whaling. In the postwar", "label": 1 }, { "main_document": "real interest of Winstanley lies in the totality of his challenge to established beliefs and systems of value...whatever his starting point as a religious thinker, he was not a normal kind of millenarian...\" Due to Aylmer's stance that Winstanley's \"communism and his theology are literally inseparable in his writings from 1649 on, even though the emphasis varies in different pamphlets\" Sabine, George Juretic, \"Digger No Millenarian: The Revolutionising of Gerrard Winstanley\", in Ibid, p. 272 Ibid, p. 269 Ibid, p. 269 Mulligan et al., \"Winstanley: A Case for the Man as He Said He Was\", p. 74 \"The spirit of Winstanley's writings throughout had more in common with the apocalyptic visions of his puritan contemporaries than with modern socialist or communist ideologies.\" Ibid, p. 65 G. E. Aylmer, \"The Religion of Gerrard Winstanley\", in J. F. McGregor and B. Reay (ed.), Ibid. p. 92-93. A crucial part of one's religious outlook is constituted by how one understands the nature of God, and as with most other things about Winstanley, this is controversial, and is centred on two main positions-God is immanent as opposed to God being immanent and transcendent. The adherents of the former include Zagorin This view of Winstanley's \"God\", is most agreeable to their take on Winstanley as a whole, and disposes of the potential problem that the belief in a transcendental God might pose to their reading of Winstanley. 
Whereas those who take the line that Winstanley was influenced by theology throughout his writing career assert his belief in an immanent and transcendent God. While both positions can be substantiated by a reading of Although the idea that God is immanent is littered throughout the text It seems to me that if Winstanley believes in a Creator God It not only seems absurd to think that God must be bound by an either/or straitjacket, but it is also simplistic to think in such a strict dichotomy. Zagorin's understanding of Winstanley's path follows the progression "mystic, pantheist, material rationalist". Seen in Perez Zagorin, "Winstanley knew no transcendent God, only immanent reason", in Hill, "The Religion of Winstanley", p. 30. See footnote 7 for Hill's position. Zagorin, "Therefore Christ hath the honour above his brethren, to be called the spreading power, because he fills all with himself..." p. 151/ "...for he is in all and acts through all" p. 160/ "So that, this one Almighty power be spread in the whole creation..." p. 166 in Sabine, Ibid, p. 164-165 Ibid, p. 204 "If any man be offended here, let him know, I have obeyed Hill, "The Religion of Winstanley", p. 30 However, Winstanley's belief in a transcendent and immanent God does not make him less of a religious radical, despite what Mulligan et al. seem to imply Mulligan et al. appear keen to make Winstanley out to be a traditional millenarian, and this is reinforced by their interpretation of Winstanley's understanding of the Fall and Restoration, which is fairly similar to the traditional Christian understanding of it. Unlike Hill's radical Winstanley who believed that the introduction of private property constituted the Original
Catling concluded that "Cypriot copper was used by the Minoan and Mycenaean economies, but not on the large scale that I, for one, had supposed" (1991: 9). Contrary to this, Muhly has argued that Mycenaean Greece and Minoan Crete must have been copper-importing regions since both regions have failed to yield any remains of the heavy bronze smithing tools used in copper-production which have been found in Cyprus and Sardinia (1991: 189). Furthermore, evidence from Kommos on the southern coast of Crete testifies to the import of Cypriot copper, in the form of oxhide ingot fragments, while the discovery of a pithos at the same site provides additional support for trade relations between the two islands. The prominence of Cypriot copper has been demonstrated through scientific analysis of 354 ingots found on the Uluburun shipwreck (figure 5) to determine their provenance, almost all of which have been interpreted as Cypriot in origin (Karageorghis 2002: 30). However, these results should be treated with caution due to the nature of the technique and, of course, the potential for metal recycling. Lead isotope analysis, which has been used in this instance, is able to exclude potential origins for materials by comparing samples with raw materials of known provenance. Each source will have a characteristic isotopic signature and, if the results from the samples do not fall into the same isotopic field, we must discount a common source. Where this becomes problematic is in the interpretation stages. Unfortunately, in cases where the results do fall within the isotopic field of a known source, there is no guarantee that the sample is definitely derived from this source, given that two or more geographically distinct ore deposits may well fall in the same isotopic field (Muhly 1991: 190).
For example, analysis has demonstrated that a number of Sardinian ingots fall into the Cypriot isotopic field (Muhly 1991: 189), suggesting that they (or at least the raw materials used) were imported from Cyprus but Muhly has proposed that ore deposits in the west and central Mediterranean may have similar lead profiles to those from Cyprus (1991: 190). Gale disputes the criticism of the approach, arguing that any overlaps in isotope profiles can be eradicated when all three available lead isotope ratios are utilised (1991: 205) and accepts that most of the copper ingots found throughout the Mediterranean could well have come from Cypriot sources. Further research into lead isotope studies is needed before any substantial conclusions can be drawn. However, Manning and Hulin have dismissed the need for provenance analysis, deeming it \"almost irrelevant to a social archaeology of material culture, since they highlight patterns of production and distribution rather than consumption\" (2005: 271). They have argued that it is not the provenance of an artefact that dictates its value or status as an 'exotic' and therefore, highly desirable product, instead it is related to", "label": 1 }, { "main_document": "Environmental Impact Assessments (EIAs) are defined as \"a formal process used to predict the environmental concerns of any development project.\" (UEM). This means that the potential problems are foreseen and addressed at an early stage in the planning of projects. Biophysical, social, and other relevant effects of development are taken into account, and decisions made accordingly in order to minimise any negative effects of development (Senecalal. 1999). Within the EIA processes, scoping is becoming an increasingly used tool to help with identification of potentially significant impacts and also to improve EIA quality. The process that is known as scoping is one of the initial stages in an EIA. 
It involves the identification of the major issues that are likely to arise during the EIA, once the potential impacts have been identified and prior to the commencement of detailed studies (RTPI 2001). Scoping is carried out primarily in order to focus the assessment on the environmental impacts that are likely to be most significant. It is also responsible for identifying the depth of research that is required in these identified areas, and for creating a forum where methods of impact assessment can be discussed and decided upon. Impacts that are thought to be potentially significant are studied, and the categories into which they fall are identified. Compliance with the statutory requirements is ensured. Those impacts that are considered not to be significant are eliminated, to refine the focus to the most significant impacts (Glasson et al. 1999). Here, the early inclusion of all parties that are to be involved is encouraged and the content of the Environmental Statement is determined. If requested to do so, the Planning Authority is required to give a written opinion on the information that is to be assessed and included in the Environmental Statement. A developer can identify the main issues from the Scoping Report and, using this information, can produce a thorough EIA (Reed 2004). The process of determining the scope of an Environmental Survey should include utilising the regulations and relevant criteria and checking the relevant plan provisions. Reference should be made to published guidance documents such as the DETR guides. Preliminary contacts with Local Planning Authorities and statutory consultees, as well as other expert bodies, should also be made (RTPI 2001). Once the main issues are identified, possible changes can be predicted and assessed with the use of baseline information. Baseline information can be collected by primary research or a literature review. Once the Scoping Report has been carried out, the significant impacts need to be predicted and assessed.
With the use of the baseline information and knowledge of the proposed development, the effects of the project can be predicted for the constructional and operational phases. Some projects will experience more environmental effects during the construction phase, and others may experience more effects during the operational stage; this also needs to be taken into account, along with any inter-relationships between impacts. Making sure that any indirect, secondary and cumulative effects are not overlooked should be a key focus of impact scoping, as well as the more obviously important issues. This should
This can provide insight into the process of colonisation in terms of sea-faring ability and the distances within the capabilities of the founding population. Research in this area can also lead on to further investigations which lie beyond the scope of this discussion, such as the reasons why the island was colonised. Was it the result of demographic stress forcing the mainland population to 'marginal areas' (Cherry 1981: 59) or simply part of the spread of agriculture, with no distinction made between islands and mainland Europe (Finlayson 2004: 21)? In the words of Donne, "no man is an island, entire of itself; every man is a piece of the continent, a part of the main". An island, likewise, is not an isolated entity bounded by an impassable sea, but an extension of the mainland. In order to fully understand the process of island colonisation, the archaeological evidence needs to be studied within the context of a wider cultural framework encompassing both the mainland and the surrounding seascape. John Donne, 'Meditation XVII'.
As [e] Words are not restricted to a consonant/vowel (CV) pattern but may also occur as a CVCCV pattern, e.g. [t Therefore the phonological inventory of Meltese is [u e i] [e] and [ For example, [lep This feature occurs with [u] and [ They also occur in final or second position and are a minimal pair changing the meaning of a word. For example, [p [i] and [ Together they make up a minimal set. Any vowel can appear in final position. All words follow a consonant/vowel (CV) pattern. Therefore the phonological inventory of Leponese is [u e i In each spectrogram the vowel sounds / Spectrograms 1 and 3 must be Subsequently spectrograms 2 and 4 must be Therefore: Spectrogram 1 is / /d/ is a voiced alveolar plosive. The voicing of the vowel sound / The voicing in the / The voicing bar at the bottom also indicates this is the voiced plosive. Spectrogram 3 is / /t/ is a voiceless alveolar plosive. The voicing of the vowel sound / The / The lack of a voicing bar at the bottom also indicates this is the voiceless plosive. 6. Spectrogram 2 is / The burst of very high energy shows this is a fricative and the lack of a voicing bar tells us it is the voiceless fricative /s/. The voicing of the vowel sound / The / Spectrogram 4 is / As with /s/ there is a burst of very high energy but the presence of the voicing bar tells us it is voiced and so is the sound /z/. The voicing of the vowel sound / The voicing in the / The line that appears at the end of each spectrogram represents the release of the voiceless bilabial plosive /p/.
It was found that the presence of new words does increase the likelihood of a false memory occurring involving the critical lure. These results are considered in terms of the brain's familiarity threshold, and its ability to correctly distinguish closely semantically related words as either new or old. It is now widely accepted that the brain has the ability to create false memories. Craik and Tulving showed that items are more likely to be remembered if they are elaborated on and connected to similar concepts already held in the brain (1975). Is it possible, then, that the brain can also falsely remember an item that is closely related to other items presented to it? Roediger and McDermott presented participants with a recognition test, in which they were read study lists in which all the words are related to a semantically associated critical lure word. They were then presented with a test list which comprised words from the old list, the critical lure words and new unrelated words. They were asked to identify which words from the test list they believed were old and which were new. Roediger and McDermott found that critical lure words were incorrectly recognised as old more frequently than the new, unrelated words (1995). In this experiment, we aim to investigate the effect that the presence of the new, unrelated words has on the proportion of times that a critical lure is incorrectly identified as old. Monsell discusses the idea of an activation level applying to memory retrieval, whereby enough recalled associations raise an item's activation level, and when this reaches a threshold the item is classified as old (1979). In the case of our experiment, the critical lures, by their nature, come very close to the familiarity threshold level of the old words that are actually on the lists.
We could therefore expect the presence of the new, unrelated words to lower this threshold, making it easier and more likely that a critical lure will be incorrectly identified as an old word. For this experiment, six participants were chosen by opportunity sampling. These six were tested individually in two groups, 'With' and 'Without', with three participants in each group. The participants were obtained from the residential hall, and no incentive was offered to take part in the experiment. Two essential pieces of equipment were used in this test - the original study lists and the two test lists. The study list is split into twelve sub-lists, each containing 15 words. These 15 words all have a semantic association with an unread, 16th word: the critical lure. There were twelve of these critical lures in total: one for each list. The two test tables, 'With new words' and 'Without new words'
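The dependent measure in this design is the proportion of critical lures that a participant incorrectly judges "old". A minimal sketch of that computation follows; the judgment data here are invented for illustration, not results from the experiment.

```python
# Sketch of computing the false-memory rate per condition: the proportion
# of the twelve critical lures a participant incorrectly calls "old".
# The judgment lists below are hypothetical, not the experiment's data.

def false_memory_rate(judged_old):
    """judged_old: list of booleans, True = critical lure judged 'old'."""
    return sum(judged_old) / len(judged_old)

# Hypothetical judgments over the 12 critical lures for one participant
# in each condition
with_new_words = [True] * 9 + [False] * 3
without_new_words = [True] * 6 + [False] * 6

print(false_memory_rate(with_new_words))     # 0.75
print(false_memory_rate(without_new_words))  # 0.5
```

Averaging this rate over the three participants in each group gives the group proportions that the two conditions are compared on.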
In doing so he not only broke new ground but pioneered a method of research, that of Participant Observation, which was largely unexplored. We cannot claim that Whyte's research is infallible, but nor can we accurately seek to disprove it so many years on; we can only consign it to research history and build upon its foundations. What is clear is that by using Participant Observation as a research method Whyte was able to pioneer research that could go deeper than the opinions or actions of a society, to actually attempt to understand and identify the pillars which made up that community. In doing so Whyte was able to dispel myths about 'slum' communities and portray the richness of a diverse and unique community. Whyte, W. (1965)
The purpose of a positioning statement is to highlight differences, advantages and benefits which make people perceive a business or product in a particular way (Morrison and Wearne, 1996). Below is a positioning statement that we have created, showing the position that Kirtlington Golf Club restaurant should occupy. A SWOT analysis is an evaluation of a business' Strengths, Weaknesses, Opportunities and Threats (Kotler The internal evaluation is of the business's strengths and weaknesses, whilst the external evaluation is of the opportunities created by the macro-environment and the threats it poses. A SWOT is used to develop strategies to exploit opportunities and minimise threats using your strengths, and to address your weaknesses (Chambers and Lewis, 2000). Below is a SWOT analysis of Kirtlington Golf Club restaurant. The information in this SWOT has been drawn together from our PESTE analysis, customer audit trail and interview. Low staff turnover. This suggests that staff are happy in their jobs and are well treated. Familiarity with customers. This conveyed a very friendly, family-like atmosphere, which attracts regular custom. Good reputation for food in the golfing community. The interview revealed that the quality of the chef was important to maintain this. Responsive to members' needs. Emma Stone emphasised how important the members are to the club and that, no matter what, they came first. Value for money. The audit trail showed how the club was good value in comparison to competitors. This makes it attractive to the grey market. Lack of signage in the club house. This could put people off visiting the restaurant. Lack of marketing communication. Despite Emma mentioning that she would like to attract more members of the public to the restaurant for meals and events, there is no promotion to inform them of it. No consideration of atmosphere. The main room lacks the intimacy of a restaurant, and feels more like a meeting room. Concentration on members' needs.
Due to the importance of members, it's very hard to attract the public without upsetting members. Inconsistent food", "label": 1 }, { "main_document": "maddened by blood-lust. Yet Aeschylus proves otherwise in the final part of the trilogy when he places the Furies directly on stage. It is Orestes' vindication to those who previously thought him to be dreaming: For Aeschylus, true perception is only granted to those marked out by the Furies for blood-vengeance. Orestes has to empirically witness the reality of the Furies in order to be affected by the madness they inflict. Similarly, Dionysus must also reveal himself to his enemy before his destruction. This change in perception is vital because it is primarily responsible for splitting the male from the public body he previously held ascendancy over. The end result in both plays is the same, with the individual segregated by the communal majority he was once head of. And to isolate a male and then strip him of his social standing is to emasculate him. Madness originates from Olympus and Gaia, and indeed Greek ideas of masculine and feminine did not just partition humanity: the Classical world-view extended gender into the realm of the natural and divine. Sexual conflict between male Sky and female Earth would manifest itself in the personified divinities of early-Greek creationism. The theme of emasculation is introduced even here: Zeus castrates his father and casts his genitals from heaven. The blood that spatters the ground gives birth to the Furies, the \"hounds of Lyssa\" (madness). The weakening of the male through the symbolic overflow of blood onto Earth is a notion Aeschylus expounds further. The Choephori comment The birth of such monstrous creatures is presaged by murder and catalysed by blood. Earth in Greek tragedy is explicitly feminine. 
Her effect on the male is consistent with the emasculative forces she harbours (in the As Padel notes, a hero's Furies and Earth are both similar in their ability to enfeeble the heroic male through the draining of his blood. Their vampirism is triggered through the blood shed as a result of murder. Indeed, the Furies' relationship with Earth is iterated many times in Aeschylus. Similarly the maenad train's symbiotic relationship with nature is expressed by the messenger (Bagg 399) in the After she kills her husband, Clytemnestra is told by the male chorus "You've sown and you'll reap." (Harrison 41) They are right: feed Earth blood and it will bear revenge. Madness and its consequence (murder) establish a cyclic harvest, with the seed of blood yielding a further crop of vengeance. Clytemnestra breeds her own destruction, the snake-son that emerges from her uterus: Orestes is his mother's madness manifest. Earth harbours in her own womb the Furies that will pursue Orestes. These gorgon-like creatures are notable for reviving the serpent motif through their hair, a mass of "Snakes coiled within snakes" (Morshead 109). Snakes also recur in Theban genealogy. Prior to founding Thebes, the progenitor of the city, Cadmus, Pentheus' father Echion "was snakeborn" (Bagg 424), one of the men harvested from the crop of snake-teeth sown. At the With Cadmus' bloodline defunct, the old ex-king is banished with the knowledge that he himself will metamorphose into a
It is estimated that in the UK the rise in temperature will extend the growing season available to plants and will reduce the period required for maturation. This is beneficial for those areas of the UK where lower average temperatures prevail. An increase in temperature will expand the cultivation area of horticultural crops to the north as well as to higher altitudes. By contrast, higher temperatures during summer will cause damage to crops and will increase the risk of heat stress. (TDRI, 1999) It is estimated that winters in the UK will become warmer and that climate change will have a great impact on horticultural crops. Plants such as apples, cherries and blackberries require a certain number of hours below a critical temperature to resume growth in the spring. In consequence, temperatures above average during winter will affect bud-dormancy and blossom during spring. In addition, taking into account that it is difficult to develop new varieties and rootstocks to respond to this rapid change of climate, the problem becomes more severe. As a result, warmer winters have a negative effect, and this is a concern for the British fruit industry. (NC State University, 2000) Apart from fruit crops, temperature affects salad crops such as lettuce. The minimum temperature for growth is between 3-12°C. Apart from the fact that warmer temperatures promote germination, they also allow the growing season to start earlier and simultaneously extend it. By contrast, higher temperatures during summer have a negative impact, increasing the possibility of bitterness, loose heads and bolting. (DEFRA, 2003) Cauliflower is another example of a crop affected by temperature changes. First of all, it has three different stages of growth with different responses to temperature: juvenility, vernalization and curd growth. Escalated temperatures reduce the period of juvenility and curd growth but delay curd initiation.
Although increasing temperatures promote maturity of summer cauliflower, they reduce maturity of autumn crops, and as a result better management of transplanting will be necessary to maintain continuity of production. Moreover, higher temperatures reduce the possibility of frost damage but increase quality problems such as bract, leaf bract and curd looseness. (DEFRA, 2003) Changes in temperature undoubtedly affect root crops such as onions and carrots. Soil temperatures between 20-30 Since carrot growth is promoted by an increase in temperature, crop production will also increase. As frost damage is reduced, the growing season will be extended, resulting in earlier production, especially under polythene. (DEFRA, 2003) Not only carrots but also onions are affected by a warmer climate. Temperatures between 23- 37 In addition, 24 As a result, warmer temperatures will give earlier bulbing combined with faster bulb growth and maturity, but reduce yield as the duration of bulb
Solver was then used to calculate the X-component of force by carrying out several iterations. The original car used for the initial calculations is shown below. The modified car used for the initial calculations is shown below. The following data were specified before carrying out the iteration calculations. The iteration calculations were carried out for six different air velocities (10, 20, 30, 40, 50, 60 m/s). The general data used for the derivation and calculation of drag coefficients; (Where; The drag forces for the six values of velocity are found and, using the equation shown in "Theory used", the respective drag coefficient is found. Then one graph is plotted for each car, with drag coefficient against velocity. Calculating the drag coefficient; (Above is an example calculation of a drag coefficient.) The drag coefficient for each velocity is calculated and tabulated below. The graph shown below is of drag coefficient against velocity. Even though the points seem to be scattered around the plot area, when the y-axis is considered the drag coefficients for the different velocities are very close to each other. All the drag coefficients lie between 0.36 and 0.41. A general trend line added to the graph is essentially horizontal and straight, showing that there is no change in drag coefficient even though the velocity of the air is changed. The drag coefficient for each velocity is calculated and tabulated below. The graph shown below is of drag coefficient against velocity. Even though the points seem to be scattered around the plot area, when the y-axis is considered the drag coefficients for the different velocities are very close to each other. All the drag coefficients lie between 0.3 and 0.36.
A general trend line added to the graph is close to horizontal, though slightly slanted, showing that there is only a very slight change in drag coefficient as the velocity of the air is changed. This slight variation in the coefficients could be due to the different number of iterations
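The drag-coefficient calculation described above can be sketched in a few lines of code. This is a minimal illustration, not the lab's actual solver output: the air density, frontal area and force values below are assumed purely for demonstration.

```python
# Invert the drag equation F = 0.5 * rho * v^2 * A * Cd to recover Cd
# from a computed X-component drag force, as described above.
# All numerical values here are illustrative assumptions, not the
# values used in the laboratory sessions.

RHO = 1.2    # air density, kg/m^3 (assumed)
AREA = 1.5   # frontal area of the car, m^2 (assumed)

def drag_coefficient(force_n, velocity_ms, rho=RHO, area=AREA):
    """Cd = 2F / (rho * v^2 * A)."""
    return 2.0 * force_n / (rho * velocity_ms ** 2 * area)

# Hypothetical solver forces (N) at the six test velocities (m/s)
velocities = [10, 20, 30, 40, 50, 60]
forces = [34, 138, 308, 550, 855, 1240]

for v, f in zip(velocities, forces):
    print(f"v = {v:2d} m/s  Cd = {drag_coefficient(f, v):.3f}")
```

If the simulation is behaving well, the printed coefficients should cluster in a narrow band, mirroring the near-horizontal trend lines described for both cars.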
However, a high content of phosphate groups, low level of sulphur containing amino acids, and high carbohydrate content in the case of Caseins are quite stable to heat denaturation, but can be denatured by excessive heat, leading to aggregation and precipitation. Caseins are very surface active and make good emulsifiers. Also, casein's high water-binding capacity creates high viscosity in casein solutions. Consequently, casein has desirable functional properties for incorporation into instant desserts (Fox & McSweeney, 1998). Whey protein contains two well-defined groups, which can be fractionated by saturated MgSO The major advantage of whey protein's application in the food industry is its nutritional quality. The main whey product developed to date is probably whey protein concentrate (WPC). Commercially available WPCs contain from 35% to 95% protein, 6-10% lactose, 4-6% fat, 3-5% ash and 3-4% moisture. The functions of WPCs in food products can be related to the functionality of their proteins. Interactions between proteins and other molecules within a food system are necessary for a protein to manifest its functionality (Zadow, 1992). The five tables below present a more detailed review of the composition and functionalities of whey protein in ice cream and other food systems, and also possible modifications of whey proteins during food processing. Stabilizers are used in such small amounts as to have a negligible influence on food value, but will modify the physical and sensory properties of ice cream, for instance by: Ageing takes place before the ice cream mixture goes on to the freezing process. Commercial ageing time is 3 to 4 hours, and within this period a longer ageing time is beneficial. The beneficial changes occurring during ageing are listed below: The meltdown properties of ice cream are related to its eating properties.
Basically, meltdown properties are determined by two parameters, one is", "label": 0 }, { "main_document": "The concept of 'will' for Schopenhauer is intended to ground the phenomenal world and set a limit to the universe. It is a metaphysical principle that Schopenhauer believes we have access to directly, unmediated by the principle of sufficient reason. The two main problems in understanding the concept of will are the epistemological question - how does Schopenhauer believe we have knowledge of the will? - and the constitutive question - what can Schopenhauer legitimately say about the will's nature? Kant's critical philosophy set out to set a limit to human knowledge; to delimit the conditions of our knowledge of the world and thereby that beyond which we can not legitimately think. For Kant, the empirical world is determined by our subjective faculties. That is to say that there are conditions set upon our experience of the world by our constitution as subjects. The world exceeds our faculties' ability to cognize however; the world as it exceeds our faculties' abilities is by definition unknowable: this Kant calls the thing-in-itself. Schopenhauer's philosophy represents an attempt to give content to this thing-in-itself; consequently at first glance it appears highly paradoxical. The world as mediated by the nature of the faculty of sensibility and the pure concepts of the understanding in Kant is refigured as the world of representation - the world under the fourfold principle of sufficient reason. Schopenhauer's claim is that we can have direct knowledge of something that is not subject to the conditions of representation through our experience of our own bodies. 
Within the world as representation, as mediated by the subject, all things are related to a ground: the intellect relates material things to their causes and effects; it grounds abstract concepts using the laws of logic; mathematical and geometric matters are grounded in numbers and spaces; psychological questions of motivation are related to intentions as their ground. In Kant's philosophy, the limits of reason are revealed by antinomies that are reached when one attempts to think through the different principles that determine our cognising of the world (the categories). The third antinomy results in showing that in attempting to think a ground of the empirical universe the mind reaches a paradox: it must both postulate a first cause and the infinite regression of causes. Kant draws the conclusion that we have the right to the possibility of another form of grounding proceeding from the thing-in-itself. Schopenhauer provides that ground with a name: the will. The will grounds the world as representation: the world as representation is objectified will. The will has no ground outside itself, it is self-grounding; consequently it allows Schopenhauer to provide an answer to the perennial problem of the prime mover. More than this, in the idea that the world as representation is objectified will Schopenhauer thinks the unity of the multiplicity that we are presented with in experience. In striking out from Kantian scepticism to the thing-in-itself Schopenhauer's main step is to conceive the will as one. The will is the world conceived as unified. For the representing subject, the world appears divided into discrete", "label": 1 }, { "main_document": "Paris opened just outside of Paris in April 1992 (Curwen,1995). It is essentially a \"pure transplant from the USA\" (Curwen, 1995; p:15), as it is developed around the same themes and ideas. 
However, the park turned out not to be as successful as its creators had hoped (Spencer, 1995; Curwen, 1995; Lovelock and Morgan, 2001). Besides being more expensive than planned, having fewer visitors than expected and suffering from bad weather, one of the reasons for its failure is claimed to be the 'Disney-unlike' employees (Bryman, 1995; Lovelock and Morgan, 2001; Curwen, 1995). In comparison to the staff in the parks in the United States, the employees in the French park are less inclined to follow the display rules set out by Disney. "The level of customer service established in Disney World was not easily reproduced in Euro Disney" (Herbig, 1998: p37 cited in "The Antidote). Many of the employees were not willing to follow all of the guidelines set out by the company (Leehrsen and Gleizes, 1992; Bryman, 1995), and Disneyland therefore had dissatisfied employees, which led to a very high staff turnover (Lovelock and Morgan, 2001). It has been shown that emotional labour is one of the key components in the perception of quality by customers. If the employees display positive emotions, i.e. are friendly and apparently enjoying their work, the customers will judge the service to be of higher quality. On the other hand, if the employees are unfriendly and show their own negative feelings, the customers are more likely to think of the service as bad-quality service. So what could be the reasons that the display of emotional labour, and therefore the perceived quality of service, is so successful in the Disney parks in the United States, but less so in France? Could the reason be that French employees are less willing, or even less able, to display emotional labour to the same standard as the staff members in the United States? "Each cultural world operates according to its own internal dynamic, its own principles, and its own laws - written and unwritten" (Hall and Hall, 1990; p:3).
In other words, each culture is different and therefore the members of different cultures act differently to one another. According to Testa (2004) this can be seen in how people behave at their workplace. There are differences in how they react to supervision and how they handle being given certain rules. "Regional and national cultures have been shown to have different norms for emotional expression" (Grandey et al., 2005; p:895). Therefore the members of different cultures also have different views on how the display of emotions at work should be regulated and what can be expected of them (Cooper et al., 2003, cited in Grandey et al., 2005). It is essential for managers to be aware of these cultural differences in order to avoid potential problems and ultimately gain a competitive advantage through effective cultural management (Schneider and Barsoux, 2003). In their work Grandey et al. (2005) differentiated between two different types of cultures:
James Joyce, Cited in lecture handout, 10.02.06 M. Levenson (ed); Press, 1999) pg 17 Joyce's focus on language is skilfully paired with 'a detailed, closely observed depiction of the surfaces of life'. As such he adopts a 'naturalistic' approach. Humans are imprisoned in the social and physical; therefore Joyce places less emphasis on a heavily plotted narrative, and the intensity of his stories comes instead from his ability to capture a mood precisely. In 'Eveline' the entirety of the story is presented as a stream of consciousness. Up to the last section there is an air of pensive musing to the tale, as Eveline sits at the window weighing up her decision: Lecture handout, 10.02.06 James Joyce; This meditative air is paired with many small details, which add a sense of reality to the story and make it more vivid: Ibid, pg 37 By using language in this manner Joyce is able to capture a precise mood, and although we are given little detail about the life of Eveline herself, by adjusting the style of the story to the experience of the main protagonist, Joyce is able to bring her character alive. Eveline is vague about Buenos Aires, where she is proposing to spend the rest of her life. As readers we can assume that this is because she has never previously left Dublin. It is perhaps for this reason that although Eveline feels that 'she must escape' and that 'Frank would save her,' when it comes to it she finds herself in 'a maze of distress': Ibid, pg 41, pg 42 Ibid. We can empathise completely with Eveline's distress in this story. Despite there being little by way of an 'exciting suspenseful narrative,' the development of her character shows a very human complexity to her wants and desires, a paradoxical nature to her feelings which readers can easily relate to. Lecture handout, 10.02.06 Joyce uses a similar technique to develop a character in 'The Sisters,' the first story in the collection.
It is written from the point of view of a young boy, and Joyce", "label": 1 }, { "main_document": "of a certain type is an element of the vector space on which a real Lie group called the spinor group Spinors first became prominent with the advent of quantum mechanics. This happened when Dirac realised that a spinor representation of an electron's state was required for the correct relativistic equations for describing the electron. In quantum theory the double-valuedness is not a big obstacle as in classical physics since the observable properties of a system are expressed through quadratic expressions of the wave function, removing the They are still used today within particle physics where bosons and fermions require either tensor or spinor descriptions. They are also used within research on loop quantum gravity, the main rival of M-theory as a budding GUT. Here follows a fantastic diversion for the next two sections. As we know, there are five regular or 'Platonic' solids. Namely these are the tetrahedron, cube, octahedron, icosahedron and the dodecahedron. Now an unit quaternion can represent a rotation in 3-dimensional Euclidean space. The rotation groups of these solids are useful in other areas of mathematics but here we ask First consider a regular tetrahedron Such a direction will be referred to as an outward-pointing normal to the tetrahedron. We can achieve any even permutation of {A, B, C, D} by a rotation, so that the group of rotations is A4, with There are three distinct types of group elements. First are the rotations which fix a single vertex. For example, we find a rotation leaving Let us call this rotation D1. D1 We call this rotation D2. A1, A2, B1, B2, C1, C2 are similarly defined. We now define the second type of rotation through Rotations AC and AD are similarly defined. The third type of rotation is the identity Id. The cubes of the first type and the squares of the second type are equal to Id. 
We only need two examples to generate these rotations. Let us choose Denote these by The Let us describe the tetrahedron in coordinate form. Opposite edges are at right angles so we can arrange We then write We arrange Then Letting the edges be of the same length and the position vectors be unit vectors we have Using the quaternion formula for a rotation we have and The quaternion representations of the remaining rotations may be calculated from the presentation given above. We consider quaternions corresponding to elements of A4, together with their negatives. We can take these quaternions as 24 distinct points in four dimensions. They are unit quaternions and so lie on the surface of the unit 4-sphere These points are taken as vertices of a hypersolid and form a polyhedron by joining each vertex to its nearest neighbours. We may define distance on the surface of a sphere in terms of angles separating vectors to those points from the centre of the sphere. A quaternion analogue for the cosine rule is needed. We consider a triangle which is defined by two quaternions Comparing this with the cosine rule gives We start with a quaternion and calculate", "label": 0 }, { "main_document": "product of King's environment, but more, a product of his own ideology, an ideology that had developed through King's theological background and shared no common ground with Castro. In essence, there exists a modicum of similarity between Castro and King; they are both extraordinarily charismatic personalities. A small section of Sociologist Max Weber's classic 'Charisma' formulation begins this discussion and perhaps it allows for an appropriate ending as well, for despite their differences in ideology, both show a great inner-determination, and both have a great charismatic claim. Castro felt it was his place to free the Cuban people from former corruption, just as King aimed to bring social justice to a long suffering people. 
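The quaternion formula for a rotation used above can be illustrated with a short sketch. This is a minimal example, not the essay's own calculation: the helper names (`qmul`, `qconj`, `rotate`) are my own, and the chosen quaternion is one convenient order-3 element of the tetrahedral rotation group, a 120° turn about the axis (1, 1, 1)/√3, which cycles the coordinate axes.

```python
# Rotate a 3-vector v by a unit quaternion q via the sandwich product
# v' = q v q*, embedding v as the pure quaternion (0, v).
# Helper names are illustrative, not from the text.

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    """Conjugate; for a unit quaternion this is also the inverse."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    """Apply the rotation represented by unit quaternion q to vector v."""
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return (x, y, z)

# cos(60°) = 1/2 and sin(60°)·(1,1,1)/sqrt(3) = (1/2, 1/2, 1/2):
# a 120-degree rotation about (1,1,1)/sqrt(3), an order-3 element
# of the tetrahedral group, cycling the basis vectors x -> y -> z -> x.
q = (0.5, 0.5, 0.5, 0.5)
print(rotate(q, (1.0, 0.0, 0.0)))   # -> (0.0, 1.0, 0.0)
```

Applying `rotate(q, ·)` three times returns any vector to itself, confirming that this element has order 3, as the group structure of A4 requires.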
However, perhaps, almost equally, their charismatic claim falls just short. Ling assesses that, by 1968, 'King himself could not meet the unrealistic and contradictory expectations of race leadership'. In 1970, Castro's efforts to achieve a ten-million-ton sugar harvest had failed, and Fidel 'no longer had sufficient power and leverage to limit Soviet penetration in Cuba'. 'The Limits of Charisma' provides a fitting final analysis, as it alludes to what perhaps existed as the greatest similarity between Fidel Castro and Martin Luther King, Jr. Peter J. Ling, Martin Luther King Jr. (GB, 2002) p301 Edward Gonzalez, Cuba under Castro, (US, 1974) p215
Whilst it is the public who are seen in need of protection from boys' disruptive behaviour, it is the girls themselves who are seen to be in need of protection, especially upon reaching puberty (Petrie 1988). Different attitudes towards bringing up girls and boys therefore affect societal perceptions overall. Girls and boys are treated differently throughout their lives including when they have (or are perceived to have) offended and enter the justice system. There is much agreement among feminist researchers that many of society's rules are made by men wanting to retain power over women's behaviour (Campbell 1981 :4, Cox 2003 :10) and when it comes to the justice system, women are therefore judged by men's ideas about how women With such a male predominance, models of delinquent behaviour are based on male behaviour (Taylor et al 1997 :*), which results in a lack of understanding of girls' behaviour when they enter the justice system. The application of male models of delinquent behaviour to females impacts upon the resultant treatment in several ways. When it comes to girls offending, their actions are not divorced from their sexual behaviour. In expressing their femininity, adolescent girls are in danger of being labelled 'slag or drag' (Worrall and Hoy 2004 :189) - and the line between the two is very fine. Girls are therefore not only judged for their crime, but for their deviance from moral behaviour (Campbell 1981 :4) and may be treated more harshly as a result of this application of double standards (Chesney-Lind and Pasko 2004 :62, Nacro 2001 :1). This fixation with female sexuality judges that young women who engage in intercourse are considered immoral, therefore deviant and in need of correction (Chesney-Lind 2004 :60). The same judgements do not apply to boys, either in the justice or social work systems (Hudson 1990). As a result the sentences handed out to boys and girls will vary. 
Boys who become disruptive are", "label": 1 }, { "main_document": "recruitment and selection methods is strongly dependent on the HRM strategy of the organisation. Christensen Hughes (2002) differs between four different methods: the traditional HRM, which focuses on worker productivity through selection and job design, integrative HRM, which focuses on congruency of internal fit, strategic HRM, which focuses on overall competitive strategy and finally the universal HRM, which focuses on commitment and motivation in the workforce. There are two different approaches that depend on the HRM strategy of an organisation. The 'soft approach with an emphasis on strategic interventions for commitment and development [and the] hard approach [which] emphasis on strategic interventions to secure full utilization of labour resources' (Worsfold 1999, pg. 340). Both approaches are based on the human capital theory and clarify whether an organisation should 'internalise employment and build [an] employee skill base through training and development initiatives [...or] externalise employment by outsourcing certain functions to market based agents' (Lepak & Snell 1999, pg. 31). The hospitality and tourism industry has several service market characteristics, such as 'mass-service markets' (Boxall 2003, pg. 12), 'mix of mass markets and higher value-added segments' (Boxall 2003, pg. 14) and the 'very significantly differentiated markets' (Boxall 2003, pg. 14). As stated by Worsfold (1999) it would therefore be reasonable to assume that sectors in the mass-service markets, such as the fast food industry would adopt the hard approach with it's emphasis on control, whereas within the differentiated markets, such as the luxury hotel sector, the soft approach with its emphasis on engendering commitment might be considered appropriate. 
Considering the different HRM strategies and the "soft" and "hard" approaches to recruitment and selection, it is important to mention that recruitment and selection are inseparable from training and development. Using the universal HRM strategy with a soft approach to recruitment and selection requires ongoing training and development in order to achieve commitment within the workforce and build up a skilled labour pool. On the other hand, it also raises the question of whether a high level of commitment through universal HRM is necessary within the hospitality and tourism industry in order to gain competitive advantage, given its different service market characteristics. Lepak & Snell (1999) state that in addition to full-time employees the hospitality and tourism industry is dependent on external workers, such as temporary employees or contract laborers. Winstanley et al. (1996) also recognise that the economy is characterised increasingly by sub-contracting. In addition to the unique characteristics mentioned above, the hospitality and tourism industry needs to deal with seasonality, which would explain the need for external labour. Another angle is to question the necessity of commitment, rather than productivity, from a financial point of view. Even though internal recruitment would 'lower transaction costs' (Lepak & Snell 1999, pg. 31), external recruitment 'decrease[s] overhead and administration costs' (Lepak & Snell 1999, pg. 31).
Furthermore McGunnigle & Jameson (2000) argue that within the UK hotel industry there is a weak internal and a strong external labour market, which leads to the assumption that these more expensive recruitment and selection methods, which achieve commitment", "label": 0 }, { "main_document": "with the 'person' he has become, War literature stresses the importance of friendship and camaraderie, and Many soldiers, like Baumer, found they could no longer relate to the family and friends at home who couldn't understand the trauma that they had been through. They became each others mothers and lovers because fellow soldiers were the only ones who understood. A soldier particularly special to Baumer is Kat. He represents safety and warmth, Friends were needed as a distraction from distressing thoughts and Remarque cleverly exhibits this when Baumer becomes trapped in a shell hole with a man he has just stabbed. The longer Baumer is there with the dying man, the more desperate he becomes as he is forced to be alone with his thoughts ' When he is reunited with his friends he is comforted, ' Friendship could work as a distraction for the soldiers, aiding them in the repression of their troubled memories. This is a theme that often occurs in war literature, and it is represented in a variety of ways. In Pat Barker's Representing the imminence of death can only be achieved by bringing the reader face to face with it. Remarque surrounds his characters with death as they live with it day by day, even hiding in the graves of dead soldiers to avoid death themselves. Friendship becomes the symbol of life for Baumer, but as his friends die one by one, life slips away. Remarque tracks the path of death with a pair of boots - as they are passed on to each soldier we know that they are doomed (1979 p51). The penultimate death comes with Kat, coinciding with the death of Baumer's mother. To the perceptive reader this signals the end for him, with no future any longer in war or at home. 
Once a soldier realises the universal futility of war he can no longer have a meaningful future. The episode with Duval points to an end for Baumer as it gives a face to the opposition. Infrequent in war literature, it is insightful as it shows a soldier being forced to come to terms with the fact that he is not just fighting against death, but against another human being. It obliges him to consider the consequences of the atrocities he is forced to commit. The style of language Remarque employs is deliberate, and the ending's impact is increased by the use of the present tense and staccato sentences. The stripping down of language reflects the limited, automatic thought process ' The short sentences are interspersed with longer ones which help to highlight the words and give them a greater meaning. It might appear that As well as the fragmentary style he uses anger, and a sense of being lost after the inevitable rejection of religion. The idea of events building to a climax in the mind reflects the psychoanalytical ideas that were emerging, and the animalistic references could allude to Darwin's recent 'survival of the fittest' thesis. Like Dr Rivers in Pat Barker's Happy and tragic episodes are alternated with increasing intensity (1979
Corden's I-R-E exchange pattern echoes Barnes's 'presentational talk', which focuses more on meeting expectations than on the pupils' own ideas. As Barnes suggests, presentational talk persuades "the speaker to focus on 'getting it right', that is, on appropriate speech and the expected information ('right answer')" (1992b, p. 126). For real learning to happen, children should be provided with chances of 'working on understanding'. They need genuine interaction in which they can actively engage in discussion by activating their existing knowledge to make sense of new ideas. It is 'exploratory talk', advocated by Barnes, that offers room for learners to actively interrelate, reinterpret and understand new ideas and experiences. Exploratory talk is characterized by frequent hesitations, repetitions, rephrasings, false starts, changes of direction, backtracking, pauses, overlaps and interruptions (Barnes, 1992a; Corden 2002). It exercises the 'heuristic function' of spoken language identified by Halliday (1969). It is a reflective and hypothetical dialogue aiming at the joint construction of knowledge. For exploratory talk to thrive in the classroom, learners should "feel relatively at ease, free from the danger of being aggressively contradicted or made fun of" (Barnes, 1992b, p. 126). Furthermore, teachers, Corden argues, should create open contexts which can subsequently be perceived as such by learners (2002, p. 136). People approach learning situations in different ways. Everyone has his/her personal preferences in processing information and solving problems. These personal preferences can be regarded as different learning styles. Some theorists categorize learning styles in terms of polar opposites such as wholist/analyst and verbaliser/imager (Adey, 1999). Wholists tend to get an understanding of the general structure and reach a conclusion based on the 'big picture', while analysts prefer to look at the detail of what is to be learned.
Verbalisers are inclined to learn from words, either spoken or written; imagers feel more comfortable with information presented in pictorial or diagrammatic fashion. Some classify learning styles in terms of sensory channels. Fielding, for instance, describes learning styles as follows: Gardner (1983) has identified at least seven distinct learning preferences and referred to them as multiple intelligences. They are logical/mathematical, visual/spatial, body/kinesthetic, musical/rhythmic, interpersonal, intrapersonal, and verbal/linguistic. It is likely that every individual possesses these seven intelligences; nevertheless they do not develop at the same rate in each person. Ideally, teachers should cater for students' diverse learning needs to help them reach their full potential with their multiple intelligences. What actually happens", "label": 0 }, { "main_document": "other elements according to the region's specific characteristics. With regard to inequality, there is also controversy over its causes and types. Prakash Loungani makes a relevant distinction between global inequality, cross-country inequality and within-country inequality. Cross-country inequality has indeed increased and hence, the slogan 'richer are getting richer while poorer are getting poorer' is not misleading. While the average global citizen has become richer, this way of analysing inequality is inappropriate because reducing inequality is a matter of state policy and hence states, not only people, should be taken into consideration. Cross-country accounts of inequality are much more relevant because they indicate whether governments have adopted the 'right' liberal policies. The author is saying that inequality may not be so unacceptable as long as the right liberal economic policies are adopted which will eventually generate economic growth. 
After all, let us remember the liberal Rawlsian principles of justice according to which certain inequalities are accepted as long as they make the least advantaged person better off too. In 1990, the income of 10 percent of the world's population was 120 times higher than that of the poorest 10 percent. Oligarchies interested in preserving their wealth and power represent the real danger, and they seem to be favoured by the current global market capitalism. Prakash Loungani. "Inequality: Now you see it, now you don't", Rawls, John. Especially from 1980 onwards, development (and the way poverty and inequality are understood and addressed) has been conceived within the ideological framework of Western global capitalism or economic globalisation. Since the end of the Cold War, this has been the dominant ideological framework. Globalisation can be traced back to the nineteenth century (when speaking about how modernity was integrating the world), but the term started to be used only in the 1960s. Different policy reactions have been developed with regard to globalisation. While neo-liberals advocate a leading role for global markets and a passive role for the state, reformists consider that some state intervention is required to alleviate the negative effects of globalisation by assuring a healthy environment, minimum living standards and greater social equity. Neo-Marxists and other leftists see globalisation as just another stage in the pathological expansion of market capitalism, mainly beneficial for core states and detrimental for poorer ones (in the periphery). The present paper adopts a moderate liberal perspective, seeing globalisation as a new phase of real structural changes that have a great impact on people's lives everywhere (although not to the same extent). 
One cannot claim that what we are living through is just a myth (as sceptics consider) and, although global capitalism can be viewed as an instrument working on behalf of great powers, people everywhere may be at least potential beneficiaries of this process. However, while accepting that, overall, globalisation might have potential for poverty reduction, the present paper will mainly focus on the negative side of economic globalisation's impact on world poverty. This negative side is also due to what might be called 'market fundamentalism': Excerpts from George Soros, Neo-liberals believe that the global marketplace itself ensures a kind", "label": 0 }, { "main_document": "The Viking raids of the 9 Before this period Scandinavia was a distant outpost with little cultural, political and economic significance or value for the rest of the world. The beginning of the Viking age is traditionally placed around 793 AD, when the first historical account of a Viking raid was written, describing a raid on an English monastery at Lindisfarne on the east coast. It is around this time that similar records of raids start appearing all around Europe. The age of the Vikings had begun. The popular image of the Vikings handed down through the generations is based on written accounts from the time. These accounts portray them as brutal, merciless barbarians who were nothing more than heathen pagans that spent their lives killing and stealing from good Christian folk. In many respects this image was true to a certain degree, but it must be taken into account who wrote these depictions. These 'historians' were for the most part priests (one of the only actively literate groups at the time) and would have been particularly horrified by the pagan Vikings and their practices. On the other hand, however, it is important to realize that the Church itself would have been a relatively defenceless and extremely wealthy target. 
Many monasteries were positioned along the coast, which would have made attacking easier for the seafaring Vikings; they would have made an extremely tempting target. Another point to consider is that the Vikings were pagans and did not believe in or fear the Christians' God, something that would usually have gone a long way to protecting a monastery from attack. However, as with any historical account, it is inevitable that a certain amount of bias would creep in, and the priests would not have had the opportunity to see the situation from all sides. This is where archaeology steps in, to find physical evidence of all aspects of Viking life. The Vikings were a seafaring people who relied on their shipbuilding knowledge as well as their practical knowledge of the seas and oceans on which they sailed. Just the sight of a Viking warship off the coast of many countries was enough to spread terror and panic. The ships themselves were of exceptional design and came in two distinct classes, the longship and the warship. The longship ranged from 70 to 140ft in length while the more practical and easier to manoeuvre warship ranged from 70 to 80ft in length. Both had extremely shallow drafts, which gave the ships the ability to enter river systems as well as allowing them to come close to shore and even be beached for extended periods. The shallow drafts also allowed for speed and manoeuvrability, which was essential in sea battles; overall, ships of Scandinavian design had the advantage. Generally the ships were rowed by the warriors within, and when they were adequately rested they could reach speeds of 8-10 knots. 
In the late 1960s the remains of five 11 The ships were in a remarkable state of preservation and had been purposely", "label": 1 }, { "main_document": "On September 18 For weeks leading up to the elections, the rivalry between leader of the Christian Democratic Union (CDU) Angela Merkel and the existing Social Democrat (SPD) Chancellor, Gerhard Schroeder, received immense media coverage. Despite the fact that the outcome of these elections could have \"radically alter[ed] the face of Europe\" By comparing press treatment of the pre-election period to the examination that might be carried out by a political analyst, one realizes that our media may need to make significant improvements in its coverage of political issues. Horsley, William, \"What German Poll Means for Europe\". BBC News. Internet. 08 / 09 / 2005. Accessed on 05 / 10 / 2005. Accessed at: Throughout the German campaign, one of the focal points of European newspapers has been to contrast the CDU and SPD candidates on a personal basis. In certain publications, such as the Spanish 'El mundo', this comparison is limited to describing Schroeder as \"combative\", by juxtaposition to a Merkel \"with tears in her eyes\" at one of her rallies However, 'The Guardian' goes further, labeling Merkel as \"dull [and] uninspiring\" and depicting her rival as \"the charming and roguish Schroeder\" Similarly, the BBC claims that one of Merkel's key challenges will be, \"struggl[ing] to match the famous charisma of Mr Schroeder\" This narrow focus on the two politicians' personalities, while helping journalists to better interest and manipulate a politically unaware audience, poorly informs readers; nevertheless, it dominates pre-election articles in major European publications. EFE, \"Un combativo Gerhard Schr campa El mundo. Internet. 16 / 09 / 2005. Accessed on 14 / 10 / 2005. Accessed at: The Observer, \"Germany in flux as 48pc can't decide how to vote\". Internet. 28 / 08 / 2005. Guardian Unlimited. 
Accessed on 08 / 09 / 2005. Accessed at Murphy, Clare, \"Merkel's Bumpy Road to Polls\". BBC News. Internet. 16 / 09 / 2005. Accessed on 05 / 10 / 2005. Accessed at: From a political analyst's perspective, this style of coverage does have some validity: if one accepts the individual level of analysis, whereby individuals are key political actors in international relations, the personality of a leader may be seen to have some bearing on his or her decision-making. Indeed, the processual assumption states that politics exist in all spheres of life and may often occur on a personal basis. Trying to target the EU public by appealing to readers' psychology also illustrates a voluntarist approach to political thought: individuals are more than self-interested rational actors, and they have volition to take decisions based on factors other than a straightforward ordering of preferences. Despite this theoretical backing, however, the individual level of analysis is rather restricted. In order to arrive at a more conclusive study, a political analyst may seek to broaden the scope of his or her investigation. Axford, Barrie; Browning, Gary K.; Huggins, Richard; and Rosamond, Ben, \"Chapter 1: Individuals: is Politics Really about People?\". 2 Leftwitch, Adrian, \"Thinking Politically: On the Politics of Politics\". What is Politics? . 2 Indeed, much as they make individual scrutiny a main point", "label": 0 }, { "main_document": "a website is safe and secure. The extent of credit card fraud is difficult to quantify, partly because companies don't like to reveal the exact amount of money lost on their online stores as this might lead to lack of confidence in the online buyers in doing future shopping on their websites and partly because the figures changes over time. Various estimates have been given. 
Total losses through credit card fraud in the United Kingdom have been growing rapidly over the last 4 years [1997, The amount of money lost for 2004 has been reported as Fraud These include elaborate graphics, fluorescent fibres, multitone pictures, watermarks, laminated metal strips and holographs on banknotes, personal identification numbers for bankcards, Internet security systems for credit card transactions, and passwords on computer systems and telephone bank accounts. Unfortunately, none of these methods are foolproof and, in general, a balance has to be reached between cost and inconvenience (e.g., to a customer) on the one side, and effectiveness on the other. One of the commonest reasons for all the losses is weak fraud prevention methods, partly blamed on the customers. Many customers are not very confident in progressing through the steps needed to carry out transactions online. Until their card has been misused, they are unaware of the stronger fraud prevention techniques which should be adopted. The most basic form of fraud prevention is electronic authorisation. This process verifies that the credit card being used to purchase goods is valid and has sufficient monetary funds attached to it. That said, there are significant limitations connected with this process, principally that it provides no assurance that the person using the card has authority to make the purchase. Technology is being rapidly developed to ensure retailers have added protection. For credit card transactions conducted in person, a system involving microchips and personal identification numbers (PINs) has now been adopted in the United Kingdom. This utilizes a microchip attached to bank cards to hold data securely, and the inputting of a PIN as opposed to a signature at the point of sale (APACS). This procedure has required traders to install special terminals to implement the chip and PIN system. 
The occurrence of over-the-counter credit card fraud should be substantially reduced with this process. Credit card transactions conducted over the internet, are however still prone to problems of card and cardholder authentication. Chip and PIN cards should have the potential at some point in the future to utilise more secure transaction technology by using readers and PIN pads attached to computers. At this early stage, however, their application is very limited. In the UK and USA, two extra online fraud prevention strategies have been developed in conjunction with financial institutions and card issuers: The AVS uses information obtained through the cardholder's issuing financial institution. It ensures that the billing address provided by the customer crosschecks. This process is not totally accurate. For example, there could be instances where the customer has moved and changed address but hasn't updated the details with the financial institution. This has the potential to cause problems", "label": 0 }, { "main_document": "situation is illustrated below. The discharge was recorded using a Venturi Manometer. When measuring, the most extreme measurements of delivery pressure were taken first of all in order to determine in which range measurements needed to be taken. It was decided within the group to take about 10 measurements within each range. For 2000 rpm the most extreme delivery pressures were 0.49 bar and 0.13 bar, so it was decided to take measurements every 0.05 bar from 0.45 bar to 0.15 bar. For 3000 rpm the most extreme delivery pressures were 1.30 bar and 0.40 bar, so it was decided to take measurements every 0.10 bar from 1.30 bar to 0.40 bar. When recording the force and the discharge, taking measurements at a level perpendicular to the reading reduced systematic errors in the form of parallax errors. 
Centrifugal pump definition from ' Performance curves are used to show users how pumps perform at different fluid output levels, and are often used as points of reference for customers, being provided by pump manufacturers. They normally show total head, efficiency and input power against pump discharge, where input power is sometimes labelled as brake horsepower. Total Head definition from ' Manometer definition from 'Fluid Mechanics' by Streeter, Wylie and Bedford: The following are a sample of the computations that are used in the spreadsheet in order to obtain the values that we wish to analyse. The results and calculations on the results are shown in Appendix 1. Two sets of graphs have been plotted. The first two graphs are of the 'Pump Performance Characteristics'. These have been plotted on two different graphs for speeds of 2000 rpm and 3000 rpm, so as to avoid any confusion over which results relate to which speeds. The third graph illustrates the 'Non-Dimensional Results' as calculated in the table. Both speeds have been illustrated on the same graph, due to the fact that there are 4 sets of points, compared to 6 in the performance characteristics graphs. Therefore, there is less chance of there being any confusion. These three graphs are shown in Appendices 2, 3, and 4 respectively. When the pump is running at 2000 rpm the performance characteristics, as displayed by Graph 1, are as follows. Efficiency is a parabolic curve in which the maximum value of around 23.5% efficiency occurs at 0.9 litres sec-1 flow rate. Total head decreases at a constant rate as discharge increases, until discharge reaches 0.9 litres sec-1, at which point the total head starts decreasing at an increasing rate. Input power increases at a constant rate almost throughout. When the pump is running at 3000 rpm the performance characteristics, as displayed by Graph 2, are as follows. 
Efficiency is a parabolic curve in which the maximum value of around 31.5% efficiency occurs at 1.2 litres sec-1 flow rate. Total head decreases at an increasing rate. However, it is fairly linear between 0 litres sec-1 and 0.9 litres sec-1 flow rate. Input power increases at a constant rate almost throughout. It is now possible to compare performance characteristics for when the pump", "label": 1 }, { "main_document": "acute toxicity. Under normal operation, the enzyme, with a serine residue at its active site, becomes acetylated through transesterification with acetylcholine. This acetylated-enzyme activated complex is readily hydrolysable; however, when phosphoesters come into contact, a different transesterification process takes place - the phospho group is transferred to the serine derivative. This phosphorylated serine possesses a strong bond which causes an effectively irreversible binding known as an enzyme blockade, and so the neurotransmission stops. There are three main variations of the basic organophosphate motif: The different types are differentiated by which esterase they inhibit; for example, Type A organophosphates are broad-spectrum esterase inhibitors that are capable of affecting Type A non-specific esterases. Type B inhibit carboxylesterases, which are predominantly found in muscles, and Type C primarily inhibit acetyl esterases. Many parallels can be drawn between organophosphate insecticides and war-chemicals - the most common structural difference being simply the inclusion of a strongly electron-withdrawing substituent bonded to the phosphorus nucleus, which presumably strengthens the serine-agent ester bond through electron induction, thus making them more efficacious killers. In 1943, the German Gerhard Schrader became the first chemist to produce a fully synthetic organophosphate pesticide. He called this chemical bladan (ethion). Later that year he also prepared parathion, codenamed E-605. 
This O,O-dialkyl thionophospho-ester was produced by condensation of an alcohol and thionophosphoric chloride, which was itself made by reaction of phosphoric trichloride with elemental sulphur at 190 When working on amido-phosphoesters in 1936, Dr Schrader and his team became very ill after attaining the substitution of (dimethylamino)phosphoric acid with sodium cyanide and ethanol. After three weeks' recovery, the evidence was verified and Dr Schrader made his results available to the national The potential use of these compounds as chemical war agents was immediately recognized, and the first mass produced for war-time use was named Tabun. In the years to come he and his team synthesized a variety of further nerve agents including Sarin and Soman (Latin, The original pesticide synthesis route was advanced by Fletcher et al. This new method made use of phosphorus pentasulphide (or pentoxide) as a source of O,O-dialkyl thionophosphoric thiol ester, which is comparable to carboxylic anhydrides. This was substituted with elemental chlorine to produce the activated acid chloride (comparable to carboxyl acid chlorides). This had the advantage over the original German method in that the starting material was already in its maximum oxidation state and the reaction proceeds at room temperature as opposed to 190 This method can be used to make a huge array of other Type B and C organophosphates. The pentoxide can also be used in place of the pentasulphide, affording Type A pesticides. It has been found that the activity of biodegradation is massively increased in soils that have been exposed to repeated applications of OP pesticides. This represents the adaptation of soil-borne bacteria and other prokaryota to be metabolically capable of OP degradation. Biodegradation is thought to be at least 10 times faster than environmental hydrolysis and so, in repeatedly-exposed areas, is the main route of degradation. 
The general hydrolytic degradation mechanism is sensitive to pH", "label": 1 }, { "main_document": "locations, to engage with law in a globalising world, informed by a concern for social justice and human development. The concept of 'law' is generally presented and understood as apolitical, benign and relatively unchanging. However, various commentators have highlighted the narrowness of such an interpretation, through historical explorations of legal and political processes. Research has indicated, among other things: the imperialism of international law, as demonstrated through the politics of recognition which created an ongoing project of colonisation and an international law constituted by unequal violence; Enquiry into the origins of the present legal order, its driving agenda and the identity of its advocates demonstrate a dominant political ideology manifested in textual, institutional and other realms. China Mi See also, Antony Anghie, 'Time Present and Time Past: Globalization, International Financial Institutions, and the Third World' (1999-2000) 32 Balakrishnan Rajagopal, B.S. Chimni, 'International Institutions Today: An Imperial Global State in the Making' (2004) 15 Further, a central element to the dominant understanding of law is the assumed coinciding of social and state boundaries, making the state the defining feature of the world legal order, and consequently limiting the application of law to a national/international framework. 
Given that many of the issues key to global justice concerns (from human rights discourses to cross-border economic relations, from global security policies to environmental consciousness, from globally connected financial and regulatory bodies to transnational anti-systemic movements) are now influenced less and less by hegemonic discourses and control mechanisms of statist law, it appears that state sovereignty stands challenged as the foundational principle of legal imagination, as it proves difficult if not impossible to attempt to challenge transnational issues using a state-centred discourse. Beck, above no 3, 45. See, for example: Chamsy el-Ojeili and Patrick Hayden, Andreas Fischer-Lescano and Gunther Teubner, 'Regime-Collisions: The Vain Search for Legal Unity in the Fragmentation of Global Law' (2003-2004) 25 Once the concept of law is recognised as a political (rather than benign and unchanging) concept not limited solely to the national/international duality, it becomes possible to embrace and deploy the legal discourse for particular purposes. Whilst at the present time the law predominantly manifests, through institutions and process, in line with the interests of the hegemonic globalisation(s), In addition to resistance and activism If 'law' itself is a malleable concept and can be viewed as a \"discursive terrain\", Jayan Nayar, 'Taking Empire Seriously: Empire's Law, Peoples' Law and the World Tribunal on Iraq' in Amy Bartholomew (ed), For a comprehensive discussion and examples of this approach, namely the subaltern cosmopolitan legality school of thought, see: Boaventura de Sousa Santos and C Ratna Kapur, 'Revisioning the Role of Law in Women's Human Rights Struggles' in Saladin Meckled-Garcia and Basack Cali (eds), Various analytical frameworks and theories have sought to advance global justice through such resistance, each with contributions and limitations. 
The dominant theories include: the collaborative governance school, the global hegemony school, the subaltern cosmopolitan legality school, and Hardt and Negri's 'multitude' conception. In relation to the first two schools, whilst they provide useful descriptions of global legal structures, arguably they conceal counter-hegemonic", "label": 1 }, { "main_document": "There are always a few limitations in the evaluation done by an expert and the evaluation by a general user. An expert user requires a wide range of facilities and finer, more precise tools than a less knowledgeable person. The evaluation of a system depends upon many factors such as memory load, time, and accuracy. The expert user can tolerate a much higher memory load with apparently less prompting, which might not be the case with a general user. Indeed, evidence indicates that a system design that works well for the inexperienced user can be slow and crude for the expert. Hence the evaluation done by an expert is limited because of the different levels of understanding and requirements in comparison with a general evaluation. The programmer provides an impressive The programmer model doesn't hold extra buttons. The system is The The model shows good An effective interaction design is one in which a user carries out his work with The existing programmer model holds weaknesses, which might lead to inaccuracy and more time spent by the user. The following weaknesses are encountered in the model: The user of The existing programmer model doesn't have any The The central heating system programmer model doesn't hold any The quality of an interaction design is determined by the combination of three factors: Time, Accuracy and Pleasure. The newly designed Central Heating Programmer takes these factors into consideration in the re-design, which will help to improve the user's learning about the device. Using a Temperature Knob gives the user an easy and quick idea of its use. 
This reduces the time the user might take in understanding things with which he might not be very familiar; using things or devices with which the user is familiar reduces the time it takes to learn how the device works. The use of LED buttons for the different settings makes it easy for the user, as the flashing of the LED gives the user an indication of his progress, and placing the panels close to each other because of their link makes it easier for the user to handle and reduces the short-term memory load. The use of signs around the buttons reduces the perceptual learning time, which is the time it takes for the user to learn the pattern to be used as the signature for the elementary figure. The sound effect produced in the model when the user makes the changes and presses the run programme button gives the user an indication of eventual success. The new programmer design tries to balance out both the long-term and short-term memory load. The use of patterns, techniques and prompting of the action is useful in reducing the long-term memory load. The light buttons provided above the heater and water settings flash and prompt the user when the user presses the change setting button on the left panel of buttons. The use of a flashing LED button for Run Programme reduces the long-term load, as the user does not really have to remember much information because the LED keeps flashing until the user", "label": 0 }, { "main_document": "the functional and procedural means to transfer data between network units and to detect and possibly correct errors that may occur in the Physical layer. The standardised packets of the network layer are separated into MAC frames. The Physical layer defines all the electrical and physical specifications for devices. This includes the layout of pins, voltages, and cable specifications. The major functions and services performed by the physical layer are: The user launches his web browser, which registers a port with the operating system. 
When he types in the address of the internet shop and confirms it by pressing "enter", a 3-way handshake starts: At this point, both the client and server have received an acknowledgement of the connection and the connection is established. The user's browser shows the main page of the e-shop (index.html). The user chooses the products to buy by browsing the page and adding them to the basket. Finally he decides to make a payment by debit card. There is nothing in the HTTP protocol to guarantee any data confidentiality, authenticity or integrity. As soon as the user types his debit card details and confirms them with the "enter" key, the data is sent via the Transmission Control Protocol (TCP). The Application layer attaches an application header (AH). TCP, as a connection-oriented protocol, establishes a connection between the e-shop's web server and the user's browser to form "a live tunnel". The debit card details are broken into manageable sizes called TCP segments. The size of the segment is determined by how much the receiver (web server) can and wants to accept. Data is exchanged after the TCP segment is confirmed. When the protocol is agreed, the first TCP segment is encapsulated with a TCP header and passed down to the network layer. Encapsulated TCP segments are encapsulated with an IP header, and become IP datagrams. The IP header includes source and destination IP addresses to help with routing the datagrams to the intended destination (the e-shop web server). Once the encapsulation is complete the IP datagrams are passed down to the data link layer. When IP datagrams reach the data link layer they are encapsulated by the NIC with the header and become Ethernet frames. The Ethernet frames then have to be passed down to the Physical layer. At this final stage Ethernet frames are divided into bits and sent onto the wire for further routing to the web server, where all the data processing is reversed, starting from the Physical layer and ending at the Application layer. ( Wired Equivalent Privacy - WEP is part of the IEEE 802.11 standard. 
WEP uses the stream cipher RC4 for confidentiality and the CRC-32 checksum for integrity. It can typically be configured in 4 possible modes: By default, most wireless devices have WEP turned off. Most public wireless LAN access points (e.g., airports, hotels, etc.) do not enable WEP. If the Access Point does not enable WEP, the wireless clients cannot use WEP encryption. In some base stations, it is optional whether the encryption is enforced. WEP encryption may be turned on, but if it is not enforced, a client with the proper SSID but without encryption can still access that base", "label": 0 }, { "main_document": "words used to denote the female could also imply that the female is either inferior or sexually promiscuous, for example master/mistress, bachelor/spinster. Our language, however, is historical, and many of these presumptions contained in the language could be out of date. This will be discussed further later in the essay. Linguists are now generally split between the dominance theory and another approach known as the difference theory. This theory is more recent and strives towards a less negative view of female language. Some researchers in the late 1980s argued that "the dominance model had become a deficit model, that is a way of interpreting the linguistic facts that represented men's language as the norm and women's language as deviant" (Coates 1998:413). The theory focuses on different conversational goals, and by doing so, does not see one as superior to the other. Research into the different theories on language and gender is very broad. As stated earlier, the two later theories have now overshadowed the deficiency theory, and many linguists do not count it as a valid and research-led approach. 
\"Lakoff seems happy to present folklinguistic material without the support of any research findings to confirm her statements\" (Coates 1993:23) Coates also suggests that Lakoff may be trying to prescribe how women This perhaps spurred research and development of the difference theory. On reading some of the literature on the deficiency approach, I agree with Coates in that today's society should look more to description rather than prescription. Lakoff's theory has not stood the test of time. Society has changed so much in the last 30 years and generally women are no longer seen as inferior. I will therefore take a focus from this point on the dominance theory and the difference theory. The dominance approach focuses on the male's dominance in society and therefore concludes that the speech developed by the male of the species has inherent supremacy. Being man-made, language is not neutral. It contains attitudes and beliefs about what it is to be male and what it is to be female. The dominance approach views language as a gendered system. Before going any further in this discussion, it has to be pointed out that power has a large influence when studying the dominance theory of language and gender. A lot of the language that is seen as typical male language correlates with that seen as powerful language, and typical female language is sometimes pointed to as powerless. Research has been done and it has been found that males are more likely to use 'single-voiced discourse' where the subject presents only their perspective. Females are more likely to use 'double-voiced discourse' where, whilst maintaining their own perspective, they take the other person's into account also. The dominance approach sees this as unassertive and therefore less powerful. Language can be used by males to assert social dominance, hence tendencies to brag, boast, heckle and threaten. 
This 'conversational dominance' is described by Jennifer Coates as "strategies which enable speakers to dominate their partners in talk" (Coates 1998:161). She highlights that since research has shown", "label": 1 }, { "main_document": "single research strategy less capable of capturing the different "slices of the social world" (Denzin 1970: 247), thus it is less "valid" than multi-strategy research? It would be a narrow definition to hold that "triangulation" can only be applied to multi-strategy research. As discussed in the other writings of Willis, data validity can also be checked by various measures (Willis 1978). In "Learning to Labour", both the working class "lads" and the school "conformists" were interviewed in order to make comparisons on their various views on school and work. Moreover, detailed interviews were conducted with the students' teachers, their parents and supervisors (after they started working). The wide range of collected data can also serve the "cross-checking" purpose, which coincides with the objective of triangulation. As acknowledged by Willis, the difference in age and social status between the researcher and the "lads" required a lengthy period to establish trust between the two parties (Willis 1975). Willis attempted to compensate by carrying out a longitudinal study which allowed him to follow his cases for two years, in order to trace the changes of the "lads" from school to work. In this sense the match of research questions and strategies in the work of Willis is no less relevant than that of Pahl. "Reality" is infinite. No matter how well the research proposals are written, none of them can neglect the practical concerns of execution. By conducting multi-strategy research which involved both quantitative and qualitative data collection and analysis, Pahl's work in "Money and Marriage" lost the longitudinal "depth" and the "width" of data diversity when compared to the work of Willis. 
Despite the absence of figures which may help to generalize and to convince, I found the conclusion drawn from "Learning to Labour" no less thought-provoking. The rebellious working class "lads" were driven to "choose" to remain in the working class by the strong institutional and social mechanisms. After all, can one still say that single-strategy research is "worse" than multi-strategy? If multi-strategy research arises from the demand to break down the dichotomy between the quantitative and qualitative "camps", drawing the boundary between "single-strategy" and "multi-strategy" research risks a return to the methodological dichotomy. On one hand, as Bryman pointed out, quantitative and qualitative research strategies are rarely given equal or nearly equal weights in the same piece of multi-strategy research (Bryman 1988). Pahl also acknowledged that the study could have been more "qualitative in style" (1989: 180), by introducing a checklist of topics instead of a structured questionnaire to give more flexible interactions during the interviews. On the other hand, a research study can hardly be purely "single-strategy". As in "Learning to Labour", comparisons were made among different groups of students along the parameters of class, ability and the "standard" of the schools (either grammar or non-selective schools). These methods of selection and comparison correspond to the practice of sampling in quantitative studies. If we accept that there is no hard-and-fast distinction between quantitative and qualitative research strategies, the same understanding should also be applied when", "label": 0 }, { "main_document": "was also no difference in the muscle HUE or saturation, determinants of the colour of the tissue. There is, however, a significant increase in fat HUE with increased maize substitution, making the appearance of the fat less yellow, and more of a creamy-white colour to the naked eye. 
The saturation of the fat differed significantly between the diets. However, it is not possible to judge any correlation between the saturation and changes of diet, because the values fluctuate as the amount of maize silage increases in the diet (Figure 2). Muscle pH is also shown in Table 7. The results show that there is no significant change in the pH value of the meat, and between the diets the values stay within 0.03 pH. Replacing grass with maize silage increased the dry matter intake and led to increases in metabolisable energy intake, daily liveweight gain, and thus carcass gain. This can be attributed to the nutrient composition of maize silage. The reduction of neutral detergent fibre, and increase in starch content, in maize silage leads to an increase in dry matter intake, providing more energy for digestion and therefore growth, all in a shorter period than a grass silage diet. The energy that is provided to the steers is used for lean tissue growth first, with the excess being stored in the fat depots. Forrest and Vanderstoep (1985) found that all carcasses from steers fed a maize diet received the highest carcass classifications, whereas only 80% of those fed on grass-based silage received the same standard. This supports the results given of a higher carcass weight, and killing out percentage, producing a higher carcass standard in maize-fed steers. The results showed no significant increases in lean tissue between the diets, indicating that the steers were provided with enough energy, with all the diets producing layers of fat. In the case of maize, which is high in energy, a larger proportion was stored in the cod depot. Despite the higher fat deposition in these steers, the whitish-cream appearance of the fat may have become more desirable to the consumer. Knight (1998) found that the carotenoid concentration in fat accounted for 60% of the variation, rather than the fat depth of the depots. 
Therefore, the low carotenoid concentrations present in maize silage may have resulted in the whiter fat. The level of marbling in the lean tissue was not measured in this experiment. Albrecht (2006) concluded that Holstein-Friesian cows had more numerous, finer deposits of intramuscular fat than other breeds of cattle that were studied. Consumers visually prefer both low and high levels of marbling in beef, depending on individual preferences (Killinger et al. 2004). Therefore, the amount of intramuscular fat that was present in the meat from this study would still be desired by a certain consumer market. The low bone growth in all the diets is due to the steers being finished off; they were therefore reaching maturity, and most bone growth would have already occurred prior to the experimental trial. Any small bone growth would have been in", "label": 1 }, { "main_document": "preference for currencies backed by anti-inflationary policies. Hence, governments are keen to adopt counter-inflationary measures to please the market and have access to credit. Yet, rather than classifying these developments as giving power to markets over states, Burnham sees these events as providing states with the 'strongest possible public justification' for maintaining downward pressure on wages to combat inflation and thereby achieve price stability. In fact, there is an enhancement of the state's power over the working class (1999, 46). These developments are also characterised by the implementation, since the 1990s, of a rules-based, depoliticised management of the state. In this process, governments need an external validation of policies and are required to be transparent and accountable to international institutions. Codes - such as fiscal, IMF and good practice codes - provide countries with benchmarks. They must also accept binding rules limiting their room for manoeuvre, such as those from the World Trade Organisation. 
A third important characteristic of the depoliticisation process is the reassignment of tasks within the government: state bodies win operational independence as long as they work under the international codes (Burnham 1999, 44). Therefore, as Burnham puts it, the international financial changes in the last decades have been taken by states as part of a broader attempt to restructure and respond to domestic crises in capitalist societies: The concept of depoliticised management of the state fits Polanyi's description of the process of disembedding the economy from social and political relations. For him, in order to achieve a free market society, various social relations must be marketised as well (Best 2003, 365). Polanyi predicted that the economic order that would emerge from the Second World War would mark the end of capitalist internationalism. He believed that governments had learnt the lesson that 'international automaticity stands in fundamental and potentially explosive contradiction to an active state domestically' (Polanyi 1944, 19-21, quoted in Ruggie 1982, 387). The Bretton Woods regime was indeed an attempt to establish a new balance between markets and states, with governments assuming much more direct responsibility for domestic social security and economic stability. For Ruggie, during this period, countries promoted an embedded liberalism: they had financial tools to benefit from the global economy, such as control of the movement of capital, and at the same time they were following policies to attend to domestic pressures, such as establishing the welfare state (Ruggie 1982, 388). After the 1980s, however, the liberalism became transparent - or disembedded - and domestic stability became increasingly subordinated to the principle of international economic liberalisation (Best 2003, 363). The IMF was a pioneer in applying these neoliberal ideas in its agreements with developing countries since the 1980s. 
The conditionalities demanding trade liberalisation, privatisation, control of the level of indebtedness and inflation-management policies forced states to prioritise these requirements to the detriment of domestic needs. Now the task of promoting these policies was transferred to the rating agencies, which are responsible for the reinforcement of the neoliberal paradigm as the only mode of organising economic life. Developing countries are expected to follow the best", "label": 0 }, { "main_document": "Council (SC) Final Report of the Commission of Experts, UN Doc S/1994/674 Ibid., p.4 Under UN SC Resolutions 827 and 955 respectively Mandated to punish individuals responsible for grave violations of international law, the ICTY and ICTR have achieved some considerable successes to that end and have contributed substantially to the development of international criminal law. The establishment of the tribunals in itself, first and foremost, demonstrates the intention and capability of the international community at large to punish war criminals, and in this sense sends an important message to government leaders worldwide that there is now the potential for their being brought to justice. Since coming into force, 108 accused have appeared in proceedings before the ICTY, including, most importantly and for the first time, an incumbent Head of State (Slobodan Milošević). The ICTR has also secured the arrest, trial and conviction of a number of high-ranking officials and ministers, including a former Prime Minister (Jean Kambanda) and Georges Rutaganda. As Theodore Meron notes: Unlike their predecessors at Nuremberg and Tokyo, their creation by the SC means the tribunals' statutes are binding upon all UN member states and call upon all states to " This origin, coming as it did from the full international community, immunises the tribunals from challenges of 'victor's justice' which had to be endured at Nuremberg and Tokyo. 
It was no longer the victors emerging from the aftermath of warfare that were indicting, prosecuting and judging war criminals, but a body of UN-appointed and elected individuals, representative of a wide range of nations. S.C. Res. 827, U.N. SCOR, 48th session, U.N. Doc. S/RES/827 (1993). The tribunals also exert primacy over national courts. This tackles the aforementioned shortcomings of national enforcement of international criminal law, which had been demonstrated so blatantly at Leipzig. The tribunals have also at last given effect to the numerous aforementioned conventions reached mid-century, through their judgements concerning war crimes and genocide. Both tribunals have gone further in developing substantive law, incorporating sexual crimes such as rape into the context of crimes against humanity. Opening a new area for tribunals of this nature, Article 4 of the ICTR statute enables the prosecution of persons who commit, or order to be committed, serious violations of Article 3 common to the Geneva Conventions and of Additional Protocol II during armed conflict. ICTY Statute, Article 9 see The ICTR delivered the first convictions of an international tribunal for genocide in the case of ICTY Statute, Articles 2-5. Ibid. Article 6. see also 174-186 (definition for purposes of war crime); 475-497 (rape as act of torture). Ibid. Article 3(a) However, whilst all of this demonstrates that significant strides have indeed been taken by the world community towards ending impunity, we must of course appreciate the substantial limitations of these tribunals. Both are only Resources (and the lack thereof) are also key to the tribunals' efficacy, and they have been a recurrent problem for both. However, the most fundamental weakness of the tribunals relates to enforcement. 
Only Tribunals were limited to prosecuting \" Unlike national courts, the international criminal tribunals", "label": 1 }, { "main_document": "created to allow the subjugation and exploitation of extra-European peoples, which ultimately led to the creation of slavery. It is Ryan who best summarizes this point of view - 'The world, after all, was discovered by Europeans, not vice versa. And that fact implied a certain ownership, if not legal then at least intellectual and psychological.\" It was perhaps Christianity which was the only common link between the two almost completely paradoxical depictions in terms of creating the idea of pagans needing conversion, whilst simultaneously producing an ingrained superiority to be upheld through violence and exploitation. Ryan, 'Assimilating New Worlds', p. 536", "label": 1 }, { "main_document": "spoilage in canned fruits, although it generally helps fruit keep its shape, colour, and flavour. The syrup can be replaced by water, the fruit's own juice, or purchased fruit juice. Likewise, salt has no effect on the natural colour and texture of canned foods, and the main reason for using salt in canning is to enhance flavour. For peas, flavour can be added by mint, mace, nutmeg, or curry powder. Curing salts, and sugars do help to inhibit the growth of Leaving the specified amount of headspace in a jar is important to assure a vacuum seal. If too little headspace is allowed the food may expand and bubble out when air is being forced out from under the lid during processing. The bubbling food may leave a deposit on the rim of the jar or the seal of the lid and prevent the jar from sealing properly. If too much headspace is allowed, the food at the top is likely to discolour. Also, the jar may not seal properly because there will not be enough processing time to drive all the air out of the jar. 
Oxygen in the headspace of the can is undesirable as it can contribute to oxidation of vitamins at high temperatures, and therefore to loss of nutritional value. Exhausting is the removal of air prior to the second sealing operation, and is important in the prevention of excess strain, for the creation of a vacuum, and for seam flow closing. Canners seam at speeds from 50 to 2000 cans per minute, so visual inspection is important to ensure correct tightness and overlap of the lids. Types of container vary, e.g. cans, bottles, jars, flexible pouches, films or trays made from plastic, metal and glass. Cans are usually tin plate, with either a soldered or welded seam, and are either two-piece or three-piece cans. Cans can, however, also be made from tin-free steel or aluminium, and they can be lacquered on the inside for protection of the food, although this is dependent on the specifications of the product. There are a number of considerations in producing the ideal can; it must be: Enamel is applied to the tin plate to prevent chemical changes from occurring in the food during the heat treatment, e.g. feathering can result from the action of acids on the tin plate. Cans with fruit enamel are used to can fruits such as red plums, because the tin would reduce the red colour of the anthocyanin pigments. Corn is canned in cans with the C enamel, to prevent the sulphur in the corn from reacting with the iron in the steel plate and forming an undesirable black compound. The diagram below illustrates the parts of a can, how the double seam is composed and faults which can occur in the double seam: The double-seamed can is produced by a two-stage process comprising a first and second operation. 
When heating is started, the vent is left open to allow the expulsion of air and to ensure the pressure it reaches is true steam pressure.", "label": 1 }, { "main_document": "against their shields yet they still appear intimidating especially from the low shot that makes them seem bigger and more numerous than they probably actually were. In contrast, the Romans remain still and appear calm and organised as they stand in their ranks ready for battle. The image that follows the stand-off is a close-up of Crowe as he picks up some earth and rubs it in his hand. This action is part of a recurring theme throughout the movie that alludes to Crowe's agricultural roots and gives him a heroic and humanising need to be one with the earth before he spills blood on it, as Jon Solomon explains, \"Gladiator from Screenplay to Screen\" by Jon Solomon (year) p14 We are then presented with a shot-reverse-shot between Crowe and his dog which Jon Solomon believes represents the she-wolf of Rome from the Romulus and Remus myth due to the fact that in the first draft of the script the dog is called \"The Wolf of Rome\". Prior to this we have seen a medium close-up of Crowe's face which is now much sterner as he says to his generals, \"At my signal, unleash hell.\" The quickening of the score as if building up into a crescendo, the increased amount of movement on screen and dramatic weight of the previous line prepare the audience for the upcoming spectacle of the Roman army moving into battle. The dog itself seems anxious about the battle like Crowe and thus there seems to be a bond between dog and man that once again endears Crowe to the audience. This is followed by various shots that emphasise the superior technology and organisation of the Roman army in comparison to the Germanians as each side prepares for battle and accordingly the music quickens and becomes louder as the armies get ready to fight preparing the audience for the immense battle about to commence. 
"Gladiator from Screenplay to Screen" by Jon Solomon p4", "label": 1 }, { "main_document": "the customer's increasing needs, and derive a manufacturing process that maximizes stock turnover and lowers costs. Based on the barriers to entry, market performance and market conduct, this is the suggestion for a new entrant. Based on the report on Minitel (2004), customers demand low-cost, high-quality vehicles, and care less about brand name. Therefore a new entrant should consider investing heavily in an efficient production system in order to minimize cost and maximize production. They should also pay attention to incorporating practical functions into their products, together with all the safety features, to suit current customer demands. To cut out the middleman cost, aim for direct sales either through the Internet or phone/fax. This will be our competitors' main disadvantage, due to the fact that they will be anxious to maintain a good relationship with their franchise dealers. We need to do what Dell did, in the automotive industry. The growth in the small-car market is strong. Small cars account for 33.9% of all new car registrations. We should focus on tapping this segment of the market because it is not only cheaper to produce small cars, but the segment is also not dominated by any of the very large firms. This means that we will not be competing head on in terms of branding. We should keep in view diesel and other technology in order not to lag behind our competitors. This means spending on research and development that will help differentiate our product. (Minitel 2004) In conclusion I would recommend acquiring Peugeot. This will give us a competitive start against the rest of the firms in the industry. This is based on the fact that the barriers to entry into the market are too high for a new startup. As the market is already saturated with more than 500 automobile companies, consumers tend to look for established brands to base their purchases on. 
To further satisfy market needs, we can instead create a sub-brand under Peugeot to cater for new product lines such as minis. This gives consumers confidence in the new product, and does not distort Peugeot's current product lineup and focus. On the other hand, Peugeot has already established itself as one of the leading car brands in the UK. They have the knowledge and technology to produce high-quality cars. They have established a good manufacturing plant in the UK and vital relationships with the distribution network. Peugeot's heavy involvement in the rally scene not only gives them market awareness, but also lets them transfer technology developed for their rally team into their products. Technology such as suspension and torsional rigidity differentiates their products from others. Peugeot as a company is also doing very well. It has one of the highest ROTA and Return on Capital Employed figures. Their profit margin is also increasing throughout the years. It is a healthy and growing company, based on the high return on shareholders' funds of 120%. Source: Peugeot.com [2004]", "label": 0 }, { "main_document": "equations. Firstly, an equation was given linking the distance the pulse has travelled as a longitudinal wave, The second equation is an arrangement of Snell's Law: Where Equation 3 can be used to calculate The third aim was to use 'sonar' properties of longitudinal ultrasound to detect, locate and size defects within an aluminium block. Detection can be achieved quite simply by knowing the velocity of a wave and the time it takes for the wave to reach the defect. Once a SAW has been generated (see 1.1.2), its velocity, Theoretical value of Finally, by measuring the energy of the wave at different depths in a block, an estimate of the wavelength of the SAW can be found (see 1.1.2). If this situation is inverted, it can be seen that knowing the wavelength of a Rayleigh wave is therefore very useful in finding surface cracks and their depth. 
The following section goes into more depth on how the aims of this experiment were achieved. In each part of the experiment, an ultrasound signal generator was connected to a PC oscilloscope (so the signal could be seen) and a transducer. For many of the experiments an additional 'receiver' transducer was connected to another channel of the oscilloscope, which picked up the pulse once it had travelled through the sample. The transducers and the samples were assembled as shown in figure 5. The delay-time facility on the oscilloscope enables the time between wave transmission and reflection to be determined. The pulse was set at the slowest repetition rate so that the subsequent transmitted pulse wasn't shown on the oscilloscope before the first reflection. All time errors in this investigation are due to the pulse having multiple peaks (fig. 6). Also, the errors were based on the accuracy to which it was possible to read from the oscilloscope, so, to minimise the error, the timebase was set to the smallest readable value. Distances in the experiment were generally measured using vernier callipers, as these are more accurate than a ruler. All distance errors were based on the callipers' accuracy, unless otherwise stated. In graphs, the errors for the gradients were all calculated in 'Origin'. Throughout the experiments the velocity = displacement/time relationship was used when velocities, lengths or depths needed to be found. Two types of transducers were used (but never together in one experiment): 5 MHz longitudinal wave transducers and wedge transducers. The former were cylindrical, with a circular base juxtaposed with the sample. The latter were as shown in figure 7. The time for 4 Repeat readings were also taken and then averaged to smooth out any anomalies. 
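The velocity = displacement/time bookkeeping described above can be sketched numerically. This is an illustrative fragment only: the helper name, the 50 mm block length and the echo times are invented example values (chosen so the result lands near the textbook longitudinal velocity for aluminium), not measurements from this experiment.

```python
# Illustrative sketch of velocity = displacement / time from back-wall
# echoes. All numbers below are made-up examples, not experimental data.

def pulse_velocity(block_length_m, echo_times_s):
    """Estimate wave velocity from successive back-wall echo arrival times.

    Each successive echo has travelled one extra round trip of the block
    (2 * block length), so velocity = round-trip distance / mean interval
    between echoes. Averaging the intervals mirrors the repeat-and-average
    approach described in the text.
    """
    round_trip = 2.0 * block_length_m
    intervals = [t2 - t1 for t1, t2 in zip(echo_times_s, echo_times_s[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return round_trip / mean_interval

# Hypothetical 50 mm aluminium block; echoes roughly 15.8 microseconds apart.
times = [15.8e-6, 31.6e-6, 47.4e-6, 63.2e-6]
v = pulse_velocity(0.05, times)  # ~6.3 km/s, near aluminium's nominal value
```

Using intervals between echoes, rather than the absolute time of the first echo, also cancels any fixed trigger delay in the oscilloscope reading.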
The length of these pulses (multiples of the block length) was measured to enable the velocity of the pulse, The metal samples were also weighed, using digital scales, and measured so that their density could be calculated, and thus the longitudinal moduli and then the Poisson's ratios could be obtained. The graphs show The actual value of Since the time measurements all seemed quite accurate, increasing by sensibly even amounts, then", "label": 1 }, { "main_document": "of Downs' median voter theorem, though vague, can be applied to the 2005 General Election and used to justify the outcome. Moreover, the deductive hypothesis formed by Downs - that in a single-member district plurality system, parties will gravitate towards the preferences of the median voter - appears to be true insofar as Labour's position on the ideological spectrum resulted in electoral victory. (1957). New York: Harper & Row. When analysing individual issues, Labour was ahead on the two most important issues. In a MORI survey The issue of Education, second in the list of most important issues, also indicated electorate trust in Labour, with 35% stating Labour had the best policies in that area, against 20% for the Conservatives and 10% for the Liberal Democrats. Only on less important issues such as Immigration and Iraq did the Conservatives and Liberal Democrats make any ground: 52% of those surveyed agreed that the Conservatives had the best immigration policy and 39% backed the Liberal Democrats' Iraq policies. MORI. On the most important issues, Labour certainly appealed to the median voter and, though the Conservatives' immigration policy appealed to many, the issue was clearly not high enough in the voters' preference list to greatly affect the result of the election. 
It is a known fact that when the economy is doing well, as it was at the time of the 2005 General Election, immigration is not a great concern; the fact that 53% of those surveyed trusted Labour's economic policy perhaps alleviated the impact of the popular Conservative immigration policy. It became apparent shortly after the party campaigns began that Labour would attempt to highlight the success of the economy under their time in government whereas the Conservative Party would advocate their immigration policy whenever possible. However, the controversial nature of the issue ensured that the Conservatives were quickly labelled \"The Nasty Party\" by their opponents and, whilst consolidating their core vote, disengaged a lot of people in the process. In contrast, Labour's economic platform was successful in that they had an 8 year history of being capable managers of the economy, something that the Conservative party lost in the ERM crisis of 1992 and are still trying to regain. Downs' median voter theorem implies that the most important factor in determining the electoral result is party policies and the degree to which they are aligned with that of the voter. However, there are other factors that affect the way people vote. When a MORI poll asked \"what it is that most attracted you to the [insert party name] party\", 31% of those surveyed stated that 'Leader Image' was the most important deciding factor and 24% 'Party Image'. Though the rest agreed that \"Issues\" was the most important factor, this evidence illustrates that Downs' theorem is limited by the extent to which the electorate vote on issues alone. Another limitation to the use of Downs' model of voting is the rise of the Liberal Democrats as a viable third party. 
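Downs' convergence logic can be illustrated with a toy calculation. This is a sketch with invented voter positions on a one-dimensional left-right scale, not election data: each voter supports the nearer of two parties, and moving a platform towards the median voter raises its vote share against a fixed rival.

```python
# Toy illustration of Downs' median voter logic. Voter positions and
# platform values are invented for the example, not empirical data.

def vote_share(own_pos, rival_pos, voters):
    """Fraction of voters strictly closer to own_pos than to rival_pos."""
    wins = sum(1 for v in voters if abs(v - own_pos) < abs(v - rival_pos))
    return wins / len(voters)

# A skewed electorate on a 0-1 left-right scale.
voters = [0.1, 0.2, 0.3, 0.4, 0.45, 0.5, 0.8, 0.9, 1.0]
median = sorted(voters)[len(voters) // 2]  # 0.45

# Against a fixed rival at 0.7, a platform at the median beats one far
# from it: the median position captures a strict majority of voters.
share_far = vote_share(0.2, 0.7, voters)
share_median = vote_share(median, 0.7, voters)
```

In this toy setup the platform at the median wins 6 of 9 voters versus 4 of 9 for the distant platform, which is the pull towards the median that the theorem formalises; the simplification, as noted above, is that voters decide on issue distance alone.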
The model is less effective in multi-party systems; by gaining 22% of the electorate (up", "label": 1 }, { "main_document": "level (Trouv The theories of Maslow and Herzberg are the most widely discussed and try to identify drivers of motivation. These concepts focus on different needs and factors, in a given order, that lead to motivation and so to satisfaction (see Appendix 8). However, both models have often been criticised as too simplistic. Maslow's framework is said to be applicable only to Western cultures (Hodgetts and Luthans 2000, McKenna and Beech 2002). Furthermore, the ranking order does not seem to be adaptable to every country, as there may be culture-specific differences (Hofstede in Hodgetts and Luthans, 2000). Concerning Herzberg's concept, the inclusion of payment within the hygiene element, rather than as a motivator, has been disputed. To sum up, opinions about the value of these two theories differ among academics. Nevertheless, both approaches can be used as a starting point to look at factors that motivate people generally (see Appendix 8). Rewarding is part of people motivation. Its purpose is to achieve set company objectives and to perform at a high level. There are different types, whose application depends on several factors like culture, size of company or strategic orientation. A distinction is made between extrinsic and intrinsic rewards. The former refers to financial benefits such as earnings, extra pay or commission. Intrinsic motivation includes job recognition, training or job satisfaction; these are non-financial (Deresky 2000, McKenna and Beech 2002). The reward system should be selected by the host country unit because of the polycentric approach to people management. Local managers know about French peculiarities better than the British parent company. 
Rewards for the management level may be extrinsic, as according to Trouv Therefore, they could include extra payment for annual revenue higher than expected, to motivate the management team and to improve the relationship between manager and company. Thus, it could be particularly beneficial in the long run and may make a contribution to the overall performance of the hotel. Motivated managers are also assumed to have a positive impact on the personnel at the operational level. Looking at the operational level, financial rewards should dominate too. Here, holiday pay, extra payment for long hours or a bonus for weekend or night shifts might be offered. The intention is to motivate personnel in order to guarantee a high-quality customer service product, which is especially important for a hospitality organisation. Most of the staff members have direct customer contact, and the hotel management needs to count on workers that are committed and motivated. This again contributes to achieving the long-term objectives of the firm and ultimately secures jobs. Development is closely linked to training and often used as a synonym for it in practice. However, management development aims more at enhancing managers' skills and capabilities in order to meet the goals of the company. Although, due to globalisation, some societies converge to some extent, cultural differences still remain (Doyle 2004). For that reason, it is important that managers develop abilities such as being aware of foreign values, behaviours, work patterns", "label": 0 }, { "main_document": "societies of vervet monkeys (Cheney and Seyfarth 1996). Taxonomy It is now considered the sole surviving member of a widespread historical radiation of Archaeological remains show Distinguishing Characteristics Male geladas have large shaggy manes of hair. 
Both sexes have a naked bright pink hourglass shape on their chest, and sexual displays such as estrus in females are signalled by bumps in the skin on the female's chest, as opposed to the general position in primates on the rear of the female. This has been shaped by the feeding habits of geladas: long periods of time spent sitting on an individual's rear while feeding have moved the necessity of sexual display to the chest of the individual. The thumb of the gelada is relatively long and more opposable than that of any other Old World primate (Fleagle 1999). Geladas also have one of the largest sexual dimorphisms of all primates. Physical Characteristics (Rowe 1996). Habitat Simien Mountains, Ethiopia. Montane grassland with no tall trees, only at altitudes of 1400-4400 m. Diet Mostly composed of grass and grass seed (Gramineae) (Strummbach 1987), but includes leaves and bulbs. Animal prey such as insects and small mammals is taken occasionally (Dunbar 1977). Life History Births are possible year round, but peak between June and July, and November and December. The female's sexual display on the chest changes from pink to scarlet 4 to 8 weeks into pregnancy (Rowe 1996). Locomotion Quadrupedal. Social Structure Multi-male multi-female. The basic reproductive unit consists of 1 male and his harem of related females (3-20) (Fleagle 1999). Large bands consist, as well as of the male and his harem, of large all-male units, the 'bachelor packs', always on the lookout for a weak harem leader to depose so as to take over his females (Attenborough 2000). These two types of units can form into bands, and several bands may temporarily form herds of up to 600 individuals. 1999). Geladas have a relatively small range for a baboon-related primate, of only 1 to 2 kilometres, reflecting both the small foraging units and sedentary feeding habits of geladas (Fleagle 1999). Behaviour Diurnal and terrestrial. 
Geladas spend the majority of their days sitting on tough ischial callosities, an adaptation to their feeding habit of plucking grass with their hands. They spend the first 2 hours of their mornings in social activity, such as grooming (Dunbar 1977). 50-60% of their activity budget is taken up by periodical feeding during the day. Infanticide has been observed in gelada populations when a harem leader was usurped by the leader of the 'bachelor pack' (Mori et al., 1997), but it is also known for the usurped male to form a coalition with the new lead male of the harem for additional protection against other bachelor packs, in exchange for the protection of the usurped male's offspring (Attenborough 2000). The "lip flip" is a fear grimace in which the upper lip is inverted to show the teeth and gums, completely covering the nostrils. Dominance or threat is expressed by the flashing of pale eyelids through the retraction of the scalp. As also studied by,", "label": 1 }, { "main_document": "In this essay I will try to assess whether it is possible, within the current Western framework, to think of a theory of natural-nonhuman rights. Does the anthropocentric dimension of our Western civilization allow us to think of natural-nonhumans as rights holders? Can the consumptive and exploitative structure of the Western model of life, which considers development as against nature, shift to a preservationist one? The emergence of the concepts of sustainable development in international law can be seen as a first step towards this end. But these notions are still focused on the well-being of the human species: sustainable development and the conservation of natural resources and wilderness for its needs are the concern of the current environmental policies. The preservation of the environment We need an environmental ethic based on a rational theory which considers ways to live in harmony with our environment. 
Is the language of rights the appropriate one, or should we shift to another one, an environmental rationale? The first section envisages the origins of the Western conception of nature and assesses its implications in the dominant discourse of human rights. The second one explores the possibilities of a nonanthropocentric approach to nature's rights. The third part suggests 'weak anthropocentrism' as a framework that reconciles rationality and rights of nature. Since Christianity replaced paganism 2,000 years ago, humans have only shown disrespect or indifference towards nature: nature had to be mastered. Genesis, which tells of an absolute transcendent God who made man in his image, also dictates to men how to behave with nature: master it, subdue it in order to multiply. It grants rights to humans over nature and exhorts them to exercise them by force and fear. It allows men to consume nature The fact that Adam gave names to the animals gave him power over them. It is the 'source of Western man's ill treatment of nature' Christianity is anthropocentric in the sense that God is given a human shape, inscribing divinity in human nature. Passmore (1974 : 6) Ibid. 9 In Europe, the Enlightenment, an intellectual movement that spread throughout the 18th century Now emancipated from the yoke of religious beliefs, man does not lose his god-like superiority, but it is relocated in human reason. This notion has been developed from Augustine and Descartes to Kant; and its echo reached the 19th century The world is viewed as a teleological system; it means that the whole world is moving, living towards one end, the human. The ability of humans to know the world and nature through the categories of reason means having power over it. Modern science desacralised nature when it considered it as a scientific object. The reification of nature was a necessary step in the construction of nature as a mere branch of scientific studies. 
Science and technology became the saviours of mankind, the secular religion of progress. These features of the Western civilization are present in the capitalist as well as in the communist political and economic systems. Both the capitalist and the communist systems of production and consumption are based on the", "label": 0 }, { "main_document": "the Exponential Growth Phase, the increase in cell numbers was extremely fast, which was one of the reasons that a semi-log graph was used to draw the growth curve. At this stage, individual cells actually cycle through increases and decreases in cell mass (e.g. growth, division, growth, division etc.). In this phase, the specific growth rate was determined as the increase in bacterial biomass per existing unit of biomass per unit of time, and the culture doubling time "t The result was as expected; although some points lie outside the line, this might have been caused by: 1. Mistaken readings of the spectrophotometer: as the sensitivity of the spectrophotometer was very high, and the lid of the spectrophotometer could not be closed during testing, light from the environment could be transmitted into the spectrophotometer through the glass, which in turn affected the reading. (Also, if someone's hand moved close to the spectrophotometer, the reading changed accordingly.) 2. The time at which readings were taken: this might not have been very accurate, which could also affect the reading. At about 110 minutes, the Retardation Phase began to show. This was because as the bacteria grew, the nutrients were used up, while toxins built up due to metabolic events. Both of these resulted in a reduction in the growth rate of the population. Meanwhile, some cells stopped growing before others. Finally, the Stationary Phase is reached, where cell division ceases. Though the Death Phase was not indicated in this figure, it is the final fate of the cells when all the nutrients have run out. 
The discussion and conclusion of the viable count are not included in this report.", "label": 0 }, { "main_document": "supporting evidence. Each section will conclude with a brief summary of current practice with respect to each of the elements of PRICE, as extracted from a questionnaire distributed to 500 members of the ACPSM. A summary of the questionnaire response can be seen in The guidelines are for the application of PRICE; they are based on the assumption that an accurate diagnosis has been made, and that PRICE is the appropriate intervention of choice. Individuals may decide to apply alternative modalities, as identified in Appendix C. Individual consultation and agreement may be required to ensure adherence to the guidelines. Situations in which caution should be observed have been identified in sections 1.2, 3.4 and 7.7.1. Specific contraindications are identified in individual statements relating to each of the elements of PRICE. A summary of precautions and contraindications is presented in Appendix G. Injury results in a sudden drop in the tissue's ability to withstand tensile stress. Immediately following injury there is little or no drop in tensile strength, but within the first few days, as the inflammatory process evolves, a significant loss of tensile strength is observed (Houglum, 1992). The extent of this loss is proportional to the degree of tissue damage. Protection against further tension to the area of tissue damage is essential until an accurate diagnosis of the extent of the lesion is established (Hunter, 1994). In the early days following injury (up to four to six days), tension to the site of injury may disrupt the fragile fibrin bond which forms a network of 'scaffolding' providing the link between the margins of the injured tissue (Hunter, 1994). 
Protection may vary from the initial moving of the injured athlete from the location in which further injury may occur, through provision for general support to avoid using the injured part (eg crutches), to the application of specific means of limiting movement (eg braces / splints). Various modes of protection have been advocated: Protection does not form the focus of any of the papers reviewed. However, it is mentioned as an element in the total early management of soft tissue injuries (including following surgery), but specific recommendations are not provided. However, taking into consideration the nature and timing of the healing process and the evidence of the effect of excessive early stress on healing tissue, the following recommendations are made. After a comprehensive evaluation of the disorder, the orthopaedist will make recommendations for the treatment plan. For most orthopaedic disorders and injuries there is more than one form of treatment, and the treatment plan may involve a combination or progression of several. Physical therapy. The application of heat to an affected joint may provide pain relief. Exercises maintain muscle power and improve the mobility of weight-bearing joints. Hydrotherapy may be useful. Paracetamol for pain. If not effective, try NSAIDs, which should be used on an intermittent rather than a continuous basis. Low-dose tricyclics are used for nocturnal pain. Reduce weight; a walking aid held in the contralateral hand in hip arthritis, in the ipsilateral hand in knee arthritis. Do exercises (e.g. regular", "label": 1 }, { "main_document": "Brass displayed its characteristic property of twinning under the optical microscope. The image looked pretty similar to Figure 7. We could see alternating white and yellow phases. The specimen was composed of 60% copper and 40% zinc. Twinning is the phenomenon by which two portions of a crystal share a definite, mirror-image orientation relationship. 
It can be formed due to shear forces during machining or by recrystallisation during annealing. We looked at a specimen of the X19 steel during the lab session. It had 0.35% carbon and was furnace cooled from 870 °C. It, however, wasn't annealed. It has dendrite-shaped pearlite crystals, a mixture of cementite and ferrite, in a ferritic phase. The pearlite was formed because of a slightly rapid cooling rate. It's known to be quite hard and brittle. Hydroxyapatite, or HAP, is supposedly an excellent medium for growing bone tissues in. Artificial joints use polymethylmethacrylate cement (PMMA) to secure the implant in the bones. They then coat it with HAP for the bone cells to grow around the implant. One of the properties that makes this material suitable for growing bone tissues in it is its large pore size. The average size of the pores seems to be about 50-70 microns in diameter. There also seems to be an average of 140 pores per square mm. The figure above shows the formation of bone tissues in the grain boundary of an apatite crystal. It's probably a good example of biocompatibility and bio-integration. The size of the grain boundary is less than 1 micron. The bone tissues, however, are growing inside it and integrating with the material. Cancellous bone is found in the interior of a bone structure. It's a low-density spongy bone and is normally surrounded by high-density bone on the exterior. It thus enables the flow of blood and nutrients to the entire depth of the bone to facilitate bone tissue formation. It forms in layers and is spongy in texture. Cortical bone is present at the outer shell of the bones. It comprises cylindrical osteons that are stacked vertically. It has many tiny pores of about 5 micron diameter and some larger holes in places, with diameters between 120 and 200 microns. The scarcity of pores, the dense vertical stacking, etc. make it hard and resistant to bending and torsion. 
HAP particles are naturally found in bones and are also coated or sprayed on bone implants to allow the implant to integrate with the bones. The sizes of the crystals, however, do not seem to be uniform. The size of the particles roughly ranges between 10 and 30 microns in length, with varying breadths. PE, or polyethylene, is a very important biomedical material that is used in bone implants on surfaces that bear friction, especially as a plate between the femoral and tibial components of the knee implant. A unique grade of polyethylene called ultra-high molecular weight polyethylene, or UHMWPE, is used in the knee implant. The topography of the specimen seems to be quite smooth from the image.", "label": 0 }, { "main_document": "to that found at Westward Ho!, Devon, with its weakly developed soil characteristics, massive structure and root channels with organic and silty infill (Balaam 1987). The numerous episodes of channel formation by roots and burrowing organisms may originate during the period when the site was still vegetated, before marine transgression occurred. Once the site was waterlogged by rising sea-levels, the conditions became reducing and pyrite formation occurred. At a later stage, some of this pyrite was oxidised to gypsum, but it is not possible to determine when this occurred. The only evidence for human activity at the site from this slide comes from the presence of charred plant material in higher quantities than at other parts of Goldcliff East (Yendell 2004). Although charcoal can occur naturally, elevated levels in an estuarine environment suggest some human activity, either from small fires at occupation sites for cooking or warmth, or more general burning of reeds and woodland. Such burning has also been identified at other sites, such as Star Carr (Dark 2004), and may have been intended to encourage new growth of food plants or grazing for herbivores. 
The trampling of the upper boundary of the land surface may also represent human activity, but could also result from animal trampling - footprints of humans and animals have been found at Goldcliff East Sites C and E (Scales 2003). This study has been able to draw some conclusions about the nature and origin of the sediments at Site B at Goldcliff East, and the interaction of humans with environmental change at an estuarine site. In particular: The land surface was found to be a sandy clay with soil development occurring. There is little direct evidence for human activity from this slide, but charred material suggests possible anthropogenic burning of the reed swamp. Numerous post-depositional processes have acted on the sediments, including burrowing roots from vegetation, burrowing organisms and waterlogging of the site as a result of rising sea-level in the Holocene. The study of this site using thin-section micromorphology has enabled a very detailed understanding of the sedimentary history of the site, which would be difficult to fully understand without this technique. Although analytical techniques can be used to characterise the particle size and organic content of a sediment, it is only when viewing the sediments in thin section that the relationships of the various components to each other can be properly identified. In addition, the complex post-depositional history of this sediment makes micromorphology the best approach for interpreting the sequence of events correctly. Although palaeosols have been identified at Goldcliff East, both in this section and in other areas, the exact mechanism of soil development has not yet been fully established (Yendell 2004). Further study should be undertaken to determine when and how this soil may have formed, as micromorphological techniques are not able to date these layers to a specific period. Radiocarbon dating of the organic material should assist understanding of the formation of the palaeosol. 
Additionally, the order in which the post-depositional disturbance occurred should be confirmed. There are also questions about", "label": 1 }, { "main_document": "In a one-day study at Monkey World Ape Rescue Centre, Dorset, U.K., behavioural study techniques, including focal animal sampling and focal instantaneous point sampling (Altmann 1974), were explored on individuals within captive populations of chimpanzees The study population of chimpanzees was an all-male group of 12 individuals, all with more or less traumatic captive backgrounds, and the focus group of this report, the squirrel monkeys, was a group of 7 individuals (demographic data unknown) that had been rescued from a laboratory in Holland. According to the information provided by Monkey World, this group had not been exposed to any experiments during their time at the laboratory and had never been in an outside enclosure before their arrival at Monkey World. Wild squirrel monkeys are found in the primary and secondary moist forests, riverine forests, swamps and mangroves of South America, where they live in non-territorial multimale-multifemale groups of 20-40 individuals (Rowe, 1996). Their main diet is composed of animal prey, such as frogs, snails, crabs and insects (Rowe, 1996). The species is one of the most commonly used primates in laboratory studies ( The study aim was to explore the activity budget of two different species using two different study techniques; however, only the results from the squirrel monkey study are presented below. Focal animal observation (Altmann, 1974) was conducted in the afternoon of 21 October 2005 at Monkey World Ape Rescue Centre, Dorset, U.K. The outdoor temperature was about 12-15 degrees Celsius. The enclosure contained an outdoor and an indoor facility. The indoor facility surface was approximately 9 m². It was separated into two compartments with a metal wire mesh wall. All animals had access to both compartments during the study. 
Wooden and metallic branches placed at different heights, cloth hammocks and ropes were part of the indoor enrichments. The outdoor enclosure's approximate size was 100 m². The animals had access to both the indoor and outdoor parts of the enclosure during the study. The indoor enclosure was somewhat obscured by marmoset The out-of-sight time was excluded from the analysis and subsequently added at the end of the predetermined observation period to reach the time set for the study (Lehner, 1996). The study individual was chosen at random on site and behaviours recorded according to the focal animal sampling technique during a 45-minute session. The sex and age of the study individual were unknown. All behavioural observations were noted on a pre-printed ethogram at the time of occurrence. The ethogram was derived from the modular handbook 3 - P20103, Oxford Brookes University, U.K. No description of the behavioural units is provided, though each category is quite descriptive in itself. The categories used in the ethogram were: The time of behavioural events was recorded from the clock display of a cell phone, a Nokia 3510i. Descriptive results were derived and analysed using the Microsoft Excel XP software. The study subject spent all of the study session indoors and most of that time in inactive observation (vigilant) of its surroundings (60.4%; Table 1) from an elevated place (Figure 1), a cloth hammock, in the enclosure. The", "label": 0 }, { "main_document": "fossilization. Another multi-tasking ability is needed here - ensuring a high level of motivation is maintained with personalized, interactive materials. One point of difficulty arising from my teaching context is the use of the Present Perfect (PP) tense (vs. the Past Simple). It presents a difficulty to all my students, and the obvious reason for it is the interference of the learners' L1 in their interlanguage (Swan, 2001). 
It is useful here to take a snapshot view of the place of the PP and past simple tenses in the L1 structures of my students - all examples have been evident in their performance in class: Through the inherent link of the Indo-European languages, learners from Western Europe are relatively familiar with the structure of the PP. Time relations in German, Spanish, French and Italian are similar to those in English, especially in form, e.g. their equivalents of the PP are formed with the infinitive 'to have' + past participle, in some cases 'to be' + past participle, which is also the case with Bulgarian, for instance. In all of them, however, the PP has a different usage; German uses Present and Past tenses to express PP meanings ( French and Italian use the PP form to express past meanings ( the PP in Italian refers to the recent past; Spanish uses the present tense in place of PP ( Russian, on the other hand, has no PP tense and uses the present simple instead; Arabic also lacks the PP; it is replaced by the past simple or present tense for duration (I work/working five hours now); Chinese is particularly challenging as it does not have a well-structured classification of grammar but relies on context, intonation and word order to convey meaning. This presents learners with a harder obstacle; Two common facts for speakers of all these languages are that (1) they have difficulty in establishing the relevance of the PP to the present time, and (2) they use the present simple with (Swan, 2001) Interestingly, the above findings stand as a testimony that a language and its usage are a reflection of the thinking and cultural heritage of a nation. A multi-layered coursebook is based on 'pragmatic eclecticism' (Harmer, 2001). The syllabus of An overall look at both coursebooks shows the visibly more attractive design of In both books the PP is well suited in the middle of the syllabus due to its higher complexity and medium frequency of use (Thornbury, 1999). 
Both coursebooks use standard English 'that is internationally acceptable in formal contexts' (Hewings, 2005). Relevance to the present is not often established in American English so or Although neither of the books makes reference to American usage, students should be aware that the past simple may take on the function of the PP in American English. The target audience and the acceptability of language usage should be considered when choosing the right form (Swan, 2005) - considering my students' interests, I would briefly teach the American usage of the PP. In order for learners to appreciate the 'systematic relationships that exist between form, meaning, and use' (Nunan, 1998),", "label": 0 }, { "main_document": "supporting mechanisms of trade policy are applied, in order to either discourage excessive imports or to encourage exports of price-depressing surpluses, such as import and export licences, import duties and export refunds; however, the latter do not apply to soft fruits. The horticulture sector...op.cit., p. 9. The system of support described above allows the EU producers of certain fruits and vegetables to operate in a reasonably stable environment. It is worth noting that the instruments encouraging farmer co-operation proved to be effective in the former EU-15, as they resulted in high levels of market organisation, expressed in the percentage of production marketed through producer organisations, shown in table 1. Nearly 1 350 producer organisations channelled almost 40% of all fruit and vegetable production to market in the year 2002 Their number and sizes varied widely among Member States of the former EU-15. While in the Netherlands and Belgium more than 70% of all fruit and vegetable production was marketed through producer organisations, the percentage was much lower in the three most important producing Member States: less than 30% for Italy, 40% for Spain and 50% for France. 
Analysis of the Common market organisation in fruit and vegetables, Commission staff working document, Brussels, 03.09.2004, SEC(2004) 1120, p.13. These numbers, however, are not comparable with the rate of fruit and vegetable market organisation in Poland. In mid-June 2005 there were only 38 registered producer organisations in Poland. Their share in total fruit and vegetable production did not exceed 2%. Market analyses, Fruit and vegetable market. Current state and perspectives, Institute for Agriculture and Food Economics, June 2005, p. 4. The main question that arises is: what are the reasons for this state of affairs? In Poland the concept of producer organisations appeared in the public debate in the late 90s, but entered the legislation only in 2000 However, in the common perception the created law did not encourage farmers to get together In 2004 the regulation was amended in line with the EU legislation The legal act of 15 September 2000, concerning producer groups and their associations (Official Journal No. 88). Producer groups, Polish Television (TVP), Farm News, 3 October 2005, The legal act of 19 December 2003 on the organisation of the fruit and vegetable, hops, tobacco and dried fodder markets (Official Journal No. 223, point 2221). Nevertheless, according to the survey run among 839 members of producer organisations in agriculture in the Podlasie region by the Farm Advisory Centre in Szepietowo, the most important reasons for such poor levels of market organisation are more of a social than a legal or economic character. The respondents noted that low levels of farmer co-operation lie in: lack of team spirit and mutual trust (52.5%), lack of a leader (7.5%), lack of knowledge as to the running of the enterprise (7.5%), lack of interest and finally lack of substantial financial benefits (both 7.5%). Only 25% of respondents quoted insufficient support from public and non-government institutions as a reason. 
These findings can presumably, with a low risk of over-interpretation, be generalised across the farming population in Poland", "label": 0 }, { "main_document": "Price determination has assumed an increasingly important position in contemporary literature, as it is essential to our understanding of the complex issues of unemployment and inflation. The increasing attention of policymakers to inflation targets has necessitated further research in this field. Nevertheless, there is surprisingly little literature on both single-country and cross-country price determination analysis. Still, some interest has been shown in different theories of price determination, and the two most central theories are the purchasing power theory (the law of one price) and the mark-up theory. The purchasing power theory asserts that prices are determined in the long run by world competition; thus the theory emphasises the importance of import prices. Martin advocates, through his theoretical and empirical assessment of the UK economy He shows that interactions between domestic and foreign actors cannot be neglected when it comes to theories of price formation; thus both domestic costs and import prices are important in determining the domestic price level. Accordingly, there is no clear view in the literature about which variables determine prices at the aggregate level. I aim to utilise the mark-up theory in my paper; however, papers that estimate the price mark-up equation are, rather unexpectedly, not very prevalent. We can think of two theoretical justifications for including coordination as an explanatory variable. First, as coordination between unions increases, firms realise that their competitors suffer from the same rise in wages; hence a higher mark-up is defensible. Second, higher coordination could create barriers to entry and could make higher mark-ups sustainable. 
The implications of both these theoretical justifications are that a higher degree of coordination increases the mark-up, ceteris paribus. To my knowledge, there has hitherto been no attempt to explicitly address this issue. Earlier studies have focused solely on the price curve dynamics of price determination, and accordingly assumed the wage curve to be fixed. See Dornbusch (1992) for further details. Kalecki (1943) and Kalecki (1971). Martin (1997). See for example Martin (1997) and Clements and Sensier (2003). Nunziata He estimates the determinants of labour costs in 20 OECD (Organisation for Economic Co-operation and Development) countries, and finds both a direct effect on wages, through wage pressure despite excess supply in the labour market, and an indirect effect via the matching process of unemployed individuals to available job vacancies. Furthermore, bargaining coordination, employment protection and benefit replacement ratios significantly affect labour costs and consequently wages. Nunziata finds that bargaining coordination significantly decreases wages, while employment protection and benefit replacement ratios have a significantly positive relationship to wages. Even so, it seems hard to theoretically rationalise that employment protection and benefit replacement ratios can affect prices. Nunziata (2001). Thus far the only article combining mark-up analysis with wage bargaining is an article by Sen and Dutt Nonetheless, the article offers merely a theoretical appraisal of the issue and does not present any empirical testing of the presented theory. 
The paper shows that the size of the mark-up does not merely depend on product market factors, but also on the bargaining strength of the", "label": 0 }, { "main_document": "Testing for the presence of drugs in a motorist's blood has recently become even more important, as the number of accidents involving motorists found to have illicit substances in their blood has dramatically increased. This has been shown by various surveys, such as that carried out by TRL (Transport Research Laboratory), Crowthorne, 2001, which found that from 1985 to 2000 the proportion of motorists involved in an accident whose blood contained illicit drugs had risen from 3% up to an alarming 18%. In response to this increase the government introduced Standardised Field Sobriety Tests (SFST) and Drug Recognition Examinations (DRE), which resulted in Field Impairment Testing (FIT); this is still in use today by police officers at the scene of an incident if they feel its cause may be due to the driver being impaired through the use of drugs or alcohol. The examination consists of five simple tests, the first being pupil dilation, where the officer examines the motorist's eyes for signs of constriction or dilation. The second is the Romberg test, during which the driver is asked to stand up straight, tilt their head back, close their eyes and then count to thirty. The third is a test of the driver's balance and coordination, where they have to walk nine paces in a straight line, heel to toe, firstly forwards and then backwards. The fourth is another test of the driver's balance, involving the driver having to stand on one leg, alternating between their left and right. The last test requires the driver to close their eyes and touch the end of their nose with their index finger three successive times, using both their right and left hands. 
If, after these tests, the police officer believes the motorist is impaired, a doctor is called to the scene; the motorist is then arrested and taken to the station, where a blood sample can be collected for examination. If these blood tests are conclusive, the penalties for taking drugs and driving are the same as those for drink driving. These include facing a minimum driving ban of a year and a fine of up to a specified amount. In the way of legislation within the UK, a driver is believed to be committing an offence if driving whilst unfit through the use of drink or drugs, which can lead on to the second, more serious offence of causing death by driving whilst unfit through drink or drugs (Road Traffic Act 1988). If an officer at a scene suspects either of these offences has been committed, the offending motorist's blood has to be analysed. To convict for driving whilst unfit, the presence of a drug needs to be found, but if a death occurred, all drugs present need to be established. The Forensic Science Service has developed a streamlined, semi-automated method in order to analyse large numbers of blood samples simultaneously and thus in the shortest time. Owing to their previous experience in testing motorists' blood, they now concentrate on only a small section of drugs
Robert, Barry and Roberta (1990) pointed out that the marketing mix has now evolved into seven elements based on the four Ps: product, place, price, promotion, physical evidence, participation and process. The product in services relates to the types and depth of service in a particular area. It includes the services which match the target markets, the service level, after-sales service and warranties. When the club starts to design its products and services, the first step is to identify the customers' needs, a task for which the marketing department is responsible. After an informal process or a formal marketing research study, the department arrives at a definition of the needed service. This description is usually called a performance specification and sets out the demands and requirements of the customers. In this case, marketing staff found that customers need a convenient meal after exercising, one which preserves the benefits of the workout. The product or service designers then translate this into a design specification, which indicates how the services will meet the need. As a result, the designers created offerings which combine exercise and food. The important point in this process is that the performance specification should contain enough information to ensure the design specification can produce the correct service to match customers' needs and yield a competitive advantage. The process flows as follows: Place in marketing refers to the location and distribution channels of a service. Location is arguably the most important element in the service industry, because accessibility decides whether the business succeeds. The club considered the following criteria (Robert, Barry and Roberta, 1990) before deciding to set up in the city centre of Birmingham. Meanwhile, the club avoided common mistakes in selecting a location, such as paying too much attention to land cost, since a high land cost indicates a strongly desirable area and a high customer volume. 
When making pricing decisions, marketers should consider a series of steps running from company strategy down to the price level. According to the study by Bart, Roland and Paul (1998), the steps include six aspects: The pricing decision is not isolated from the other three marketing Ps; it sits under the company's objectives and marketing strategy and is used as a tool of that strategy. The objective of pricing is not always to maximise profit. There are several other objectives which affect pricing, for instance expanding market share, being the price leader, recouping investment, or maximising long-term profit. The objective has to be flexible and practical, and the current market situation will influence it. Pricing strategy involves the way
At wavelengths of 222 nm and 208 nm we can clearly see that negative peaks occur, which indicates the existence of The fluorescence spectra are presented below (figure 5). The first peak of each curve should be ignored, since it is light from the lamp of the fluorescence machine. The second peaks are the ones studied: they represent the intensity of the fluorescent light emitted by the molecule. Taking the fluorescence intensity at a wavelength of 580 nm, and using the mean of 10 data points around this wavelength, we obtain seven fluorescence intensity values: -0.0267, 409.7130, 461.4506, 496.7539, 554.4751, 573.3974, and 740.6843. These fluorescence intensities are plotted against DNA concentration (0, 10, 20, 40, 80, 300; see figure 6). The first fluorescence intensity value (the blue curve at the bottom of figure 5) is neglected, since it is only a zero scale for the other measurements. Following the lecture notes, we can construct a table to calculate the value of L. Here, we use the fluorescence intensity at [DNA]=300. Since the first and last rows contain no useful information, we simply neglect them. However, possibly owing to experimental error, the second-to-last row did not follow the trend of the previous values, and I had to neglect it in order to obtain the Scatchard plot (figure 7), with binding constant K=-0.022. Ideally, a 1 mg/mL protein solution should have an absorbance of 1. However, the absorbance at 280 nm was 0.8728, slightly lower than 1, while the calculated protein concentration was 1.19 mg/mL, somewhat higher than 1 mg/mL. The difference may be due to the protein sample, measurement error, and the environmental conditions. The assumption underlying this method is that there is no contribution from light scattering, no other chromophore in the protein, and no other absorbing contaminant. 
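The Scatchard construction described above can be sketched numerically. This is a minimal illustration under stated assumptions, not the report's actual worksheet: the pairing of intensities with concentrations, the use of the [DNA]=0 reading as the unbound baseline and the [DNA]=300 reading as saturation, and the set of intermediate points retained for the fit (the report additionally discards an off-trend row) are all choices made here for the sketch.

```python
import numpy as np

# Fluorescence intensities at 580 nm taken from the report (arbitrary units);
# the -0.0267 zero-scale reading has already been dropped, leaving six values
# assumed to pair with the six DNA concentrations in order.
dna = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 300.0])
F = np.array([409.7130, 461.4506, 496.7539, 554.4751, 573.3974, 740.6843])

# Fraction of sites occupied, assuming the [DNA]=0 reading is the unbound
# baseline and the [DNA]=300 reading represents saturation.
theta = (F - F[0]) / (F[-1] - F[0])

# Scatchard-style coordinates, theta/[DNA] against theta, for the
# intermediate points (the first and last rows carry no information).
mask = (dna > 0) & (dna < 300)
x = theta[mask]
y = theta[mask] / dna[mask]

# Slope of the least-squares line; in a Scatchard analysis the slope
# estimates -K, so a negative slope is the expected signature of binding.
slope, intercept = np.polyfit(x, y, 1)
```

Under this convention the fitted slope corresponds to -K; whether its magnitude matches the report's figure 7 depends on exactly which rows are kept in the fit.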
If we", "label": 0 }, { "main_document": "The texts chosen for this comparative analysis have the potential to demonstrate the ways in which two different genres present with a similar topic: How should the West, notably the US and Britain, react to armament processes in so-called 'rogue states'? However, the analysis will show that the presentation of information in both examples is not as different as one might expect. It will also demonstrate how discourse analysis becomes more difficult when genres evolve and overlap. The genres covered in this essay are political speech and academic writing. However, as both these genres contain a variety of subgenres, the following discussion will concentrate on their most central characteristic features and the features specific to the selected subgenres. Text 1 is an extract from Prime Minister Tony Blair's statement opening the Iraq debate in the House of Commons (HoC), a communicative event belonging to the genre of political speech. Other communicative events of the same class, in Swales' (1990: 58) terms, are inauguration speeches, election campaign rallies etc. All of these text types share a set of communicative purposes, like persuading the audience to back or oppose a certain policy, attracting votes or establishing a feeling of sympathy and trust between author and audience. The discourse community, in this case, can generally be identified as politically aware members of the public. Some subgenres, like parliamentary debates themselves, may require more expertise from the audience due to prerequisite specialist background knowledge or their highly formalised language containing archaic expressions like \"my right honourable friend\", which make the discourse slightly less accessible for non-expert members of the discourse community. The speech at the centre of this analysis, however, is different because of its special significance as part of a very public political campaign. 
The participants of the discourse event are not only the addressor, PM Tony Blair, and the addressees present at the time and location of the event, the MPs in the HoC, but also the whole country and potentially the whole world, owing to modern media like television and the internet. This is because the function of the speech is not only to convince MPs of the necessity of military intervention in Iraq, but also to gain understanding and support from the British public and the international community for a highly divisive policy. Thus, the public is what might be called a secondary addressee. Due to these circumstances, the situation (Cook 1989: 99) of this discoursal event cannot be restricted to the locality and time of the opening speech of a scheduled debate in the HoC, the lower house of the British Parliament, on 18 March 2003. A similar complexity is evident in the physical form (Cook 1989: 99) of the text. According to Hasan (Halliday and Hasan 1989: 69), there are great structural differences between written and spoken genres. In a political speech, however, these two macro-categories of genre overlap. The large majority of political speeches are carefully written, redrafted and tailored according to purpose and audience. Yet they are delivered as spoken discourse and are therefore subject to spontaneous alterations and
Therefore Hoffmann used this evidence to argue that the role of the nation state had been underestimated, and he portrayed Western Europe as "grappling with the contradictory logics of integration and diversity." He also recognised a difference in the effectiveness of integration when high and low politics are considered separately. It is true that in the EU integration has been much easier to achieve in areas of low politics, for example health and safety, but much harder where the autonomy of national governments or issues of identity are at stake. Even if nation states are willing to make economic sacrifices for the long-term good of the EU, during economic downturns or recessions countries firmly look to their national interests as a priority. Many scholars would point to the current situation in the EU as a perfect example of Hoffmann's analysis, with countries rejecting the constitution intended to create a more united Europe in favour of keeping their national interest as the most important concern. However, intergovernmentalism, like functionalism, is argued to be important more for what it led to than for its direct usefulness, since it prompted the formation of the final grand theory of integration: liberal intergovernmentalism. Rosamond, p. 77. Andrew Moravcsik has been the main advocate of liberal intergovernmentalism and argues that there is a mixture of national and European-level interests. He believes that a national preference is put forward and that bargaining then occurs at the EU level, with the choice to adopt some EU institutions and delegate some sovereignty to supranational actors. This is perhaps the most persuasive of all the grand theories, and was the leading theory in the 1990s, as it recognises the compromises made in all policy-making decisions and appreciates the informal decision-making process which occurs. 
Moravcsik argues that \"the transaction costs of EU bargaining are low because of the long time-frame of negotiations and the innumerable possibilities for issue linkages, trade-offs and sub-bargains.\" This can be seen by the fact that so many countries have opted to join the EU as the advantages of being a member far outweigh any loss of sovereignty that will take place from time to time due to compromising over a certain issue. Cram 'Integration Theory', p. 138. Liberal intergovernmentalism is particularly useful because it focuses directly on policy making and is more concerned with trying to show what happens within the EU and not what should happen if the theory's predictions turned out to be correct, something which federalism does for example. Moravcsik also shows how, as a result of preference setting and bargaining, the environment in the EU becomes information rich as there is \"widespread knowledge about the", "label": 1 }, { "main_document": "time with intense meaning. If we take a broader scope, some forms of electronic media that used to dominate, such as radio, or still dominate, such as television, have all been demonstrated to be influential in \"disciplining\" our temporality. Scannel (1988:17, italic added) noted that in 1920-30s, \"BBC became perhaps the central agent of the national culture as the Also, Mellencamp (1990: 240) described US network television as \"disciplinary time machine\" because \"[t]he hours, days, and television seasons are seriated, scheduled, and traded in ten-second increments modeled on the modern work day - day time, prime time, late night, or weekend.\" It is these media events, and the shared temporality and meaning they create, that help create the kind of \"imagined community\" argued by Anderson (1991). 
Similarly, while the media destroy the traditional identities of many places by promoting "uniformity of landscape," they also play a crucial role in how certain places are represented, perceived, and imagined, and in effect help create or affirm the identities of those places. Harvey (1993) used the example of New York's Times Square to illustrate how the representation (as 'symbolic place') and imagination of a place can actually create a sense of place identity and have significant material consequences. In my personal experience, I consumed media to construct my imagination of London before coming here; now, I use the internet to nourish my emotional attachment to my hometown. Moreover, while Relph and Augé Crang (1998:103) clarified the meaning of place as that which "provide an anchor of shared experiences between people and community over time." In a traditional society, such a place is surely a geographically bounded one, which is "relational, historical and concerned with identity," in Augé's terms. Yet in contemporary society, while communities in the traditional sense are replaced by the "community without propinquity" (Webber, 1963) or the "personal community" of an "individual's social network" (Wellman and Gulia, 1999), such an "anchor of shared experience" is no longer necessarily a definite geographical locale, but could be some common element in the lives of the people involved in the new forms of community, for instance Starbucks, Tesco, and some of the "non-places" of Augé. As Massey (1991) pointed out, "places should no longer be seen internally homogenous, bounded area, but as 'spaces of interaction.'" This purpose is evident in discussions about CIS (company identification systems) in the business literature. 
Contrasting the arguments presented above, neither "placelessness" nor "mediated place construction," neither "timeless time" nor the "disciplinary time machine," alone captures the complicated picture of how media reshape our conception and experience of time and space; each captures only one facet. While the media blur the boundary between the private and the public, cause the "uniformity in landscape" which erodes traditional place identities, and upset the old sense of time, they are also "supplying new definitions of, and imperatives for, time and space (Ferguson, 1993:170)." In sum, the media provide a diverse set of new frames of time and space, over-layered on the old ones, with which we may construct our conceptions and experiences of time and space in diverse ways.
It is no great intellectual or practical attribute of any individual or society to confuse morality with immorality, or right with wrong (see Romans 2:15). Ultimately the state cannot control morality, as on such a view morality is constantly in a process of modification, re-interpretation and disintegration, with no firm foundation (see Isaiah 5:20). Foucault is correct that one cannot know dogmatically the intentions, the circumstances and the thoughts of an author. However, there is a difference between exhaustive and correct knowledge. The reader's inability to know the whole context does not mean they cannot know some of it. Enough evidence can lead to a correct conclusion without perfect proof. As human beings, we may not share all circumstances together, but it is not fair to conclude that we have nothing in common, since we are equally human. Foucault makes an illogical assumption: that because meaning and interpretation are open to some question, it is impossible to communicate any meaning faithfully. In effect, if Foucault's theory were true, we would not be able to read his own essay and understand it so coherently. Foucault leaves the individual as a helpless, alienated and incommunicative being. However, one does not need to look far to see communication everywhere, whether by translation, signs or systems; not perfectly, but successfully. (Some of the ideas in this paragraph have been taken from Marcus Honeysett.) In conclusion, I have argued that the Modern subject fails in the search to find order and stability in the world and is left darkly disillusioned and empty. The Postmodern subject seeks to fulfil the 'self' in a desperate quest to find meaning and identity, a search that has become helplessly subjective. 
The individual, if Foucault is right, is left to act alone in a bleak world that has become incommunicative, de-stabilised and seemingly meaningless, despite their inner need", "label": 1 }, { "main_document": "remarkably upgraded from ministerial to summit level and subsequently ambitious timetable for regional trade liberalisation which pledged complete elimination of trade barriers by 2010 (by 2020 for developing member countries) was suggested in 1994. However, this blueprint was later revealed as an nominal political proclamation The intention of the United States to avoid censure against leading 'closed' regional grouping, unlike the commitment for multilateralism, created the unique rule in managing the APEC. The APEC members did not exclusively dependent upon formal rules and stipulated provisions in implementing arrangements in trade liberalisation. Instead, consensus building and non-coercive recommendations from the peer members were emphasised as the main element of decision making. See Accesed on 29/4/2006 This voluntary co-operative mechanism was also facilitated the traditional type of social communication named the 'Asian Way' in Confucianism-based Asian countries. Liberalists dubbed this new trend as 'open regionalism'. However, no concept generated more diverse interpretation than this term within the realm of international political economy since the establishment of the GATT regime. To begin with, simply speaking, by the definition of dictionary, 'open' structure is far from the nature of 'regionalism' and 'regionalism' can hardly be described by 'open' structure. Thus, the assessment about 'open regionalism' also varies, ranging from a historically new concept to distinguish the PTAs led by the United States since the 1980s to mere rhetoric dazzling people. 
Liberalists denied the fear of a re-emergence of exclusiveness and protectionism that might be generated by competition among the three regional powers (the United States, the EU and Japan) which formed the tripolarity after the end of the Cold War (Gamble and Payne 1996, 251). Even the radical strategic trade policy of the United States was advocated on the grounds that it was an attempt to stimulate foreign competition rather than avoid it (Gamble and Payne 1996, 252). Post-Cold War moves towards regionalism were also strongly supported under the name of open regionalism. They stressed the proliferation of the resultant benefit of unilateral trade liberalisation implemented by one member (Drysdale and Garnaut 1993, 187). From the perspective of liberalism, thus, the non-discriminatory nature of open regionalism is consistent with the spirit of the GATT. Drysdale and Garnaut described the benefit of unilateral trade liberalisation as a 'prisoner's delight', in contrast with the 'prisoner's dilemma', which warns of the unfavourable outcome that the absence of co-operation among actors can provoke. On the other hand, economic realists took a different view. Likewise, the NAFTA was also perceived as one of the potential alternatives the United States was willing to choose if multilateral negotiations failed (Hurrell 1994, 266). 'Economic realism' as a theoretical strand in IPE is used here as the perspective emphasising the state actor's willingness and aims in determining international economic policy, derived from classical realism in international relations theory, for example Hans Morgenthau and E. H. Carr. See Nesadurai (2002) for details. This argument is also substantiated by the fact that even American ideologists overtly began to warn reluctant negotiators of the emergence of a trans-Pacific deal, and a resurrection of aggressive unilateralism by the United States, if the Uruguay Round failed (Bergsten and Noland
Despite the criticisms which can be made of them, the accuracy of Cohen's assessment of a very personal element of a post-modern style trip suggests that typologies can provide a useful, basic framework for the analysis of tourists. Although typologies can be useful for prediction, this is a particularly complex issue. Often there are 'push factors' which stimulate the desire for travel. While on one hand the likes of MacCannell argue that tourists desire authenticity, others (Boorstin, 1971; Finlayson, 1991, in McIntosh, 2004; Kotler et al, 1993, in Hall, 1997) suggest that tourists prefer experiences which meet their expectations, regardless of authenticity. I ended up visiting a number of World Heritage sites, suggesting that I followed a well-worn tourist trail, as Dann would say, being a "chaser of images" (1996). Yet I wasn't looking for 'repeatable and factitious' events, as Boorstin would have it. I wanted to avoid "the more or less disinterested, 'sheep-like' tourists that followed their flock from one Lonely Planet recommended site to another" (Cole). It would seem, then, that motivations are important in terms of knowing how to market a destination or activity to a particular group of people, but the distinction between 'authenticity seekers' and 'image chasers' is not so clear cut. 
Tourists \"like pilgrims searching for the authentic\" and others \"driven by childlike and hedonistic motives\" (Selwyn, 1994, in Dann, 1996: In Buenos Aires, for example, at times I was willing to accept my tourist status and all the superficial experiences which that brought, while at other times spending time with local people in their own, authentic environment was the priority [ Travelling around Argentina, access to 'back regions' (MacCannell, 1973) was not expected, but in the two cities which I began to feel were 'homes away from home' it was a satisfying privilege \"to see behind the others' mere performances, to perceive and accept the others for what they really are\" (MacCannell, 1973: Living with Argentine's in their homes gave me that opportunity and it is certainly something I would seek to gain from future travel. The search for authenticity is related at least in part to the self-image of travellers. Many people would like to think of themselves as Boorstin's traditional traveller who craved the authentic. Tourism marketers appeal to the side of our personality where we would all rather be intrepid travellers than admit that we are just another tourist, often protected from the reality of the destination and its people. 
We often remain in 'front regions' (MacCannell, 1973) and inside our 'environmental bubble' (Cohen, 1972). As we gain greater touristic understanding, travel is increasingly used to generate a specific identity (MacCannell, 1973). Teas (1988, in Dann, 1996) observes that such travellers "wish to break out of the constraining authority structures of the home, school, and workplace, and fashion their own destiny." The travel autobiography certainly supports this view (pp. 9 & 11), demonstrating that I wanted to use travel to become a certain kind of person, while also escaping from the negative elements of life back home: "I was determined to
In an experiment carried out in 2002, Opfer found that when unfamiliar entities which he called 'blobs' behaved in a goal-directed manner by moving towards an irregularly shaped dot, they were identified as living organisms and therefore animate. 'Blobs' that moved in an identical manner but with the goal dot removed failed to be identified as living; this suggests that goal-directedness is a decisive factor in deciding whether an object is animate. Opfer (2002) proposed that this is because goal-directed movements suggest that there is a reason for the object's movements and therefore that it has either biological or psychological intention. It is from evidence such as this that a second hypothesis was formed, proposing that people only believe an object to be animate when intentionality is perceived (Tremoulet & Feldman, 2006). The intentionality hypothesis, unlike the Newtonian violation hypothesis, allows for a variety of factors to affect people's perception of animacy. Whereas the Newtonian violation hypothesis relies solely on whether or not energy is conserved in making animacy judgements, the intentionality hypothesis takes into account the delay before movement, the directness of movement, the termination of the movement, bodily orientation and the context the movement occurs within (Tremoulet & Feldman, 2006). The delay before movement can affect whether an object is seen to be acting with intention: when an inanimate object encounters an element in its environment, its movement is only affected when contact is made, and this reaction occurs without delay. Therefore, when an object reacts to an element in its environment without making contact, or if there is a delay before reaction, this can increase the perceived animacy of the object. This is because the non-contact or delay implies that the object is not moving involuntarily in reaction to physical forces but intentionally, due to psychological causes (Premack & Premack, 1995). 
The directness of movement towards the goal and the point of", "label": 1 }, { "main_document": "Yield management is the process of maximising revenue by selling the allocating the right amount of capacity to the right customer at the right time (Kimes, 1989). This means setting prices according to the demand for a given period. In periods of low demand, customers can take advantage of lower prices, where as when demand is high, higher prices are charged (Ingold, McMahon-Beattie and Yeoman, 2000). However, it has been well documented that the process of yield management can be perceived to be unfair by the consumer, as they may be paying a different price to another consumer. Yield management is a practice that is fairly widely accepted in the airline industry (Kimes, 1994). Kimes believes that this is because the practice has been happening for longer in the airline industry. Customers of the hotel industry find it hard to accept the fairness of yield management, as they often cannot differentiate the products received by themselves and others who may be paying a different price (Kimes, 1994). When consumers pay for a product, they have a reference price for that product, which is the price that they perceive to be reasonable for the product they are receiving. This reference price is either gained from previous experience or what other people have paid for the same product (Wirtz and Kimes, 2003). In a hotel, a guest may compare their room rate with someone else's. When yield management is practiced, the room rates that people pay may be different, and so the reference price may be lowered for the person who paid more, and they hence perceive the rate they pay to be unfair (Wirtz and Kimes, 2003). Consumers consider a price to be fair when they feel that the price is reasonable, and that the firm is making a reasonable profit (Kahnenan, 1986). The price becomes unfair to the consumer when this balance shifts in favour of the firm. 
However, price increases can be acceptable. Consumers consider that price rises are fair when the costs increase for the firm or the market changes (Wirtz and Kimes, 2002). Much can be done by firms to increase the perceived fairness of a yield management system. The main way to make the system seem fair is to differentiate the products in the mind of the consumer (Kimes, 1994 and Wirtz and Kimes, 2003). Restrictions can be applied to the hotel rates in order to reward those that can be more flexible with their dates, by giving them more favourable rates. A study by Kimes (1994) discovered that consumers felt that some restrictions were fair, as long as sufficient reward was available for the restriction being in place. An example of such restriction would be an advance payment with no or limited refund for cancellations. However, Kimes (1994) discovered that if too many restrictions were put in place, then they would be considered to be unfair. Another way to differentiate the product is to introduce rate fences (Wirtz and Kimes, 2003). These fences could include the length of stay in a hotel, or how far in advance", "label": 1 }, { "main_document": "Personalization and mass customization: Personalization and mass customization can be used to tailor information (Chaffey, 2004). Company can track those customers who have similar interests or buying behavior by collecting users' reference and content stored in databases, and send them similar e-mail about the particular service updates or special shipments and promotions. Online communities: The key to successful community is customer-centered communication. In X's Online Delivery Community, customers can discuss their experience of using delivery system. Those unsatisfied customers who complain in the community alert the company to the problems which need prompt attention and correction. 
This platform assist the company personalize the response and resolution to individuals to minimizing their dissatisfaction and keeping their loyalty. It aims at increasing the lifetime value of the customer to the company by encouraging cross-sales, for example, this delivery company can offer certain amount of loyal customers the option of a loan or a deposit account. Direct e-mail is also useful in encouraging repeat visits by publicizing new content or promotions (Chaffey, 2004). Procurement refers to the complete action or process of obtaining items from supplier. This includes purchasing, contracting, negotiating and logistics. As Chaffey (2004) pointed out that e-procurement could achieve significant savings and other benefits which directly impact on the customer. Annually, this delivery company pays bills on indirect goods and services such as office supplies, delivery carrier supplies, vehicle parts and others. It used to rely on inefficient paper-based procurement to manage indirect purchase requisitions, ordering and fulfillment. This process was manual and labor-intensive. While e-procurement solutions will automate and streamline the whole paper-based procurement process. Its advantages are as follows (Edelstein, 2001): (1). Time saving: As the suppliers have the access to this delivery company's website, they can check the updating information, the sales record, and provide the delivery company with timely support. (2). Achieving more competitive pricing from suppliers and giving better service to employees. (3). Achieving full return on investment (ROI). (4). Reducing the spending on commodities and services such as PCs, office supplies, temporary services (Kenner, 2000). The globalization and severe competition both result in the need for companies to collaborate and improve their effectiveness and efficiency. 
E-collaboration is about sharing information within organizations, planning, coordinating, decision making, optimizing resources utilization and maximizing their advantages. The application of e-collaboration in this delivery company is the B2B (business to business) collaboration. This strategy is aiming at meeting or exceeding customer expectations (Oracle, 2001). Firstly, there are three aspects that need to be considered before E-collaboration: 1) The roles of different participants: It is important to define each participant's role in the partnership and make rules for resolving conflict during the collaborative procedure. For this small delivery company, the truck or other transport companies, petrol company, office suppliers, computer company, software and hardware company and packaging company are the main suppliers who play different roles during the whole collaborating process. 2) The understanding of the complexity of the task: Before collaboration, each participant should well understand the common task, and make an effective and feasible collaboration plan. Both the independence", "label": 0 }, { "main_document": "at this stage, its expression is restricted to P1 and P2 blastomeres, in a way similar to the localisation of the PAR-3 protein, thus it could be regulated in a similar fashion. POS-1 seems to be positively involved in the translational regulation, while SPN-4 acts negatively on apx-1 expression. GLP-1 and APX-1 have nearly reciprocal patterns of expression. Furthermore, pie-1 mutants, which can fail to specify the fate of ABp, have defects in the localisation of APX-1. At the 12-cell stage, the MS blastomere provides a signal that is necessary for determining left-right asymmetry in cell fate and the development of neighbouring ABa descendants, which form most of the anterior half of the pharynx. 
Since both P2 and MS signal to GLP-1, which has to be expressed by ABa descendants, it was thought that the signals originating from P2 and MS might be identical or at least related proteins. But while APX-1 is present at high levels in P2 at the 4-cell stage, neither embryonic nor maternal APX-1 was detected in the MS blastomere at any stage of embryogenesis. Other related proteins could not be assigned as MS signals either, and as of yet the ligand for GLP-1 in MS signalling has not been identified. There is a possibility that zygotic instead of maternal gene activity in the MS blastomere is needed for the expression or activation of a ligand for GLP-1 at this stage. A lot of research has been done in order to understand the mechanisms that establish asymmetry and specify cell fate in the early The major conclusion the processes are achieved via arises through a combination of asymmetric cell divisions and cell-cell interactions. It has been found that a whole range of maternal effect genes play a vital role in early embryogenesis but it is not clear in all cases exactly how the maternal information acts on the expression of zygotic genes and in which way these genes participate in the processes described. Further analysis of zygotic gene mutations is needed to elucidate this. Furthermore, there remain several gaps in our knowledge about how the various maternal factors interact with one another. The spatial and temporal regulation of expression of any given maternal effect gene often seems to be dependant on a number of other maternal effect gene products, so it should be interesting to shed light on the connections between regulation of different maternal genes in different stages of early embryogenesis.", "label": 0 }, { "main_document": "various volatile compounds, in particular trimethylamine (TMA), dimethylamine (DMA), ammonia and volatile acids (Wong, 1967 in Pearson, 1976). These were diluted into the trichloroacetic acid at the beginning of the practice. 
These also mean that older fish had further spoiled than fresh fish due to more protein was broken down. Compared with the reported value shown in introduction part, the This distinguishes probably due to increasing ammonia formed by breakdown of protein during the high temperature distillation. According to Pearson and Muslemuddin, this problem can be prevented by distilling the intact sample under reduced pressure at 50 When looked at the figures for water content, fresh fish contained obviously greater value than did the older fish and it showed its agreement with acceptable value. There was one thing should be bonded in mind that the samples for determining water content differed from that for determining the proportion of TVN and TMA. But nevertheless, they were used in the same equations that should result in there more or less change occurred in results. Referred to the figures for TMA in fresh fish had completely agreed with the accepted values. The reason is the more fresh fish is, the lower TMA contented. Older fish had significantly higher figure than that of fresh fish. This can explain the physical changes on older fish such as very unpleasant smell. As spoilage processing, the more TMA could be produced in fish meat, the stronger smell could be. Further more, it should be noted that before titrated the solution from older fish, there too much indicator solution was added which resulting in strong colour in solution. This could more or less influence the judgement with colour changed which resulting in the wrong reading for the ending points. This could be another reason for getting not accurate results. Over view the practice, these methods for determining TVN and TAM in fish during different process spoilage were efficient and relatively accurate. When compared with the public values, most of results shown their agreement. Older fish had worse physical conditions and higher figures in TVN and TAM than had fresh fish. 
Although the contentions of TVN were considerablely higher than usually value which due to impurity of the reaction occurs during distillation.", "label": 0 }, { "main_document": "investment, well-trained and skilled labour force, and high levels of economic and social organisation, the ability of individuals to attain success in the industry naturally rested on their possessing a substantial amount of capital in the first place. The great returns achieved by exporting progressively larger amounts of silver, following the discovery of more mining sites, granted the industry a central importance in New Spain's development. The sustained extraction of deposits from areas like Zacatecas allowed the colony to develop a lucrative economy which ensured it occupied a position of far higher importance than a simple agrarian area of the Spanish overseas empire. Macloed P.47 Brading P.2 Indeed, silver mining emerged at the centre of the internal economy's marked development, relegating the agricultural sectors to a peripheral position. The creation of mines and its organisation of labour and equipment, as well as the requirements of infrastructure, frequently meant that large, economically prosperous towns sprung up in new regions to accommodate the new demand associated with production. As the historian Enrique Semo has observed, prominent centres of the internal economy developed adjacent to the production sites. The Spaniards desire to obtain the commodity often led to the creation of mines in frontier areas, where hostile Indian groups such as the Chichimecas would plague the settlers. This trend saw the creation of mining enclaves within vast areas of unconquered terrain, which forged economic links with the existing cities of the central regions and achieved the colonisation of challenging regions. Semo P.73 Bakewell P.204 Mining's prevalence in the colony became entwined with the Spanish expansion into new regions of the country, particularly in the north. 
Mining's labour system (wages labour in towns) The high demand in Europe for precious metals such as silver meant that the Crown placed great importance on establishing a colonial system which would maximise production of the commodity. The Crown's ensured its involvement by issuing a legal code and levying a tax on silver produced, and significantly through monopolising the supply of Mercury. The quinto was often reduced to a tenth of the value that the miners were required to pay, and frequently, particularly in the century's later years, this reduction was extended to the refiners, workers and merchants involved in the industry. This favouring of the mining industry contributed to its becoming the central part of Mexico's economy on which all other activity came to be linked. Brading P.16 Williamson P.127 Brading P.17 The importance attributed to mining and its central status within the economy also affected the distribution of wealth within the country and had great implications for the restriction on growth of other industries. Though mining ensured that the colony moved away from an agrarian-based tribute system, it essentially consolidated a \"feudal-capitalist structure\" Though it would be false to claim that manufacturing did not develop in the cities during the sixteenth century, its growth was certainly impeded by the elite's tendency to prefer the goods produced in Spain and Europe, using their wealth to purchase products made in Europe. Whilst the wealth created by silver poured into the", "label": 1 }, { "main_document": "use to seasonal pastoral activities associated with deteriorating pastures and heather scrub. It is a common feature in British soils. A literature survey of C Brown earths drain more or less freely. They contain no free calcium carbonate, redistributed iron, aluminium and organic matter. Weathering causes brown colour due to formation of hydrated iron oxides. Brown earths of the upland have a low base status. 
Surface water gley soils have a water blocking C horizon, and show grey mottled areas with reddish brown iron-manganese concretions, see Figure 3 Calcareous soils retain calcium carbonate throughout the profile although leaching occurs under British precipitation conditions. Three broad categories can be distinguished: Rendzina, brown calcareous soil and gleyed calcareous soil. Rendzina soils are commonly found on chalk material and limestone. These are free draining shallow soils which consist of a dark organic A horizon which sits directly on weathering limestone, the C horizon. The brown calcareous soils are more common, they show an A horizon which leads into a brown or reddish brown calcareous B horizon. The red colour comes from ferric oxides in clay particles. The pH ranges from neutral to slightly alkaline. Gleyed calcareous soils are mostly derived from highly calcareous clay and shale. They are not freely draining, normally deep and coarse structured. The A horizon is fairly dark, the B horizon often mottled with yellow-brown and grey, and the C horizon has a grey gleyed structure with carbonate concretions. Soils originating from fens, mires and bogs are described with organic soils. Peat formation is influenced by the water source (groundwater/precipitation), temperature, the trophy status and topography of the landscape (Courtney, Curtisal. 1976), see also the chapter on Upland Britain. Most of the British lowland terrain was covered by deciduous forest prior to Neolithic times, which favoured a nutrient recycling process and resulted in little decline in base content. These soils were mainly brown earths on well-drained sites and gley soils on badly or slow drained sites (Courtney, Curtis, Trudgill 1976). The woodland clearance and burning of Neolithic and later times led to an initial increase of plant nutrients through ash fertilisation and high turnover of organic matter due to ploughing and soil aeration. 
But the burning caused as much as 70% loss of vegetation bound Nitrogen, Sulphur and Phospor into the atmosphere. This nutrient boost was used up within a couple of years. Subsequent land use practices led to nutrient losses. Nutrient export (particularly Nitrogen and Phosphor) took mainly place due to grazing combined with stable/kraal husbandry, hay making without fertilizing and crop harvesting. Leaching, the loss of water-soluble soil components due to precipitation occurred and caused decalcification and further loss of nutrients. Run-off was also happening in winter, when precipitation was high and fields lay bare. As much as 50% vertical losses of Potassium, Magnesium and Calcium can occur. The distribution of chalk parent material can be seen in Figure 4: Chalk is a freely draining material due to small fissures in the rock and initially very calcareous. The initially high proportion of calcium carbonate from chalk derived soils in", "label": 0 }, { "main_document": "help reduce the inflammation in your rectum and colon, which will improve your diarrhoea. The course of this disease is variable and it can be controlled but it can also cause relapses in most people. We would like to keep seeing you in our clinics so that we can monitor you but you will be able to go home once we have your current symptoms under control\". To explain the nature of the disease: \"Ulcerative colitis is an inflammatory bowel disease, which affects the lining of the colon and rectum causing chronic inflammation. We do not know what causes this disease. You have experienced the main symptoms of UC, with bloody diarrhoea, weight loss and loss of appetite. It is common for the attacks of UC to happen in intervals of a few months and your history of 6 months is a more chronic attack that we need to get under control. UC can also lead to anaemia, which we have found in you, which explains partly why you have been feeling very tired recently\". 
To describe the management plan: \"We have carried out camera tests and an ultrasound to look inside your bowels and have confirmed the extent of your UC. There are a number of ways in which we will manage your problems; firstly, we will prescribe you a course of steroids in order to control your diarrhoea and hope to reduce the inflammation in your bowel. Whilst you are in hospital, we will give you fluids to help re-hydrate you following your diarrhoea and you will receive a blood transfusion to try and increase your haemoglobin levels. Once we have gained control of your symptoms and you are feeling well we will consider/discuss discharging you. We will keep you on a cause of steroids when you are discharged and gradually reduce down the dosage so that you come off the steroids eventually. IF we find that the steroids become less effective at resolving a relapse or if you are frequently relapsing we will have to consider another medical therapy which suppresses your immune system. If your UC becomes unmanageable with medical therapy we will have to look into surgical resection of your bowels, but that is a long way in the future and you may never reach that point if we can get the steroids to keep you in remission from an acute attack. However, it may be necessary to remove some of your colon because the diagnosis of ulcerative colitis increases the risk of you developing a cancer in the bowel\". Mesalazine (aminosalicylate) was prescribed which is used in the treatment of mild to moderate UC and maintenance of remission. Observation whilst Following the start of treatment, it is important to monitor its effectiveness and whether It is also important to check for side effects of the treatments. This will also be a long-term observation. Monitoring the amounts of blood in the stools is also important, as a reduction in blood will indicate a reduction in inflammation in the bowel. 
Long-term observation will continue on a regular basis", "label": 1 }, { "main_document": "specifically involved with control of a specific morphological trait. The second group under review centres on the evolution of genetic control. Although genetic analysis of morphological evolution still remains relatively unexplored progress has been made recently. Gene mapping of floral symmetry, sex determination, inflorescence architecture and compound leaves provide glimpses into evolution of morphological applications. However, these need to be studied as a unit in order for the ecology of the development of morphological traits to be understood fully (Shepardal. 2002). From (Shepardal. 2002), it can be stated that homeotic genes are genetic regulators of structural development whereas MADS box genes control flower morphogenesis. Shepardal. (2002) state that establishing causal relationships between genetic molecules and morphological variants will require comparative studies of closely related species alongside molecular phylogenetic and population genetic analysis. Studies have suggested the role of certain genes in association with certain morphological characteristics in plants. For example, the transition from vegetative to reproductive stages are likely to be due to the TFL 1 and the LFY gene in Arabidopsis. Inflorescence indeterminacy seems to be controlled by the CEN/TFL 1 gene, and the compound leaf development seems to be controlled by KNOX/LFY/UFO genes. Pandey (1979) states that, as an evolutionary strategy, plants have developed steps that encourage outbreeding. This means that the pollen of one individual is unable to fertilise flowers of the same individual and therefore helps the species maintain genetic variability. Pandey (1979) suggests that the S-gene has been found to control this self-incompatibility and synthesis of the S protein by controlling transcription and translocation of the S-gene. 
This paper (Pandey et al 1979) suggests there may be some evidence that there are both intraspecific and interspecific incompatibility as functions of the same S-gene complex. This indicates that allelic specificity controls polymorphism at two levels. This (Pandey 1979) suggests interspecific incompatibility is primitive and developed in the progenitors of anemopilous gymnosperms. Intraspecific incompatibility is thought to have followed in angiosperms, and the role of this secondary specificity is thought to promote cross-fertilisationand therefore maximize genetic variability. In their paper, Perrieal. (2005) investigated two species of fern in New Zealand. These were distinct due to one character: leaf morphology. The Both AFLP DNA - fingerprinting and sequencing of chloroplast DNA With the use of an analysis of molecular variance (AMOVA), this study found there to be no absolute discontinuity, and therefore no genetic difference between the two 'species'. This paper is unclear in the conclusions drawn. It is unlikely that environmental factors can be the cause of differences between ferns due to their close geographical location. The first two papers reviewed in this section (Shepardal. 2002 and Pandeyal. 1979) were clear, concise and gave good explanation with use of appropriate figures. The Perrieal. (2005) paper, I found unclear. The three papers recognise the possible different effects of one gene on different species and that multigene effects can act to affect a single charictoristic. Irishal. 2004 states, for reasons mentioned above, This states that evolutionary development Irishal. (2004) has been used to give rise to different morphologies by means of", "label": 1 }, { "main_document": "acquired. All financial projections will be monitored and the new venture will begin operating. Despite my feasible Action Plan, it does have areas which if not tackled may change the defined path; however, these will not influence my main career goal. 
There is the possibility I may not be accepted for a job in the company stated above, or achieve the position I require, and therefore lack the management experience I plan to attain, in the defined target. This will force me to work more years so as to gain relevant knowledge. Undercapitalisation may also be a hindrance, postponing my venture opening, as any option forcing me to give up part of my venture will be considered. Although our career objectives remain the same, our choices to pursue them vary as we progress in life. I've always wanted my own business, however, when I came to Brookes I wanted a strategic management career in an international hotel company, and then to have my own business as a hobbie. Management would provide me a more comfortable, easy-to-do job, with fewer worries, no need for start-up capital, and generally an easier lifestyle. Nevertheless, I was never able to convince myself it would make me happy. Having to comply with rules and work for someone else, never (as much as I tried!!!) appealed to me. This past year showed me that unless I follow my true aspirations I will never be satisfied with life. Therefore, though recognizing it is a tough and challenging path to tread, I believe I have found my way to achieving my dreams, and just that is enough to keep me motivated.", "label": 0 }, { "main_document": "signal amplified during the filtering process. Shown below is an active filter with a non-inverting op amp and a low-pass filter: Now all that is needed is to employ this into the circuit and combine it with the Wheatstone bridge circuit. Taking the output signal from the Wheatstone bridge circuit and using it as the input signal for the active filter does this, as this is the signal that needs to be amplified. 
The combined circuit is shown below: The circuit has now been set up with strain gauges to sense the stresses in the shaft, and these mechanical outputs have been converted into electrical ones and amplified to a sufficiently high level with the noise attenuated so the signal can be read by a voltage-measuring instrument. This system now needs to be able to be read and calibrated for data presentation. To calibrate this system, we need to precisely measure R The system calibration constant C for the entire system is then given by: Where, S The strain recorded with the system is given in terms of the system calibration constant as: where, d", "label": 0 }, { "main_document": "the top of the hierarchy. Most people generally know Standard English, being the one that is taught in schools, and most people can use it when called for. However, as long as the dialect they use relates to the situation they are in, it can be understood, and that, to me, is all that matters. It is therefore to do with register and style according to circumstances. Lingua Francas work because both parties understand them. They do not need to be Standard English as long as they are understood, and since both parties have comprised them, both parties should have understanding of it. I feel therefore that the conclusion of this essay should be that there is no question of equality in language itself, more a question of equality in society. As long as there is understanding between the participants, one variety is equal to all other varieties as long as they are understood. Standard English can be used when called for, and as long as it is still taught as a basis, there can be divergence towards this when people of two different dialects conflict. 
The increase in mobility of people also means that we are more knowledgeable of other dialects, and with knowledge comes acceptance, and therefore 'equality'.", "label": 1 }, { "main_document": "The question of grammar instruction has always been central, yet controversial to ELT. The emergence of methods such as the Audiolingual, the Natural Approach, Communicative Language Teaching, and more recently, the Task-Based Learning, have challenged traditional approaches. This, in turn, has affected teachers' attitude to grammar instruction and their use of coursebooks. This essay begins with a short reflection on grammar teaching as a foundation for my approaches to instruction. It will describe the learners in my teaching environment, their needs and difficulties with the Present Perfect tense. Analysis of similarity, contrast and relevance to my classroom context will follow between The essay will conclude a flexible, eclectic mix of elements from all books would best suit my learners' needs. Details of preferred advantages to my classroom context will be added along the coursebook analysis. Grammar is concerned with how meanings are built up (Bloor, 1995). It translates linguistic components into meaningful messages of communication. Hewings, A. and M. (2005) confirm that 'understanding grammar is a part of successful learning'. They explain through knowledge of grammar learners are enabled to express themselves with sophistication, especially in written contexts and genres. These arguments briefly convey my belief of the importance of grammar teaching. My most recent teaching context has involved predominantly adolescents from diverse backgrounds, at pre-intermediate to intermediate levels. Harmer clarifies that 'one of the key issues in adolescence ... is the search for individual identity, and this search provides the key challenge for this age group' (2001, p. 39). 
I need to take in consideration challenging factors that characterize my young students, such as A priority for me is to engage my learners' attention and allow them to use their high potential for learning, creativity and commitment (Harmer, 2001) - which is why the jungle path method is suitable in this context. At times I teach monolingual classes - the benefit here is that learners can sometimes help each other in their L1, which saves time. In multilingual classes, on the other hand, the pace of learning is not smooth which slows the process and affects the group dynamics. As a teacher I have to multi-task in addressing all L1 transfers - an example is my concept-checking in both Russian and Persian during a grammar activity with Kazakh and Arab students. Another challenge in my working context is the expectations of students: I find many adolescents not motivated enough to learn English beyond the 'plateau effect'(Lightbrown and Spada, 2006). In addition, a frequent characteristic I observe in teenage students is their belief of being competent at language theory, which makes them unwilling to go through practice activities, even though they struggle with errors. Sometimes they directly ask that they do not study grammar in class. Since language is viewed as a system of communication and grammar conveys meanings (Hewings, A. & M., 2005), a focus on form will make one's speech intelligible which will facilitate the comprehension of meaning. As a result, I choose to provide them with tasks that focus on meaning as well as form, so as to prevent", "label": 0 }, { "main_document": "use another way to calculate the concentration of protein, We get a slightly lower protein concentration than the ideal one 1mg/mL, and the value calculated from the absorbance at 280 directly. The above formula is used in the case when the protein solution is not so pure, containing a significant amounts of nuclei acid ( as a few percent). 
In this case, we should also determine the absorbance at 260 nm to correct the presence of nuclei acids. Exercise answer: If we know that the molecular weight of lysozyme is 14314, n which is not so far away from the experimental value of Similar for chymotrysinogen, according to the value of its molecular weight 25670, n For near UV (240-290nm), absorbance can be used to determine aromatic side chains, which present especially in Tyrosine, Tryptophan, Phenyl alanine, and Cystine. Proteins that contain aromatic side chains can use the above formula to determine the extinction coefficient, such as lysozyme and chymotrysinogen. The absorbance and the protein concentration should be linear relation in an ideal case. However, due to measurement error and environmental influencing factors, the observed absorbance did not fall exactly on a line, which requires using regression to connect each data set in the least square error sense. By comparing the absorbance method and BCA method, the results obtained by both methods are consistent with 1mg/mL. However, the concentration determined by absorbance is a bit larger while the value from BCA is slightly smaller. CD signal reflects an average of the entire molecular population. So, CD can only determine the percentage of the With the help of the online website ( The Scatchard plots did not work so well in this case, due to the data value obtained from the fluorescence machine at DNA concentration 40 This problem caused here is probably due to the unpurified DNA sample or manipulate error. I have tried twice for this measurement, and the data used here is the value of the second time since the first time those two curves looks identical. In this laboratory, we determined the protein concentration with two methods - absorbance and BCA method, with consistent but slightly different results. 
CD spectroscopy can determine the protein secondary structure by giving the percentage of Fluorescence spectra are measured to determine the binding constant for DNA and Ru at different concentrations.
He is not even strong enough to put his own theory into practice without the push from M Alan Sheridan expands on this, stating that the book is 'an exploration of what can happen if such ideas fall into the hands of someone too weak to sustain them' He also supports my point that Michel cannot live the individualist life that M Sheridan, Alan "Introduction" in Gide, André M He represents an ideal that Michel becomes determined to live by, but cannot without sacrificing something, which ends up being Marceline, the only one who really looked out for his ethical well-being. Davies points out this idealism as the characteristic of M Perhaps again this was intentional on the part of Gide; he is a product of these ideal values and therefore cannot be seen as a real person. People like M Marceline and M However this does not make them irrelevant, because they both have huge impacts on the way Michel thinks, which is of course the main focus of the novel. Gide is not concerned with creating a cast of complex characters who we grow to know and care about, and besides, he has one character that is complex enough on his own to make up for the others.
Kripke rejects the infinite hierarchy of meta-languages proposed by Tarski in favour of a single formal object language with an infinite hierarchy of partial interpretations. The truth predicate is the only basic partially-interpreted predicate in this formal language. Additionally, grounded sentences do not necessarily possess truth values. For example, the statement 'the present king of France is bald' is neither true nor false. Sentences such as the classical Liar are ungrounded and without truth value, because they are self-referring and unverifiable. Quine's paradox, although it can be grounded, succumbs to the unavailability of classical truth predicates in Kripke's system and thus ceases to be a paradox. However, the strengthened Liar poses a problem for Kripke. The statement 'This sentence is either false or paradoxical' forces one to abandon the refuge of ambiguity and decide between choices that inevitably lead to contradiction. Both the Liar and Quine's paradox present difficulties for natural language to overcome. They are similar paradoxes because both self-referentially assert their own falsity. The main difference between the two is that between direct and indirect self-reference. Quine's paradox refers to itself indirectly, since it is composed of two sentences and is activated only when one sentence is appended to the other. The Liar, on the other hand, is a directly self-referring paradox: 'This sentence is false' refers to one and the same sentence and requires no auxiliary semantic equipment to convey its intended paradoxical nature. Quine's paradox is therefore able to dodge solutions that capitalise on the direct self-reference of the Liar. This is the significance of their contrast. However, any solution that, instead of focusing on direct self-reference, takes aim at self-reference as a whole can attack both the Liar and Quine's paradox, as shown by Tarski's Undefinability Theorem.
The advantage of the Liar becomes apparent in its strengthened variation, which surpasses the limitations of Quine's paradox under Kripke's treatment. However, despite their effectiveness, many of these proposed solutions also sacrifice the naturalness of language. Perhaps in the end paradoxes are something we have to live with; subtle yet recalcitrant gaps in the tapestry of communication.
Nevertheless, payment is only one kind of reward, and research shows a weak correlation between economic reward and job satisfaction (Spector, 1997). According to Jurgensen (1978), employees report that they value intrinsic rewards more than money. In spite of these constraints, after reviewing the previous literature, Podsakoff and Williams (1986) reported that the relationship between satisfaction and performance was stronger where rewards were linked to performance than where pay was not contingent on performance. Another key moderator of the job satisfaction - job performance relationship is job characteristics. This moderator is similar to the monetary factors, since both are concerned with rewards; the difference is that job characteristics are intrinsic while payment is extrinsic. Job elements such as task variety, responsibility, and autonomy have been shown to affect the relationship between satisfaction and performance (Locke, 1976). Hackman and Lawler (1971) found that when a worker perceived his job as loaded with those features, both his satisfaction and his performance increased. The underlying principle of job characteristics influencing satisfaction and performance is that improved positive aspects of the job lead to greater satisfaction, which enhances employees' willingness to perform better (Hackman and Oldham, 1976). Additionally, the job characteristics framework of Hackman and Oldham (1976) belongs to the process theories of motivation, and it has been widely used to support the thesis of satisfaction leading to performance. Another commonly tested moderator is self-esteem. According to Korman's (1970) self-consistency theory, individuals are more satisfied when taking actions that are consistent with their self-image. In other words, high performance will not necessarily bring satisfaction to low self-esteem individuals, because it is incompatible with their self-perceived sufficiency.
In reviewing the related literature, Judge et al
These results allowed Rutherford to conclude that each atom consisted mostly of empty space, but had a dense, positively charged central region that would not let the alpha particles pass through [ref.2]. The Danish physicist Niels Bohr then began to work with Rutherford, in Rutherford's laboratory in Manchester. There he learned about the successes and problems of Rutherford's model of the atom, and provided the fundamental ideas for what later became known as the Bohr Theory of Atomic Constitution [ref.3]. So let's take a look at the model (known as the Bohr model, but which should really be called the Rutherford-Bohr model) in a bit more detail. It is seen as a type of planetary model, as the electrons orbit the central nucleus much like the planets in the Solar System orbit the Sun. However, the electrons are not restricted to a plane, as is approximately true for the planets [ref.4]. The nucleus consists of a combination of protons and neutrons (except in hydrogen, an atom that consists of only one proton and one electron). The Bohr model and the planetary model are similar in the sense that electrons orbit the nucleus, but Bohr pictured the orbits as circles, positioned at increasing distances from
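The quantisation of energy that Bohr introduced can be made concrete with his well-known energy-level formula for hydrogen, E_n = −13.6 eV / n². A minimal sketch (the 13.6 eV value is the hydrogen ionisation energy; the function name is illustrative):

```python
RYDBERG_EV = 13.6  # ionisation energy of hydrogen, in electronvolts

def bohr_energy(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

# Energy of the photon emitted when the electron drops from orbit n=2 to n=1:
photon_energy = bohr_energy(2) - bohr_energy(1)
```

Because only integer n is allowed, only discrete photon energies can be emitted, which is exactly the quantisation the model is used to teach.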
It is evident that companies have seen the opportunities that pre-emptive Article 13 agreements grant and have used them to gain further benefit from EWCs' operation. The expenses of the EWCs, which must be met by the central management of the companies, are amongst the most frequently cited drawbacks of these bodies in the surveys carried out by Wills (1999) and Nakano (1999). In his case study analysis Cressey confirms this by noting that the management in the firm he researched felt that 'the financial costs were not insubstantial when the preparation work, elections and annual running costs were added up' (1998:75). Fears of increased expenditure due to EWCs are also included in the accounts presented by Lecher and R However, several of the same research papers (Cressey 1998, Nakano 1999) acknowledge that costs are not a considerable charge for employers when calculated per group employee and 'would be more than offset by increased labour productivity' (Nakano 1999:310). Clearly, although financial costs are an issue for managers, the benefits that EWCs bring justify the expenses, which turn out not to be that significant in the end. Management concerns over increased corporate bureaucracy and rigidity that could be caused by EWCs are expressed by respondents in several surveys - Wills (1999), Weber et al. (2000), Vitols (2003) - and included in the research analyses of Gold and Hall (1994) and Weston and Martinez Lucio (1997). At the same time, that rigidity could be overcome by what Gold and Hall describe as 'the stronger emphasis in the [EWC] Directive on bespoke, enterprise-specific arrangements (...) [which] potentially offers a considerable degree of flexibility' (1994:183). As for the other non-quantifiable burden, bureaucratic delays, the survey of Weber et al. finds that the 'process of consultation was not seen to slow down management decision making' (2000:19) (also Wills 1999).
In sum, anxieties over growing bureaucracy are present in employers' perceptions, and while these might be seen as a possible disadvantage of EWCs, such concerns are often found to be unjustified in practice. Employer respondents are found to regard EWCs as a source of unwanted employee expectations in the studies done by Wills (1999), Nakano (1999) and Weber et al. (2000). It is feared that an EWC may raise 'expectations of what this forum might achieve, particularly in terms of influencing management decisions in relation to restructuring, employment and working conditions' (Weber et al. 2000:19), at odds with the agenda of managers, who would most often use it simply to convey information on matters they are legally required to (see also Wills 1999). However, it must be noted that a survey carried
(2001) "Intrahousehold resource allocation in Cote d'Ivoire: Social Norms, Separate Accounts and Consumption Choices" NBER Working paper #10498 The authors state that it was not possible to include child mortality in the main equation because it is likely to be affected by the dependent variable, and this would cause estimation problems. The solution presented in the paper is to use an instrument to remove the inconsistencies. The authors suggest that access to drinking water is a viable instrument because it can be assumed to be highly linked with child mortality but not with fertility. I think this can be contested, as access to drinking water has a straightforward effect on the health of the population, including the mothers, and areas where access is low are also likely to be poorer areas. In that case, the inclusion of access to drinking water should affect the coefficient estimates on the regional dummies, son preference (if the mother's health is poor, more assistance is required to survive in the future), poverty and female literacy. When comparing the coefficients presented in the tables this seems to be the case, as these variables have increased in significance. This change is acknowledged in the paper, but it is explained as the consequence of controlling for child mortality. The possible interference of the drinking water variable could also be recognised, although it is not likely to cause estimation problems because of the lack of perfect multicollinearity with any of the explanatory variables. As stated in the concluding remarks, "the findings of this article consolidate earlier evidence on the connection between female education and fertility in India" (p.54). The additional value of this paper is the verified robustness of the fertility and education relationship, which is a relatively small accomplishment compared to the amount of analysis done.
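The instrumental-variable logic discussed above can be sketched as a two-stage least squares: regress the endogenous variable on the instrument, then use the fitted values in the main equation. A toy numerical example (the variable names and data are invented for illustration, not taken from the paper):

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Minimal 2SLS: z instruments the endogenous regressor x."""
    Z = np.column_stack([np.ones_like(z), z])
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)   # first stage: x on z
    x_hat = Z @ gamma                               # exogenous part of x
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # second stage: y on x_hat
    return beta                                     # [intercept, slope]

# Toy data: the instrument fully determines x, and y = 1 + 2x exactly,
# so 2SLS should recover an intercept of 1 and a slope of 2.
z = np.arange(10, dtype=float)
x = 0.5 * z
y = 1.0 + 2.0 * x
intercept, slope = two_stage_least_squares(y, x, z)
```

The validity of the estimate rests on the exclusion restriction that the instrument affects the outcome only through the endogenous regressor, which is exactly the assumption contested in the discussion above.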
The approach itself is not very innovative, but it manages to generate a conflicting result for income; additional value would have been added had this been investigated further. The significance of the
Therefore, it is evident that anthropology has contributed greatly to the understanding of material culture and, together with archaeology, can form a coherent and sound view of past exchange systems. For the past few decades archaeologists have criticized the amount of attention that the typological approach has received within archaeological study. With the development of anthropological approaches, we need to beware of pushing the line too far in the other direction, lest we face the problem of insufficient archaeological data in the future.
Add until they say that they do not want to add another product, incrementing cost and decreasing stock
Output cost of order
Ask if they want to place another order

1. Declare records and array
2. Procedure which gets all details about the products and stores them in the array
2.1 Ask for the details for 3 products
3. Procedure to determine if the user wants to place an order and to validate the answer
3.1 Ask if user wants to place an order
4. Procedure to ask which product the user wants to order
4.1 Get product name from user
5. Procedure to search the array for the product name entered by the user
5.1 Compare product in array with product name input by user
5.1.1 When they match, call procedure to display details of that product and call procedure to process an order for that product, else
5.1.2 Loop procedure to search array again
5.2 Procedure used for the case when the product is not in the array
5.2.1 Display error message - record not found
5.2.2 Ask for product name again
5.2.3 Call procedure to search the array again
6. Procedure to display details of products
6.1 Display product details
7. Procedure to get order amount for the product displayed, calculate the cost of the order, and adjust the stock level
7.1 Ask user if they want to order the product
7.2 If answer = 'N'
7.3 Make loop exit
7.4 Else
7.5 Call procedure to obtain number of products that user wants to order
7.6 Process order - calculate cost and new stock number (procedure 9)
8. Procedure to ask user how many of the product displayed they want to order and validate the answer
8.1 Ask user how many they want to order
9. Procedure to determine whether user wants to add another product to the order
9.1 Call procedure to ask for product user wants to order
9.2 Call procedure to search for item in array
9.3 Call procedure to check if item has been found in the array
9.4 Call procedure to ask whether to order again

Procedure which coordinates the other procedures when the user wants to order:
10. Call procedure to ask whether user wants to order something or not
10.1 Ask if the user wants to add another product to the order
11. Call procedure that deals with adding product to order
12. Output cost of order
13. Loop and ask if user wants to place another order

Introduce program
Call procedure to take product details from the user
Process the order if user
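The core of the outline above - searching the product array, then costing an order line and decreasing stock - can be sketched in Python (the names `Product`, `find_product` and `order_product` are illustrative, not taken from the original design):

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    stock: int

def find_product(catalogue, name):
    """Linear search of the product array (the outline's search procedure)."""
    for product in catalogue:
        if product.name == name:
            return product
    return None  # caller displays 'record not found' and asks again

def order_product(product, quantity):
    """Calculate the cost of one order line and decrease the stock level."""
    if quantity > product.stock:
        raise ValueError("not enough stock")
    product.stock -= quantity
    return product.price * quantity

# One pass of the order loop: find a product, then order two of it.
catalogue = [Product("widget", 2.50, 10),
             Product("gadget", 4.00, 5),
             Product("gizmo", 1.75, 8)]
item = find_product(catalogue, "widget")
cost = order_product(item, 2)
```

The outer loops of the outline (asking whether to add another product, accumulating the total cost, offering another order) would simply wrap these two calls.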
This consonance is echoed in l.16 and l.19: " Its effect is reinforced since it is a velar plosive, stressed in most occurrences, and also because it is the same sound that initiates the word " These cutting sounds can be interpreted as an expression of the destructive violence of that man. The nasals are also prominent in this passage; there are many /m/ and /n/ sounds in these lines, and some / It is also present in the repeated "Clea Line 19 introduces the bilabial plosive /p/ in a conspicuous manner - "Pure" is foregrounded as it is the only line where "Clean" is replaced by another adjective. It is echoed in the cluster heard in the sound pair " This echo ironically emphasizes the semantic antagonism between the adjective (idea of purity) and the noun it refers to (sense of death and blood). Toward the end of the poem, it is worth noting the shift from /p/ to /b/ alliteration: " They are both bilabial plosives, but /b/ is voiced whereas /p/ is unvoiced. Again, the effect is underpinned by the fact that the consonance is on syllable-initials and it is the first sound of the word " As we said before, the vision of blood is overwhelming in the poem. This is shown again by the presence of unusual lexis. We already noticed the morbid connotation of the words in the Y place: "gun", "slaughter-house", etc. More arresting is the use of the word "tampons", almost never used in poetry. The parallelism between l.22 and l.23 enables the reader to link "the tampons" with
As these ratios recur in the following departments, although with different figures, their explanation will not be repeated. It is crucial to regard both costs and revenues, and variances, in terms of percentages and money; analysing percentages helps pin-point the problem (Burgess, 2001). In the rooms department, there is an increase of 6.4% in sales, whilst costs have risen 2.6%. It is also important to note where, within the costs, there are high variances. There is an increase of 8.5% between actual and budgeted travel agent commissions, followed by 7.0% in guest supplies and a decline of 6.1% in linen and laundry. It is important to note that a variance which indicates a decrease may have both positive and negative effects, it being the duty of the manager to identify and justify any gaps that occur (Burgess, 2001). The advantage of the departmental P&L is that, by separating all costs and determining the marginal costs, efficient and effective measures can be taken to reduce high costs (Chin et al 1995). The factors used to evaluate food and beverage performance are quite similar to those of the rooms department, meaning they focus on volume and price. Seat turnover and the average spend per customer are the most used measures to assess F&B performance (Chin et al 1995). The increase in occupancy, and hence the number of guests in the hotel, may be a cause of the rise in F&B sales when compared to the budget; a change in menu items or more detailed menu engineering (Fattorini, 2001) - possibly increasing the inclusion of dishes with a higher contribution margin - may have the same effect, as may product-bundle pricing, where several products are combined and sold at a cheaper price (Kotler et al, 2003). Additionally, if managers want to further increase sales, and consequently revenue, it may be necessary to implement actions to raise the percentage of beverage sales, as these items are a fast way of increasing revenue (Fattorini, 2001).
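The percentage variances discussed above follow from a simple calculation; a brief sketch (the figures are illustrative, not the hotel's actual numbers):

```python
def variance_pct(actual, budget):
    """Percentage variance of an actual figure against its budget."""
    return (actual - budget) / budget * 100.0

# Illustrative figures: budgeted sales of 100 units against actuals of 106.4
# give the kind of +6.4% variance discussed in the text.
sales_variance = variance_pct(106.4, 100.0)
```

Expressing variances this way, alongside the money amounts, is what lets a manager spot disproportionate movements such as the 8.5% rise in travel agent commissions.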
It is in this department that costs are higher than in other sections, as all raw materials must be considered, and managers must try to keep costs adequate, as a large decrease in costs may have negative consequences connected to a decline in quality (Chin et al 1995). Cost of sales has also risen, when compared to budget, in almost the same proportion as the increase in sales. It is clear that when sales rise there is a greater need for raw materials, and therefore costs will increase correspondingly. In this situation, seat turnover cannot be calculated, as the number of seats available is not provided; however, it is possible to compare average spends per cover, concluding that food spend per cover has risen, while beverage
However, the main problems still persist in the south-west and western areas of the country. The amount of culling of cattle now taking place has therefore increased significantly, and the problem generally seems to be worsening. Bovine Tuberculosis is also now a significant burden on the taxpayer. In 2004/05 the cost of the disease was Expenditure on Bovine TB is rapidly increasing and is likely to continue to do so unless further measures are taken to try to prevent the continued spread of the disease. A new compensation system was introduced this month to try to balance the payments taxpayers make against the compensation farmers receive for cattle that have been infected with Bovine Tuberculosis. This is because it was thought that farmers had previously, at times, been overcompensated for the value of their animals. The following diagram shows the geographical distribution of bovine TB outbreaks in 1998 and 2004. It demonstrates that there has been a significant increase in the areas affected by the disease. Badgers have been known carriers of the M. bovis strain of TB that affects cattle for a number of years. This was after the discovery of a dead badger on a farm that had been infected with Bovine TB. However, there has been an ongoing debate about the transmission of this disease between these two hosts. In 1997 there was significant evidence reported by the Independent Scientific Review group showing that 'in Britain, badgers were a significant source of infection in cattle'. There is also evidence showing that badgers are more significant hosts of M. bovis than many other types of wildlife. The following table demonstrates the different levels of infection of Bovine TB in a variety of wildlife species: There have been a variety of methods suggested or used to try to reduce the spread of Bovine Tuberculosis.
As much of this spread is linked to badgers being one of the main hosts of the disease, the culling of badgers", "label": 1 }, { "main_document": "Five experiments were carried out to investigate the properties and uses of ultrasound waves in solids. Longitudinal waves were passed through two metal blocks to determine their longitudinal moduli, M, and Poisson's ratios, For the aluminium block, M was (101 For the mild steel block, M was (264.5 The echoes of longitudinal waves were also used to detect and size defects in an aluminium block, which proved successful as four defects were found. Shear waves were then produced from reflected longitudinal waves and were measured to have a velocity of (3500 Their angle of reflection and velocity were then tested against a version of Snell's Law, which proved inconclusive. Longitudinal waves were totally internally reflected to produce surface waves, the velocity of which was measured to be 2860ms The wavelength of a surface wave is proportional to energy, which is related to the depth of the wave, so by passing the waves through a slot of varying depths, its wavelength was found, with a value of (0.98 Sound with a frequency greater than 20kHz This experiment investigated the properties and some uses of the three types of ultrasound waves that travel in solids: longitudinal waves, shear waves and Rayleigh waves (see fig.1). Physics, Alonso and Finn, In this experiment, the wave pulses are produced by transducers, whose mechanical vibrations (causing the ultrasound) are made by the Piezoelectric effect. More about this effect, regarding transducers, can be found in reference 2. Although all three types of waves travel through solids, only longitudinal waves can travel through liquids. Therefore longitudinal pulses are the only ones that were generated by transducers in this experiment. 
Ultrasound Physics and Instrumentation, Hedrick, Hykes and Starchman. A longitudinal pulse can be converted into shear waves by means of reflection and refraction, as figure 2 shows. As Rayleigh waves travel only on the surface of a solid, they are also known as Surface Acoustic Waves, or SAWs. They are produced by setting SAWs travel with a retrograde elliptical motion (fig. 1), and their energy decreases exponentially with distance from the surface (fig. 3). This investigation comprises five experiments, each with specific aims. Bulk ultrasound waves can be used to determine properties of solids. Consider this equation for Young's Modulus: The density of a material is easy to measure, so if a longitudinal wave were passed through a material, its Young's modulus could be calculated. Notice, however, that equation 1 holds only for a 1D object, so for this experiment, where 3D solids were used, the equation gives the longitudinal modulus, M, instead of E. Poisson's ratio is defined to be "the ratio of the contraction strain normal to the applied load divided by the extension strain in the direction of the applied load" (Poisson's ratio website). This can be solved by using M from the previous part of the experiment and the theoretical value of E. The second part of the experiment investigated the conversion of longitudinal waves into shear waves (fig. 2) by testing their properties against two given
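Since the equations themselves are not reproduced above, the relations the text refers to (equation 1 and the longitudinal modulus M) can be reconstructed from standard elastic-wave theory; this reconstruction is an assumption, as the essay's own equations are not visible:

```latex
v_L = \sqrt{\frac{E}{\rho}} \quad \text{(equation 1: 1D rod)}
\qquad
M = \rho\, v_L^{2} = \frac{E\,(1-\nu)}{(1+\nu)(1-2\nu)} \quad \text{(bulk 3D solid)}
```

Here \(\rho\) is the density, \(v_L\) the longitudinal wave speed and \(\nu\) Poisson's ratio. Measuring \(v_L\) and \(\rho\) yields M directly, and with a theoretical value of E the second relation can then be solved for \(\nu\), as the text describes.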
She gives examples of German in a Hungarian conversation being used as a This is a good example of how code-switching within a conversation, without changing domain, can affect the conversation. Gal is, however, quick to note It is not. Mesthrie (2000) points out how other researchers have Marked choices may function as attempts to redefine aspects of the context, or the relationship between speakers". Mesthrie also states that The four code-switching patterns identified are briefly outlined below. Mesthrie (2000) writes that An example of this would be if two strangers meeting in London began their conversation in English, but upon learning they were both Portuguese changed language to Portuguese. This could be used to create the solidarity relationship of 'Portuguese compatriots' rather than the relationship of strangers. Mesthrie (2000) writes that In this case no meaning need be attached to any particular switch: it is the use of both languages together that is meaningful, drawing on the associations of both languages and indexing dual identities". An example of this is that in Oberwart people may code-switch between standard German and Hungarian, using German to suggest that they are intelligent and successful and then using the local Hungarian dialect to suggest that they are trustworthy and honest by emphasising their local identity. Mesthrie (2000) writes that This can clearly be seen in the use of German as a 'topper'. Here, German is a marked choice, as the rest of the conversation has been conducted in Hungarian, which is the unmarked choice as it is the language expected within that domain. Mesthrie (2000) writes that An example of this type of code-switching would be in situations where two people are unsure what code to use in conversation and may end up alternating between codes in an attempt to find a suitable one. In this essay, I first introduced and defined code-switching.
I then introduced the concept of 'domains' and showed how a switch of domain is nearly always accompanied by a switch of code, before outlining some reasons why this happens. I then investigated diglossic situations, concentrating on what motivates people to change code in bilingual situations and relating it to language choice and code-switching. I finally looked at specific types of code-switching and why people select certain codes for certain situations, as well as the various reasons that people switch between codes within a single situation or 'domain'. From the evidence given, it is reasonable to conclude that not only code-switching, but any form of language choice in general, gives speakers the ability to influence how other people view them. This is particularly true of code-switching, where I have shown that by changing codes, speakers can draw on the identities associated with more than one ethnic group. In diglossic situations this can be membership
In addition, reviewing surveys executed before 1995 (ACAS 1990, CBI 1990, UK Employment Department Survey 1991), Hyman and Mason reinforce that 'management moves toward techniques aimed at employee integration through involvement have become increasingly visible in recent years' (1995:29). Having presented the above survey evidence on the growth of direct employee involvement techniques in the UK, it is important to include a word of caution as to the different formats, terminology and designs that different surveys use. Since they explore companies of different sizes, business sectors, ownership (private as well as public sector) and corporate cultures, it would be quite hard for them to draw an accurate and fully comprehensive picture of UK enterprises with regard to involvement trends. Several texts draw attention to the difficulties of precise interpretation of survey data, due to differences in the terminology used (Marchington and Wilkinson 2005) or to recording the presence or absence of techniques without taking into account their embeddedness in company culture and operation in practice (Marchington 2005). Moreover, threats to objective conclusions could be found in the fact that it is often only managers who present views on the operation of EI techniques in their companies (the case of WERS 2004), or that most surveys treat employees (actually or potentially subject to EI) as a homogeneous group or explore only the main occupational group in the enterprise (the case of the EPOC survey; see also Shapiro 2000). Despite these possible shortcomings, survey data demonstrates growth in the utilization of EI schemes; as Marchington succinctly puts it, 'in the early part of the twenty-first century, there is little doubt that the most widely used forms of participation are concerned with communication and participation' (2005:29).
Bearing in mind the probable limitations of the survey evidence, this text nevertheless acknowledges the rise of new work practices in UK workplaces, as shown by the data, and will proceed to explore the likely managerial objectives in introducing and implementing such practices. As noted above, direct EI initiatives are primarily management-led in their introduction and implementation. Managers have various and multi-layered motives in applying such schemes and some of these will be examined below. The societal and economic conditions and environment in the UK form the broader level of managerial motivation for introducing direct EI schemes. Marginson", "label": 0 }, { "main_document": "line to the original position and then reset the offset check. This algorithm made sure that the pixels were offset by half a pixel for every other line. The subsequent section was the control of colour. There are a variety of colour models used in computer image applications. The difference between colour models is the method in which the RGB system is manipulated. OpenGL provided two different models for colour manipulation - the simple RGB model and the Colour-Index model. The RGB model relied on the user inputting the specific values for each of the red, green and blue elements to produce a desired colour. The Colour-Index mode used an index value matched against preset colours in a lookup table. It was decided that the RGB model would be more suitable for the project. The RGB model provided a more direct manipulation and control of colour. Although it was possible to obtain a vast number of colours using the Colour-Index model, its inflexibility made it unsuitable for the image processing later on in the project. The OpenGL RGB model used 1.0 as the maximum value for an individual RGB element and 0.0 for the minimum. For example, setting each of the RGB values to 0.0 would result in the colour black. Setting all the values to 1.0 would change the colour to white. 
The table below gives a few examples demonstrating the colour flexibility. It was now possible to apply colour to the hexagons by using the OpenGL call, gl.Color. The call took three inputs, namely the values of the individual red, green and blue components. To colour a polygon in OpenGL the colour was set first by using the glColor call. All polygons would then be drawn with that same colour until a new colour was set. Therefore the colour for a particular hexagon had to be set before the hexagon was drawn. A test program was written which, by using hexagons, gradually went through the colour range for a few colours. This gave great promise for the future development of the project. The next section of the project involved processing the images within the program. Since this would involve a lot of work developing, integrating and testing, it was decided that it would be best to research ready-written code on the web. As previously mentioned, Java is a popular language and there is much ready-written code on the web. One source provided code that would load images with JPEG, GIF and BMP encodings into the Java program. Another source provided code that stored images in a manner desirable for the project. The images were processed and separated into the individual pixel elements. The location and colour value of each pixel were then stored in a zero-based 3D array. The first two elements of the 3D array represented the (i,j) position of the pixel whilst the third element identified the colour of the pixel. This colour value could be further processed to extract the individual red, green and blue elements of a pixel - the elements used in
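As a sketch of the pixel processing described above, the snippet below shows one common way of extracting the red, green and blue elements from a packed 24-bit colour value (the format returned by java.awt.image.BufferedImage.getRGB) and rescaling a channel to the 0.0-1.0 range that glColor expects. The project's actual storage code is not shown, so the packed-integer layout and the class name here are assumptions.

```java
public class PixelSketch {
    // Extract each 8-bit channel from a packed 0xRRGGBB colour value.
    static int red(int rgb)   { return (rgb >> 16) & 0xFF; }
    static int green(int rgb) { return (rgb >> 8) & 0xFF; }
    static int blue(int rgb)  { return rgb & 0xFF; }

    // Rescale a 0-255 channel to the 0.0-1.0 range used by glColor.
    static float toGlRange(int channel) { return channel / 255.0f; }

    public static void main(String[] args) {
        int pixel = 0xFF8040; // hypothetical pixel: R=255, G=128, B=64
        System.out.println(red(pixel) + " " + green(pixel) + " " + blue(pixel));
        System.out.println(toGlRange(red(pixel)));
    }
}
```

With channels extracted this way, the colour stored in the third element of the 3D array can be decomposed and fed to the RGB model exactly as the text describes.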
There was no history of recent travel or contact with infected persons. Mr He was subsequently referred to the GI clinic. When assessed in clinic approximately a month later, he reported that he had lost about stone in weight since his initial presentation. The diarrhoea and rectal bleeding had persisted. There was no family history of GI disorders. His father died of an MI. Mr Prior to his retirement he worked as a factory supervisor. He has one son who is a renal physician at Guy's Hospital in London. Mr Given the age of this patient and the history of PR bleeding, weight loss and a recent change of bowel habit, the most likely cause for this presentation is colorectal carcinoma. More specifically, these symptoms suggest a left-sided lesion. This is in contrast to the classical symptoms of a right-sided lesion, which include anaemia, weight loss and a palpable mass. There was nothing in the history to suggest that obstruction, perforation or fistula formation had occurred as a result. Other important differential diagnoses to consider would include chronic diverticular disease and diarrhoea resulting from infectious causes such as food poisoning (Salmonella), dysentery or viral enterocolitis. However, as the symptoms, particularly PR bleeding, had persisted for nearly six weeks, an infective cause seems quite unlikely. Medical causes such as excessive use of levothyroxine could also be responsible for his weight loss and diarrhoea; however, this would not cause rectal bleeding. Upon physical examination a focused abdominal examination will be carried out to look for specific signs such as abdominal tenderness or palpable masses. A rectal exam will also be performed to confirm the PR bleeding. In addition, PR masses can be detected in 60% of right-sided carcinomas (Longmore et al, 2004). A general examination should also be performed to look for relevant systemic signs such as anaemia, and to allow the patient's general health to be assessed.
The physical examination did not reveal any clinical signs which suggested that Mr There was some finger clubbing present although this may have been congenital and therefore with hindsight this should have been queried further. There was no pain or palpable masses on abdominal exam, however these symptoms rarely occur in left-sided carcinoma unless the patient presents late or obstruction has occurred. Although tenderness in the LIF is usually present in chronic diverticular disease, this still remains an important differential at this stage which would require further investigation before being excluded. However given the history of associated weight loss, left-sided colorectal carcinoma remains the most likely cause of Mr Mr Chronic diverticular disease still remains a possible differential diagnosis at this stage, however this could easily be clarified by performing further imaging investigations. Although Mr Receiving a diagnosis of cancer may therefore", "label": 1 }, { "main_document": "average, while Group B The order of groups in terms of total sales remains as in the first quarter, with groups B and C doing slightly better. In Quarter 3, Group A has came back to the first place in terms of average, while in Quarter 4 it is again Group B. The Total Sales in Quarter 3 and 4 remain similar to first 2 quarters. The amount the company has to pay to their employees in terms of commission and bonus depends on their predetermined sales target and the bonus and commission rate. Target changes affect the employees as well as the profitability of the company in terms of bonus, while the commission does not change as it does not depend on the sales target. If the target is increased, in this case to 115% of the original target, some salespeople may receive less bonus or none at all, whereas a company may save some personnel cost. In the Scenario when the target is increased from Thus in this scenario, the company would save approx. 
If the target is decreased, workers can make less effort to achieve their goals, and even those who did not reach the original target may receive a bonus, but the company may have to spend more on personnel costs. A reduced target set by the company from In one case the bonus doubled, and four other salespeople who did not reach the original target, and thus would not have received a bonus, would also benefit from this target. Thus in this scenario, the company would have to spend approx.
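A minimal sketch of the pay scheme implied above, using entirely hypothetical rates and targets (none of the essay's actual figures are available): commission is a flat share of sales and is unaffected by target changes, while the bonus is paid only when sales reach the scaled target.

```java
public class BonusSketch {
    // Hypothetical scheme: flat commission on sales plus a fixed bonus that is
    // paid only if sales reach the target after scaling (e.g. 1.15 = target
    // raised to 115%, 0.85 = target reduced). All figures are illustrative.
    static double pay(double sales, double target, double targetScale,
                      double commissionRate, double bonus) {
        double adjustedTarget = target * targetScale;
        double commission = sales * commissionRate; // unaffected by target changes
        return commission + (sales >= adjustedTarget ? bonus : 0.0);
    }

    public static void main(String[] args) {
        // Raising the target to 115% strips the bonus from a borderline performer,
        // saving the company money at that salesperson's expense.
        System.out.println(pay(110_000, 100_000, 1.00, 0.02, 1_000));
        System.out.println(pay(110_000, 100_000, 1.15, 0.02, 1_000));
    }
}
```

Conversely, lowering the target scale below 1.0 lets salespeople who missed the original target qualify for the bonus, which is exactly why a reduced target raises the company's personnel cost.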
Listed below are 50 sources of information that are relevant to my final project with the working title "The naming of new species within the genera The sources are listed under subheadings (with the number of sources in brackets next to each) indicating their relevant area of importance for my project. The subheadings are arbitrarily set and do not fix an absolute topic for the sources they contain. Many sources could, for example, fit under Materials and Methods but have ended up under some section in Behavioural Ecology. Each source reference includes the author's affiliation at the publication date and a short description of the source's content. For some sources, additional information is given as to their relevance to my study. Those sources lacking this information are by no means less important; rather, the relevance of the source is assumed to be self-evident from its description. Included as an appendix are further references, in many cases of equal importance to those annotated, as well as a list of websites and of possible and confirmed collaborators on the project. The author, a researcher at the American Museum of Natural History, introduces (1983) the species concept: the Phylogenetic Species Concept (PSC), and elaborates on it (1997), e.g. on its role in conservation biology and taxonomy. These are both seminal papers that introduce a new and now widely accepted species concept by means of theory, structure and use. PSC has been described as a pattern-based concept (in contrast to the Biological Species Concept and its like, which are process-based) and has been described by Groves (2004) as "as nearly objective as we are likely to get" when it comes to describing the taxonomy of extant species. PSC is defined as "the smallest cluster of individual organisms within which there is a parental pattern of ancestry and descent and that is diagnosably distinct from other such clusters by a unique combination of character states". The author, a naturalist at the U.S.
National Museum, describes the findings from the Smithsonian African Expedition to British East Africa in the years
To add to the uncertainty, Elle seemed to deteriorate from Sunday to Monday; Elle had less energy and was more lethargic and less bubbly, like her normal self. Another aspect of being in hospital and brought uncertainty was the fact that Elle's bed or room was changed 3 times. Elle was moved twice in the bay of beds and then finally into a side room away from the nurses desk. Mrs Stacy reported to liking the new room away from the ward, yet felt it hassle to be moved 3 times. I think that Mrs Stacy originally liked being away from the busy noise of the ward and to the room out of the ward, but soon realised that there is less to do and less people to talk to away from the ward. I noticed Mrs Stacy talking more with me once they had been moved to the side room, this may have been because she was lonely, but it also could have been because we were getting to know each other more and Mrs Stacy was more comfortable around me. Mrs Stacy once said to", "label": 1 }, { "main_document": "because they are inter-linked with other privations. Various organisations within and beyond the NHS must work in tandem. Health action zones (HAZ's) seek to achieve this goal. The primary aim is to specifically improve the health of the least well off. Patient-driven policies are pursued which impart a sense of empowerment and decrease apathy and depression. They have an area-based approach and help to build social networks within populations as recommended by Acheson. However, HAZ's have been accused of myopia because they only reach a minority and are therefore limited. HAZ's are still very much a work in progress and represent a long-term financial commitment by running for up to 7 years. I believe that HAZ's in conjunction with other national policies, e.g. the introduction of Health improvement programmes (HimP), will be beneficial because they explicitly address the localities immediate needs. The government has launched 'in cash' and 'in kind' initiatives to decrease poverty. 
Unemployment is higher in Godivaville than in other towns. With Welfare to work - The New Deal, a range of programmes is in operation to encourage unemployed people to move into work. Special focus has been put on the needs of lone parents who are over-represented in Godivaville compared to national averages. Employment zones aim to help the long term unemployed to improve their employability. For those unable to work 'in kind' services increase the standard of living. The New Deal for communities (NDC) pathfinders tackle social exclusion in the context of the Social Exclusion Unit's report - 'Bringing Britain Together; a national strategy for neighbourhood renewal. Sure start is a cross-departmental strategy specifically focusing on the health of children (0-4 years) and their families in an attempt to improve their health outlook. This upstream policy aims to prime children so that they thrive when they get to school. Education Action Zones (EAZs), established by the DfEE in 1998 are another upstream policy aiming to help children achieve their full potential in areas of high social and economic deprivation. Concurrent to tackling income inequalities it is vital to increase access to primary healthcare. Problems accessing primary care services tend to be among particular groups for example, the homeless, drug users, refugees, ethnic minorities, and in particular areas; inner cities, sparsely populated rural areas, housing estates in deprived areas. The Social exclusion unit aims to reduce social exclusion faced by people when they experience a combination of linked problems such as unemployment, poverty, poor housing and family breakdown. A detailed knowledge of the problems faced by ethnic minorities is essential because they are found in greater numbers in Godivaville than in other towns. 
If a large proportion of an ethnic minority delay seeking health care due to language barriers it is the responsibility of the primary care group trust to overcome this health inequality. There is evidence to suggest that the disabled and older people may be more vulnerable to the effects of poverty and ill health. This report has discussed some of the theories as to why health inequalities occur. The various national schemes that have been undertaken", "label": 1 }, { "main_document": "bolstered by awareness of River's founding by culturally-empowered members of the English aristocracy in the late nineteenth century, accounting for the anglicised club name, originally derived from 'R Sean Ingle, 'Why Do River Plate Hate Boca?', 18th October 2000. < Tony Mason, (London, 1995) Consulted at: Roland Soong, 'Brazilian Football Fans', 27th February 2003. < Cathy Runciman (ed.), (London, 2004) P.9 Sean Ingle, 'Why Do River Plate Hate Boca?', 18th October 2000. < The 'Camilo Cichero' stadium ('La Bombonera') and the Boca team however, though also bearing an apparently partially anglicised title of 'Juniors', emerged as a prominent force of local culture within its barrio. The club's historical association with the poor Italian, specifically Genoese, immigrant population of 'La Boca', and the unique history of a traditionally underprivileged region within the Argentine Republic, has immortalised its affiliation with those occupying the nadir of the social hierarchy. The Genoese link is detectable even today in the use of a popular nickname applied to the fans, the 'Xeneize'. Historically, 'La Boca's' defense of neighbourhood autonomy against higher metropolitan authorities bread a palpable regional identity in the late nineteenth century, something with which its football club is inextricably linked. 
Today the 'Bombonera', alongside the multi-coloured 'Caminito' street, continues to generate pride in the working-class community, with the near ubiquity of the team's blue and yellow emphasising its role as a cultural focal point of tradition and local identity. Compared with the city's numerous other poor areas deviod of any internationally renowned theatre of sporting excellence, the team's presence is unequivocally positive. Nonetheless, whilst it is fair to regard the club's success as beneficial to the process of integration, the rigidity of the social labels created by rivalry with their privileged adversaries to the north perpetuated an enduring spectacle of class tension. Author's notes from visit to the Boca Juniors stadium, 20th November 2004. (Stadium address: Estadio Alberto J Armando (La Bombanera), Brandsen 805, y la V Sean Ingle, 'Why Do River Plate Hate Boca?', 18th October 2000. < The prevalence of this class ideology is most palpably illustrated in the nicknames ascribed to the conflicting fan bases. Though serious violent confrontation is rare between fans, given that approximately fifty per cent of Argentina's population claims to support one club or the other River fans are popularly known as 'Millonarios' (Millionaires) and 'Gallinas' (Chickens) by their Boca counterparts, an allusion to their aristocratic timidity. Boca fans are conversely known as 'Bosteros' (derived from the Spanish 'bosta', people of dung), reflecting theoretically humbler origins. Revealingly, though such name-calling was originally pejorative, their significance has been reevaluated by the recipients to become adulatory. 
Inscribed in the buildings surrounding 'La Bombonera' is a great quantity of graffiti positively affirming the Boca label, with claims such as 'Yo soy 100% Bostero The River label remains seemingly more negative since such declarations are seldom reciprocated, though the wide use of the labels does nonetheless evidence the extensive appeal of the class-struggle ideology. Boca fans positively endorse their sobriquet as it reinforces and upholds their identification with a lower social bracket.", "label": 1 }, { "main_document": "Insider dealing is summarised by McVea as, \" The problem of individuals abusing privileged inside information is not a new one. As early as the seventeenth century there were official reports of insiders using their privileged information to dump over-valued securities on the market. When asked what they would do if they learnt of a merger with a company whose shares were certain to rise when news of the merger became public, 12.5% of company directors said they would purchase shares and 11.5% would give the information to a friend. McVea, H., What's wrong with insider dealing?, 1995, 15 Legal Studies 390; see Criminal Justice Act 1993, section 52 for statutory definition Rider, Alexander & Linklater, Market Abuse and Insider Dealing, Butterworths, 2002, London, pp.1 Webley, S., An Enquiry into Some Aspects of British Businessmen's Behaviour, 1971, pp. 8-9 It is instructive to consider that misuse of privileged information was only first criminalised by the Companies Act 1980. Prior to criminalisation, there was no statutory prohibition of insider dealing and the common law did not make insider dealing actionable. Companies Act 1980, sections 68-73 Percival v Wright [1902] 2 Ch 421 The Criminal Justice Act (CJA) 1993 is the current regulatory legislation and defines inside information as information relating to specific securities, which, if it were made public, would be likely to have a significant effect on their value. 
Criminal Justice Act 1993, section 56 ibid, section 52(2); this is known as 'tipping' and 'tippee liability' It is important to note that there is no \" It is part of everyday commercial practice to use privileged knowledge to make a profit. If in every contractual situation a party with superior knowledge were required to share the knowledge with the other party, there would be no incentive to gather such knowledge. In transactions involving \" see, Rider, supra, no.2, pp.1 ibid, pp.2 Criminal Justice Act 1993, Schedule 2 ibid, section 57 Insider dealing is not universally recognised as a problem and many commentators believe that it should be subject neither to regulation nor to criminal sanctions. This essay will discuss, in part one, the principal reasons why securities receive extra legal protection and investigate whether the current regulatory system, and especially the criminal offence of insider dealing, is necessary or desirable. The second part of the essay will look at whether the current law has sufficiently broad powers to allow the regulatory bodies to prevent and punish insider dealing. It is commonly believed that society is positively affected by an efficiently functioning stock market. Detailed and reliable information ensures efficient investment of resources, as companies which are poorly managed or which no longer provide goods or services in which society is interested will not receive investment. It is therefore in a company's interests to release \" see: Preamble to Directive 2003/6/EC of the European Parliament and of the Council, 28 January 2003, on insider dealing and market manipulation (market abuse), Official Journal L096, 12/04/2003, pp. 
16-25 Channelling investors' capital to its most productive use in the market is known as \"allocative efficiency\"", "label": 1 }, { "main_document": "The aim of the practical is to analyse acidity and free and total SO2 10cm3 Then was diluted to about 50cm3 4 drops of phenolphthalein indicator were added into the conical flask A burette was filled with 0.1M NaOH solution. Note the Acid in flask was titrated until the solution just goes permanently pink. Note the Titration was repeated until consistent results ( Since the rough and 1 The measured pH of 2.7 means that the French medium dry white wine is highly acidic, which confirms the high tartaric acid percentage calculated as 72.25%. These demonstrate a fairly valid and reliable set of results, and no major errors had been observed. Positive end points observed qualitatively - blue-black complex formed with starch. Iodine was in excess when all SO2 Average I2 Aver. I2 N.B. The wine used in the two titrations was obtained from different boxes provided, as the first box had been finished by the time the total SO2 Under limited time, the two redox titrations carried out to determine SO2 On the other hand, they are the hardest titrations in terms of observing end points, as the first blue-black colour usually fades shortly. This problem has certainly affected the accuracy, reliability and validity of the results. The average I2 solution volumes have been calculated based on the results shown in Tables 4 and 5 (p. 2). Here are some possible improvements on the practical: The entire titration (with the repeats) should be carried out by the same experimenter, in order to obtain as high a consistency as possible. Both the 25cm3 For total SO2 It is always ideal to use a clean dry conical flask for each titration carried out. This could minimise errors. Nevertheless, an accurate set of apparatus was used: 50cm3 The free and total SO2", "label": 0 }, { "main_document": "the protein. 
The major domain's structure is formed from several 8-stranded It is these subdomains that protrude from the protein giving it a stable structure. RecA's role in homologous recombination is to catalyse the pairing of single-stranded DNA with homologous double-stranded DNA. To achieve this, six RecA monomers (see figure 5) combine around the single strand of DNA to stabilise it and protect it from degradation. RecA protein is a crucial enzyme in the recombination process as it catalyses the pairing of single-stranded DNA with complementary double-stranded DNA. The loading of RecA protein onto the DNA is facilitated by the RecBCD enzyme (Arnold The enzyme contains three separate binding domains: two for DNA and one for an ATP molecule. The DNA binding domains are located in the central domain of the RecA polymer and allow for the binding of a single-stranded DNA molecule and a double-stranded molecule. Both DNA binding domains include disordered loops (L1 and L2), containing residues with low electron density. In a study by Kumar During this study a surprising discovery was made when the quantity of The ATP binding domain is also located in the central region of the RecA polymer at a phosphate binding loop (P-loop). Within this loop lysine72 and threonine73 are known to interact directly with the phosphate group on ATP (Konola Once the single-stranded DNA has been coated in RecA, the protein then binds to double-stranded DNA. Through a helicase action the double-stranded DNA is partially unwound to facilitate base pairing, allowing the formation of a D-loop structure (see figure 6). The exact mechanism by which the single-stranded DNA-RecA complex finds a complementary sequence of DNA is still largely unknown, but it is believed that through the binding of RecA to double-stranded DNA the double-stranded DNA is \"activated\" even if the strands are non-complementary. 
In E. coli, branch migration and resolution of the Holliday junction can be performed by three Ruv proteins A, B, and C, which are encoded on adjacent genes. The RuvA subunit consists of three domains (I, II, and III) and forms a fourfold symmetric tetramer (Rafferty Detailed analysis showed that domains I and II were responsible for Holliday junction binding whereas domain III plays a regulatory role in ATP-dependent branch migration through the contact of RuvB (Nishino Within the RuvA tetramer each DNA arm is recognised on the minor groove side by two helix-hairpin motifs inside domain II. At the binding site interface, hydrogen bonds between phosphate oxygens of DNA domains and atoms from the main protein chains, along with water-mediated hydrogen bonds, form the interactions that hold the DNA in place. The central acidic pin of the RuvA tetramer is formed from Glu55 and Asp56 residues from each subunit and repels the DNA backbone away from the junction centre by electrostatic repulsion (Yamada The function of RuvA is to force the Holliday junction into certain structures which are suitable for branch migration and resolution. The RuvA-DNA interactions are thought to be suited to rotation and sliding of the DNA arms over the RuvA", "label": 1 }, { "main_document": "to find gold in America. However, only the Spanish were successful. Their greed for gold predetermined their relationships with the Aztecs and the local people; the Aztecs had the gold and the Spanish would take it from them by force. The Dutch and the French, who landed in North America, found no gold. Instead they found populations of native peoples who were willing to trade furs, which were expensive luxury items in Europe, for European goods. The English also found people to trade with, but their primary interest was land and power. Andrew Sinclair, A concise history of the United States (Thames and Hudson, 1967) The Europeans in North America largely saw the natives as savages. 
The Spanish saw the Aztecs as \"rational beings\" and were amazed at how developed their culture was. However, they were shocked at their religious practices and where possible sought to convert them to Catholicism. In contrast, the Europeans of North America initially left the natives to their traditional beliefs. The nature of the alliances that developed between native and European in Mexico and North America initially seems different. In Mexico, the alliances formed between the Spanish and native populations were more political and military in nature. The ones formed between the natives and the Europeans in North America appeared initially to be for trading and support purposes. However, although the time scales are different, both the Spanish in Mexico and the Europeans in North America were able to manipulate their native allies so far as to dominate and subdue them. An additional factor determining the relationships in North America was the use by Europeans of native tribes to extend their rivalries to America, which did not happen in Mexico. Although the Europeans had superior weapons, the natives initially had greater numbers and a superior knowledge of the terrain and should have been able to repel them. However, the native tribes in North America were not unified, and in Mexico the Aztecs had not succeeded in crushing all tribal resistance within their empire. Both also made many tactical errors that contributed to their defeat and were greatly weakened by the diseases that the Europeans brought from the \"Old World.\"", "label": 1 }, { "main_document": "Hume, David (1779) Henry D. Aiken, London: Macmillan, p.77 Philo in Hume's As an The fact that there is apparent order and purpose in the world suggests the need for a designer, rather than leaving it to chance. The universe is decidedly complex, so the simplest explanation is that there must have been a designer. And that designer is God. Gurney's hymn states, \"Yes, God is good, all nature says . . 
.\" This echoes the main ideology behind the Design argument. William Paley defined the argument by positing his watchmaker theory. Gurney, H. John (1802) Hymn Number 363, Hick, John (1964) The main push of Paley's theory suggests that if we were to stumble upon a complicated mechanism such as a watch, we would propose that there was a creator, for the watch cannot create itself or \"just appear.\" Paley argued that even if parts of the watch were lost, or it if it did not work properly, we would still postulate a designer. Richard Swinburne furthered the argument, suggesting that the fantastic order and beauty in the world points to a designer God. \"Order is a necessary condition of beauty . . . And the world is beautiful rather than ugly.\" Davies, Brian (1993) As already demonstrated, one of the common objections to this theory is that the world is not necessarily ordered and beautiful, as, by viewing the world, we can observe chaos and disorder. Hume, David (1779) Henry D. Aiken, London: Macmillan, p.79 Indeed, many aspects of the world produce chaos and suffering, such as earthquakes and hurricanes, suggesting that the worlds' order is not particularly perfect. But does this disprove God? Paley took great pains to reassure that disorder does not discredit a designer. \"It is not necessary that a machine be perfect, in order to show with what design it was made.\" Rowe, L. William and Wainright, J. William, eds (1998) Swinburne reinforced this idea, suggesting that people expect too much from God. He believed that God made a world where humans are free to learn and develop and in turn, choose to do either right or wrong. Pain and suffering is therefore crucial in order for humans to develop. \"Why God should make the universe ugly would be to give creatures the opportunity . . . to make the world beautiful for themselves.\" Swinburne, Richard (1979) However, John Stuart Mill argued that even if God is testing humans, his attributes are compromised. 
Natural Theology can indeed point to a designer of the universe, but it certainly cannot point to a Christian God. God's common attributes are omnipotence and omni-benevolence, which cannot both be possible in the face of extreme pain and suffering. Vardy, Peter (1990) In short, if God were all-powerful, he would not be all-good and vice versa. However, one may ask why the designer God needs to be a Christian God. This argument only stands if Natural Theology aims to prove the existence of a God who embodies these traditional attributes. Hume wonders why the world's designer needs to be \"God.\" If that is", "label": 1 }, { "main_document": "Thoughts are caused by the extra-cranial world, but the contents of those thoughts are not determined by the external world, as the meaning of thoughts is not determined by the external world. The content of thoughts is narrow rather than broad. Answers to this question may be broadly divided into two large categories. While there is general agreement that the external world is able to cause thought, debate exists between the Externalists, who propose that the content of our thoughts is determined by the external world. In contrast, the Internalists/individualists believe that mental states are products of intrinsic states only, with no reference to the external world. This essay will address and critique the main arguments given by the Externalists Putnam and Burge that began a large turn to Externalism. By highlighting the problems with these theses, it will then show a more viable Internalistic account of mental content. Much of the debate in this field of philosophy of mind concerns the linguistic issue of meaning. The central dilemma may be rephrased as the question of whether 'meaning' is in the head or caused by the external world. There is the underlying assumption here that by studying linguistics, we are able to understand the nature of the content of thoughts (Cain 2002). 
The idea is that language is used to express beliefs, as expressed by Segal (2000): This essay will now examine the Twin Earth thought experiment from Putnam as evidence for externalism. In order to show that the nature of the external world impacts upon the meaning of words, Putnam (2002) proposes a \"Twin Earth\". This Twin Earth is identical to Earth in nearly every way. On Twin Earth, however, the substance that fills the seas and lakes, that agents drink and treat in the exact same way as inhabitants of Earth treat water, is chemically different. On both Earths the substance is called \"water\", but on Earth, we know this to be \"H2O\", whereas on Twin Earth the chemical composition is abbreviated to \"XYZ\". Some important details of this thought experiment must now be highlighted. The \"water\" on Twin Earth, or Twater as it is often referred to, must be used in the same ways as water on Earth to ensure that the mental states of the agents on each planet are identical. For the thought experiment to be useful, there must be only one variable - the chemical composition of the water. Putnam (2002) then concludes that, as this is the only variable, meaning must therefore be attributed partly to the influence of the external world (Cain 2002). Putnam (2002) also asserts that even in the early eighteenth century, prior to the discovery of the chemical compositions of water and Twater, the agents on the two planets still \"understood the term 'water' differently\" (p.590) even though they were in the same psychological/mental state. The meaning of \"water\" (and this is extended to all natural-kind terms) is \"decomposed into two factors\" (Segal 2000, p.27). He accepts that there must be a set of descriptions: liquid between 0", "label": 0 }, { "main_document": "49 Epoxy and E-Glass Epoxy. 
In the case of the Kevlar 49 Epoxy the maximum possible whirling speed was calculated to be 1076.5 rad/s for a corresponding combined thickness of the laminate exceeding the permissible radius of the shaft. For the E-Glass Epoxy the calculated The reason for this result is that the E-Glass Epoxy in particular has a lower specific modulus compared to the other materials (i.e. Therefore in calculating This negative result generally signifies that the resonant frequency of the shaft has been reached. Of the four suitable materials the IM6 Epoxy meets the design specification at the smallest mass (i.e. 0.369 kg). This is the expected result as the IM6 Epoxy has a comparatively higher specific modulus than the other materials. On the other hand the AS4-Peek meets the design requirements at the highest mass (i.e. 0.51 kg). This is because it has the smallest specific modulus of the four materials. Of the four materials the IM6 Epoxy provides the most suitable choice for the shaft design. Comparing the steel and composite drive shafts, it is clear that the IM6 Epoxy is more suitable overall. Although the steel solution provides a slightly thinner wall thickness, more importantly the composite shaft offers a significant reduction in overall weight: up to four times lighter, or 25% of the mass of the steel variant, while meeting the important design criteria. The composite shaft is also potentially more economical as it eliminates the need for bearings, joints and other parts that are otherwise often required in the design of steel shafts. The results of this investigation have demonstrated the suitability of composite materials for drive shaft design. It was shown that four of the six materials were suitable for the drive shaft while the other two failed to meet the design specification. The material property accounting for the differences between the two groups was confirmed to be the specific modulus of the materials. 
That is, the materials which encountered problems were those of lower specific modulus. Consequently, the most effective material was identified as the IM6 Epoxy, as expected, owing to its relatively high specific modulus, while the AS4-Peek was the least effective, having the lowest specific modulus of the suitable materials. Finally, the composite design was found to meet the design specification in all criteria, including those of cost and mass, more effectively than the steel solution.", "label": 0 }, { "main_document": "schemes (Cunningham et al. 1996). This argument is reinforced by the findings of the IRS 2002 survey, where the desire to create a sense of ownership in employees (thus urging them to work towards improving company performance) and the need to ensure that workers understand their role in achieving business success are amongst the most quoted reasons for involving staff. It becomes evident that organizational success and efficiency, attained through improved employee commitment, motivation and job satisfaction (attributed to direct EI techniques), largely shape managerial objectives in utilizing involvement schemes. Giving employees more discretion and responsibility over how they execute their work tasks through EI schemes may sound like an achievement of the transition from a control to a commitment approach to people management. While clearly there are aspects of the process that benefit the workers in allowing them to have more say about their work, management control has not vanished or even diminished. In summary, 'new work systems [have] reorganized but [have] not [transformed] the workplace regime' (Edwards 1992:388). This point is referred to by Geary as well in noting that 'control remains as pervasive as ever, albeit organized in a different and sometimes more distant and less immediate manner' (2003:347). 
Engagement in involvement systems (team meetings, quality initiatives) may, besides the increased discretion, impose stronger ties to work practices and greater scope for supervision. Evidently, models of rigid managerial control have not been replaced by ones of empowerment and flexibility (Geary 2003), but have started co-existing and blending together through EI initiatives, providing another managerial motive for using them. Another point worth noting is the influence of the legal framework and especially the EU Directives' requirements transposed in the UK. The European Works Council Directive and especially the Information and Consultation of Employees (ICE) Directive (DTI 2004) have a strong bearing on the operation of representative mechanisms in different formats of UK companies, although they affect the direct involvement forms less profoundly. Despite this fact, the ICE Regulations' provision for retention or extension of current consultation preferences in the workplace is likely to promote the further use of direct forms in non-union workplaces (where they are logically preferred), supplying managers with another justified objective in utilizing direct EI techniques. Having briefly described some of the managerial motives for implementing and operating direct EI initiatives, this essay will now examine the gains of these systems, as demonstrated in reality. As stated above, one of the main aims of introducing management-led direct involvement schemes is the pursuit of high-performance workplaces. Promoting employee commitment and motivation, EI systems are seen as invariably leading to improved company results and effectiveness (see also Edwards and Wright 2001). In reality, there are several problems associated with such claims. Firstly, the causal link between enhanced employee commitment and satisfaction and overall company performance is very difficult to establish. As Wilkinson et al. 
reasonably note, based on their case study research, 'any attempt to (...) draw any causal links to enhanced performance is problematic (...) precise details of cause and effect are almost impossible to disentangle' (2004:311). Where positive effects", "label": 0 }, { "main_document": "such cooperation. In this sense, this essay aims to identify the conditions that would enable East Asian countries to make meaningful progress in economic cooperation. It begins with a review of conventional explanations of the lag in economic cooperation in East Asia. Subsequently, it moves to an examination of the obstacles that impede intra-regional economic collaboration. Finally, this essay will conclude with several suggestions to eliminate the obstacles and hasten economic cooperation in East Asia. Although East Asia has a relatively high intra-regional trade volume, the degree of institutionalization of economic cooperation is strikingly slight so far. Though the Association of Southeast Asian Nations (ASEAN) was established in 1967, its principal aim was an alliance for anti-communism based on the fear of the threat of communist power triggered by the formidable regime change in Vietnam. In the 1960s, many developing countries were pushing forward economic integration beyond simple functional cooperation. The Central American Common Market (CACM, signed in 1960), the Latin American Integration Association (LAIA/LAFTA, signed in 1960) and the Andean Pact (AP, signed in 1969) were launched at this time. The only exception was East Asia. Since then, this institutional underdevelopment in economic cooperation has been explained from different theoretical perspectives. Many scholars have attributed the incomplete institutionalization in this region to cultural and ideational factors. 
They paid attention to the Asian tradition that preferred trust and consensus-building rather than contract-based transaction and law-based problem-solving as means of cooperation and confrontation with neighbouring countries. They argued that these Asian characteristics could be applied to the explanation of institutionalization in this region. According to North, 'informal constraints' are as likely to produce cooperative behaviour as the existence of formal institutions. He also paid attention to the importance of 'self-imposed codes of behaviour' as a factor affecting cooperation and conflict (North 1990, 42-43). In addition, neoliberal institutionalists claim that institutions do not merely mean formal organizations as physical entities; they also include informal conventions that affect actors' behavior without explicit rules (Keohane 1989, 3-4). Thus, it could be interpreted that institutionalization in East Asia had already been implicitly implemented within the shared norms or the expectations of actors' behavior in this region. In contrast to this, realists have identified the causes of the lag in cooperation in East Asia in the political conflict based on security instability in this region. The most prevalent explanation is that the opposition of the United States prevented East Asian countries from pursuing deeper intra-regional cooperation. In relation to the stance of the United States, the focal point was given to the role of Japan. James Baker, then U.S. Secretary of State, recalled that he would do his best to 'kill' the EAEC (East Asia Economic Caucus) because he did not envisage the East Asian Community without the U.S. His mission, at this time, was to lessen the risk to America's economic interests that the newly emergent economic bloc might bring about in this region (quoted in Terada 2003, 259). The U.S. 
vigilance over East Asia, different from that directed toward Europe, gave rise to the preference for bilateral security", "label": 0 }, { "main_document": "that men use more of these strategies, they are seen as dominant in conversation. One of these strategies is interruption, and Conversational Analysis research done by Zimmerman and West (1975) shows that \"interruptions were far more likely to occur than overlaps and both types of simultaneity were much more frequently initiated by males than females. For example... 96% of the interruptions were by males to females\" (Zimmerman and West in Coates 1998:168). Male language also asserts status and dominance. A higher frequency of swear words is found in male conversation, as are features such as minimal responses, which the research done by Leet-Pellegrini shows to be a source of strength rather than defeat, making the partner feel that what they have to say is trivial and he is uninterested by it. These are seen as dominance strategies, and the fact that men use them more frequently indicates to dominance theorists that men are dominant. Nevertheless, since today's society sees many women in high-status roles, is it gender or power that overrides the other where language is concerned? Candace West is among the researchers who have looked into this. It is widely accepted that interruptions are used to convey dominance and control. \"Men's interruptions of women in cross-sex conversations constitute an exercise of power and dominance over their conversational partners.\" (West in Coates 1998:396). West's research into power, status and gender was done on physician-patient relationships. Twenty-one patients were observed (recorded with unobtrusive cameras and microphones), 10 males and 11 females. Her findings were that out of 188 interruptions encountered in a patient-physician conversation where the physician was male, 67% were initiated by the physician and 33% by the patient. 
In a patient-physician conversation where the physician was female, however, the physician initiated just 32% of interruptions and the patient, 68%. It can be seen that the figures are almost exactly reversed from the male doctor figures to the female doctor figures. This leads me to think that males and females may enforce dominance and control in different ways or possibly even view power from different perspectives. The study also shows differences not just in how women and men talk, but in how they are talked about. The main point that comes across is that if the physician is female she is referred to as a 'lady doctor' or 'female doctor', with obvious gender marking, whereas male physicians are simply referred to as 'doctors', with no morphological gender marking but often an inherent male gender. West concludes: \"gender can have primacy over status where women physicians are concerned\" (West in Coates 1998:409). Deborah Tannen also looked at power relations in her workplace study. Female managers are likely to soften a blow when criticising an employee whereas male managers are more direct with criticism. This shows differences in management styles. Women's management style tends to be more consultative and inclusive, whilst men's style seems to be more directive and task-oriented. The patient-physician study shows some evidence for the dominance theory but the management study by Tannen could take the difference perspective. The difference", "label": 1 }, { "main_document": "shows how the Eisenstein irreducibility criterion may be used. Example This is irreducible since One of the most important ideas in Galois theory is the field extension. To quote page 29, I. N. Stewart's Galois Theory, Chapman and Hall/CRC (1973): But what is a field extension? 
Definition 2.6 Let A extension of We say that We also need to consider simple extensions, Definition 2.7 If N is an extension of M then it is a simple extension if N = M( We can classify simple extensions Definition 2.8 If Then Otherwise, Definition 2.9 A polynomial is monic over The minimum polynomial is the polynomial of least degree which meets the requirement of Definition 2.9. Definition 2.10 For an element Theorem 2.11 Let m be an irreducible monic polynomial over Then there exists The proof is not given here since it is not required for later proofs, but is worth looking at. (INS) The next theorem gives us an idea of when the minimum polynomial of a field M is irreducible. The proof is simple but quite long so we only give an outline. Theorem 2.12 Let M be a field. If Outline of Proof: Assuming that m of Since m( Take p over M with Then there exist q and r over M such that With p( Theorem 2.13 Any element of Proof: Let We can see that m does not divide g and m and g are coprime since m is irreducible (a minimum polynomial is irreducible for an algebraic element). So ag + bm = 1. Then So The degree of r is less than that of d. For uniqueness we let With h = We have the degree of h less than that of m and h is unique by the definition of m. These results explore the relationship between extensions and isomorphisms. Theorem 2.14 If we have two simple extensions Proof: Every element of Clearly = = = Let We have 0 = Then, So Definition 2.15 The degree of a field extension It is written as The next theorem is also known as the tower law and is always useful when dealing with field extensions. Theorem 2.16 If M, Proof: Let If With Let Let We know that the xi are linearly independent over N so We know that the yj are linearly independent over M so Suppose the LHS is finite. A basis for P over M spans P over N. So N is a linear subspace of P so This next theorem states the relationship between the degree of a minimum polynomial over a field and the field extension. 
This theorem is very useful when we are trying to decide on the properties of constructible points. Theorem 2.17 If Proof: By Theorem 2.13 we know So the form a basis of The definition of a finite basis and the concluding theorem of this section are important in later sections when dealing with extensions that are finite. Definition 2.18 A finite field extension is an extension with finite degree n. This theorem links an algebraic simple extension with a
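The two central results of this section can be restated compactly. The notation below (fields M ⊆ N ⊆ P, a simple extension M(α) with minimum polynomial m) follows the standard textbook statements and is assumed here, since the original inline symbols are not reproduced in the text above.

```latex
% Reconstructed standard statements (symbols assumed, see note above).
%
% Tower law (Theorem 2.16): for fields M \subseteq N \subseteq P,
\[
  [P : M] = [P : N]\,[N : M].
\]
% Degree theorem (Theorem 2.17): if \alpha is algebraic over M with
% monic minimum polynomial m of degree n, then
\[
  [M(\alpha) : M] = \deg m = n,
\]
% and 1, \alpha, \alpha^2, \ldots, \alpha^{n-1} form a basis of
% M(\alpha) over M.
```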
Furthermore, the provision of appropriate implements for ploughing at an affordable price, with the support of government, is an important consideration (Amarasinghe and Liyanage, 2001, IRRI, 2006, Weerahewa, 2004). Integrated Pest or Weed Management, as developed and implemented in Southeast Asia, would be the most recommended strategy. This practice puts an emphasis on proper land preparation for prevention, minimum use of agrochemicals, and utilisation of natural biological control mechanisms. Land preparation includes ploughing, puddling and levelling to reduce the germination of weeds, and management of the off-field landscape, such as the bunds between fields which provide refuges for important natural enemies of insect pests during non-cropping seasons. Use of clean seeds free of weed contamination and appropriate water management are vital considerations for prevention. Rice diseases would also be suppressed by such land management, which controls the humidity, temperature and weed establishment favoured by disease pathogens and fungi (Jones, 2002, Gunatilleke, 1994, IRRI, 2006). In order to minimise post-harvest loss, on-farm storage facilities should be constructed and managed properly. One recent attempt which would be practical is an airtight bin constructed using ferrocement technology. The experiment by Adhikarinayake (2005) proposed a bin consisting of two reversed stacked cones resting on three vertical pillars, with a storage capacity of 2.5 tonnes of paddy. The constructed bins effectively sealed out air to control the temperature inside the store, and insect damage was negligible, at about 0.1 to 0.3% loss. The significance of this proposed storage system is that it can be constructed by farmers themselves, improving their control over when to sell rice; the extended storage period enables farmers to obtain higher returns by selling rice in the off-season (Adhikarinayake, 2005).
In addition, proper harvesting time and processing are also important to improving productivity, and educating farmers is the key strategy in this respect. Innovation in post-harvest management which is easily practised by
Had there not been such competition for the tallest building, this remarkable structure would never have had the extra details that have made it so famous and so well loved today. It is a masterpiece of design, one fuelled by competition, and a testimony to the American age of Art Deco.
The road up to the site is very unstable, has many large cracks and has clearly receded over the years. Due to the abundance of precipitation in the area, the ground conditions are also very unsafe, with uneven rocks making up the majority of the surface, particularly on entering the quarry through the tunnel. The quarry can be modelled as a rectangle for simplicity, with the four sides, A, B, C and D, bounding a small lake in the centre which is 16 metres deep. The lake poses a serious hazard, as it is very cold all year round and there is only one point of exit should someone end up in the water. There are several disused buildings from when the quarry was operational. There are small amounts of vegetation, comprising trees and bushes, at the top of faces A and C, but the majority of the site is covered with scrap slate. The path is very hazardous at the top of face A, as sections at the top have crumbled away due to army training activities; a diversion path has been made away from the edge, but it is quite unclear. Face A has a dip direction of 220. The face initially appears to be fairly safe, with evidence of a limited amount of creep and toppling at the top; there also appears to be some wedge failure in the corner
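The kind of stability assessment that such dip readings feed into can be sketched as a simple kinematic check for planar failure. This is only an illustrative sketch: the friction angle, the ±20 degree window, the face dip and the joint dips below are assumed values, not measurements from the survey; only the 220 dip direction of face A comes from the text.

```python
# Markland-style kinematic check for planar sliding on a quarry face.
# All numeric inputs here are illustrative assumptions (see note above).

def planar_failure_possible(face_dip, face_dip_dir,
                            joint_dip, joint_dip_dir,
                            friction_angle=30.0, window=20.0):
    """Sliding on a joint plane is kinematically possible when the joint
    daylights in the face (dips less steeply, in roughly the same
    direction) and dips more steeply than the friction angle."""
    # Smallest angle between the two dip directions (compass bearings).
    diff = abs(face_dip_dir - joint_dip_dir) % 360.0
    diff = min(diff, 360.0 - diff)
    return (joint_dip < face_dip and
            joint_dip > friction_angle and
            diff <= window)

# Example: an assumed 70-degree face dipping towards 220 with a joint
# set dipping 45 degrees towards 210 satisfies all three conditions.
risk = planar_failure_possible(70.0, 220.0, 45.0, 210.0)   # True
safe = planar_failure_possible(70.0, 220.0, 25.0, 210.0)   # False: joint below friction angle
```

A face fails the check as soon as any one of the three conditions is violated, which is why a discontinuity survey records both the dip and the dip direction of every joint set.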
9 The ideas surrounding the city in the The text discusses a relationship between the city and those who live in or visit it, based entirely on trust: '...to live here, one must have faith in a huge range of impossible illusions. One has to truly believe that paper is valuable (money), that cars will not invade pavements (traffic behaviour)...' The relationship between the Pleasure Beach and its visitors depends completely on trust. The act of boarding a roller coaster which climbs 235ft high and throws passengers down the first drop at 85 mph depends on the trust of the thrill seeker. They must truly believe they are safe and will return to the loading station eager for more. 10 The Such events have happened simultaneously at Blackpool Pleasure Beach since it opened in 1896. Decisions taken to improve the park and its popularity according to the changing times have led to the opening of attractions which are major historic events in and of themselves, and certainly examples of grand architecture. The huge first and second hills of the Pepsi Max Big One can be seen from ten miles away, a superb lure for the thrills that await. Across the park is the massive network of rides, each crisscrossed and entwined with its neighbour, every conceivable inch devoted to thrill seeking. So unlike the theme parks popular today, particularly the Disney Resorts where everything is pre-planned, antiseptic and perfectly laid out, the Pleasure Beach has evolved slowly from a simple fairground. It lacks much theming, but its attractive, tangled clutter has become a theme in itself. No one could deny it is a place of the spectacular. And what happens when it stops? At the close of the last day of the season, after eight magical months, the Pleasure Beach was about to shut down. The people had gone, the music had stopped, the spectacular had disappeared. During the winter, the park and, to a certain extent, the town shut down completely.
Without the thousands of holidaymakers, neither can function. The rides and attractions stand frozen, the kiosks are shuttered, and the Christmas decorations hanging in hotel windows on the promenade look strangely out of place in a town associated with summer holidays. During this period, the Pleasure Beach becomes a non-place. Marc Augé states that 'A non-place comes into existence when human beings don't recognise themselves in it, or cease to recognise themselves in it, or have not yet recognised themselves in
The metal gauges were in the form of a flat coil of wire or etched metal foil, and the semiconductor gauges were a strip of semiconductor material between two connection leads. The element was wafer-like and had an insulating backing material so that it could be stuck, like a postage stamp, onto surfaces using a suitable adhesive. Strain gauges worked on the principle that: where So when the element was stretched, its length increased, its cross-sectional area decreased, and there was also a change in its resistivity. The result was that the resistance of the element changed. Generally, the semiconductor strain gauges were made from silicon and had much higher gauge factors than the metal ones, which made them much more sensitive (gauge factors were between 100 and 175, or -100 to -140, depending on whether the silicon was doped with 'P' or 'N' type material, compared with gauge factors of about 2 for the metal ones), but this did not mean that they were automatically better than the metal ones. The strain gauges based on semiconductor materials were rather more expensive, more difficult to apply and had greater sensitivity to temperature changes than the metal ones. The greater sensitivity to temperature changes meant that the relative change in resistance was non-linear and therefore more complicated to work with. Due to these drawbacks, the metal foil strain gauges were chosen. Four active strain gauges are used in order to obtain the maximum possible bridge output voltage, to provide temperature compensation, and to make the sensor/transducer insensitive to forces and moments other than the one being measured. These are mounted on two perpendicular 45 helices that are diametrically opposite to
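The working principle described above can be sketched numerically. This is a minimal illustration, assuming a gauge factor of 2 (the typical metal-foil value given above) and a full Wheatstone bridge with four active arms; the excitation voltage and strain value are illustrative, not taken from the test rig.

```python
# Sketch of the strain-gauge principle: the fractional resistance change
# of one gauge is dR/R = GF * strain, and a full Wheatstone bridge with
# four active gauges (two stretched, two compressed) gives an output of
# roughly V_out = V_ex * GF * strain. GF = 2 is the typical metal-foil
# value; the other numbers are illustrative.

def gauge_resistance_change(gauge_factor: float, strain: float) -> float:
    """Fractional change in resistance dR/R for a single gauge."""
    return gauge_factor * strain

def full_bridge_output(v_excitation: float, gauge_factor: float,
                       strain: float) -> float:
    """Approximate output voltage of a four-active-gauge bridge."""
    return v_excitation * gauge_factor * strain

# Example: 500 microstrain on a metal foil gauge (GF = 2), 5 V excitation.
dR_over_R = gauge_resistance_change(2.0, 500e-6)   # 0.001, i.e. 0.1 %
v_out = full_bridge_output(5.0, 2.0, 500e-6)       # 0.005 V, i.e. 5 mV
```

The millivolt-level output is why a bridge circuit (rather than a single gauge and an ohmmeter) is used: it converts a tiny resistance change into a measurable, temperature-compensated voltage.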
In all these cases, credit to the private sector measured as a share of GDP shows no evidence of returning to its pre-crisis level, even though the economy shows strong signs of recovery. To further address this issue I will develop an interesting angle on the problem, asking whether GDP drives the evolution of credit to the private sector, a question which, as far as I know, is not addressed in the literature on banking crises. Considering Table 3, a number of issues are worth emphasising. In the cases of Argentina and Mexico, when the full sample is considered, the evidence suggests that GDP Granger-causes credit to the private sector. When sub-samples are considered the evidence also indicates causality in this direction but, since the number of observations is limited, the results are taken only as indicative. The case of Indonesia shows a slightly different result: the causality seems to run both ways, particularly after the financial crisis in late 1997. Up to now I have addressed the vulnerabilities and factors that triggered recent banking crises, as well as the main characteristics of the Argentine bank distress. In the remainder of the paper I will set out a series of policy recommendations, based on the previous sections' discussions and the analysis in Acosta Ormaechea and Todesca (2002) and Fanelli (2002), to address two main issues: (i) how to improve the current situation of the Argentine BS and (ii) how to avoid the vulnerabilities observed before the BS collapse. The role of financial institutions is, essentially, one of arbitrage between investors and savers, which is carried out efficiently only if economic agents find a stable economic framework. So the consolidation of a stable macroeconomic and institutional setup seems to be the first step towards the consolidation of the Argentine BS.
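The Granger-causality tests summarised in Table 3 follow a standard regression recipe, which can be sketched on synthetic data: regress one series on its own lags (restricted model), then add lags of the other series (unrestricted model) and ask whether the fit improves, via an F statistic. The series below are simulated, not the actual credit and GDP data, and the lag length of 2 is an illustrative choice.

```python
import numpy as np

def lagmat(series, lags):
    """Columns are the series lagged by 1..lags periods, aligned with
    the trimmed sample that drops the first `lags` observations."""
    n = len(series)
    return np.column_stack([series[lags - 1 - j : n - 1 - j]
                            for j in range(lags)])

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def granger_f(y, x, lags=2):
    """F statistic for H0: lagged x adds nothing to a regression of y
    on its own lags (larger F = stronger evidence x Granger-causes y)."""
    n = len(y)
    Y = y[lags:]
    ones = np.ones(n - lags)
    restricted = np.column_stack([ones, lagmat(y, lags)])
    unrestricted = np.column_stack([restricted, lagmat(x, lags)])
    ssr_r, ssr_u = ssr(restricted, Y), ssr(unrestricted, Y)
    dof = len(Y) - unrestricted.shape[1]
    return ((ssr_r - ssr_u) / lags) / (ssr_u / dof)

# Synthetic example: x drives y with one lag, so x should Granger-cause
# y but not the other way round.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_xy = granger_f(y, x)   # large: lagged x helps predict y
f_yx = granger_f(x, y)   # small: lagged y does not help predict x
```

In the paper's application, y would be credit to the private sector and x would be GDP (and vice versa for the reverse test); with short sub-samples the degrees of freedom shrink, which is why those results are treated only as indicative.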
In this sense the efforts of the Argentine government should be concentrated on restoring the economic and institutional bases of the economy, markedly affected after the collapse of 2001-2. From a macroeconomic perspective the economic authorities should promote: (i) a stable exchange rate, (ii) a conservative monetary policy to avoid inflationary pressures, (iii) a controlled fiscal budget, and (iv) the renegotiation of the defaulted public debt. One of the characteristics of the Argentine crisis is that it involved not only the collapse of the economy but also important political and institutional distress. In this area, the challenges seem to be: (i) improving the quality of Argentine institutions and (ii) redesigning the bases for a new contractual arrangement in the banking system, not based on foreign currency-denominated assets and liabilities. One of the most important tasks for the Central Bank is to accommodate its role and the BS to the new economic environment generated after 2001. In particular, it seems necessary to deal with the following issues: (i) determining the norms that regulate the role of the Central Bank as a lender of last resort, a
Folk illnesses are another area of lay beliefs about health and illness which reflects the notion that lay concepts are lay. Folk illnesses are influenced by the culture within which they are formed and have a range of symbolic meanings with moral, social and psychological dimensions (Calnan, 1987). These influences on a person's beliefs about health and illness assist in the diagnosis of illnesses. Helman examined ideas about infectious illnesses amongst a North London community, where a folk classificatory system was identified (cited in Calnan, 1987: 56). This system suggested that the lay people within the community can predict, from the symptoms, what the cause of an illness may be and what the treatment should be. An example of this would be colds: the folk classificatory system associates a cold with cold, rainy days (Calnan, 1987). Folk illnesses therefore show that lay concepts of health and illness are lay, as they are influenced by the culture within which they are formed rather than by the medical profession. Individuals interpret their symptoms and condition themselves before seeking a doctor's advice, and so only a minority of ailments are actually seen by the doctor. This is known as the 'iceberg of morbidity' and reflects the notion that lay people form their own beliefs about health and illness before consulting a member of the medical profession (Freund and McGuire, 1999: 167). An ordinary person's own experience, socialisation, cultural background and immediate social network, such as friends and family, help to shape lay people's beliefs about health and illness (Freund and McGuire, 1999). As Dingwall said, the interpretations of symptoms made within the framework of lay concepts of health and illness are 'intrinsically different to medical knowledge' (cited in Calnan, 1987: 142).
Prior argues strongly against the notion that lay people are experts: she believes that lay concepts of health and illness are lay, and nothing more (2003). Prior argues that although lay people have their own knowledge about health and illness, and can become expert within it in order to challenge medical hegemony, they can never be complete experts in health and illness (2003). This is because lay people do not have the appropriate skills to diagnose correctly; they can often wrongly identify the causes of illness and in
Of course, freedom requires a condition Meyer (2000) calls "self-management". He paid attention to the self-management ability of individuals, noting that an organization is made up of many staff who work independently, without heavy control or guidance from the top. He insisted that the creative process operates well only when individual objectives are in line with collective and corporate objectives. Self-management treats the individual, whose impact is felt throughout the organization, as a crucial and proactive part of the whole creative process. If all employees have the ability to understand the strategic goals and the approaches to achieving them, the organization's management will trust them to work freely. Resource Two main resources are mentioned in Amabile's 1998 research on creativity: time and money. Managers need to allocate these properly to increase people's motivation. The schedule should be neither too rushed nor too long, and needs to fit the project's requirements. In the same way, managers should not allocate other resources, such as money, so tightly that staff waste time finding additional resources. On the other hand, too much resource does not produce creativity either. In addition, Amabile argued that the traditional idea that it is important to create a physical space for teams is not crucial; managers should pay more attention to other, more proactive actions. Robinson and Hackett (1997) took the opposite perspective on workspace to Amabile's. They presented a case study of a furniture design and manufacturing company: by moving the leadership's workplace from a private room to an open, informal community room, the team increased communication and sped up decision-making across the whole organization.
Work group features Amabile (1998) suggested that a creative team should have multiple skills and backgrounds, bringing different thinking methods and professional areas, to encourage creativity. Furthermore,
If they are to be taxed both by the country where the central management and control / registered office of the group's holding company lies and by the countries where it chooses to incorporate branches or subsidiaries, they will have little incentive to extend their activities beyond the borders of the country where the parent company has its registered office. They would therefore be precluded from making profits abroad and returning part of that income home in the form of dividends, as well as from taking know-how and expertise abroad and creating jobs in possible host countries. It would then appear to be in the best interest of countries desiring to favour the free circulation of people and capital in and out of their territories to apportion between themselves the taxable income generated by these migrant companies and individuals, thereby providing some relief to the latter so as not to disincentivise them from engaging in cross-border transactions which might inure to the benefit of the Exchequer. After much argument, the situation is now far more settled in the form of the aforementioned bilateral and multilateral treaties, inspired by those prepared by experts in the form of the OECD and the UN Model Conventions. The most critical issue covered by these Conventions is the conflicts that may arise when
Ad Hoc Committee on a Comprehensive and Integral International Convention on the Protection and Promotion of the Rights and Dignity of Persons with Disabilities, A/61 (2006) at 8. Article 6 of the Convention is expressly devoted to women with disabilities. It recognizes the multiple discrimination faced by women and girls and requires, in rather vague language, that the state parties \"take measures to ensure the full and equal enjoyment by them of all human rights and fundamental freedoms.\" While article 6 clearly delineates the Convention's status on women with disabilities, it leaves something to be desired in terms of implementation. Ad Hoc Committee on a Comprehensive and Integral International Convention on the Protection and Promotion of the Rights and Dignity of Persons with Disabilities, A/61 (2006) at 15. Article 8 outlines the obligation of states to raise awareness about persons with disabilities. This article is quite innovative and explicitly obliges the state \"to combat stereotypes, prejudices and harmful practices relating to persons with disabilities, including those based on sex and age, in all areas of life.\" The value of the article lies in section 2, where express measures are detailed. One progressive measure included in the article is the obligation of states to encourage the media towards portraying persons with disabilities in a fair and non-stereotypical way. This is particularly important for women, as they are most often portrayed in a negative manner (weak or vulnerable) in the media. If the media were compelled to portray women with disabilities (and all persons with disabilities for that matter) as strong and empowered individuals, many of the myths and stereotypes about persons with disabilities would be undermined. Ad Hoc Committee on a Comprehensive and Integral International Convention on the Protection and Promotion of the Rights and Dignity of Persons with Disabilities, A/61 (2006) at 16. 
Article 12 deals with equal recognition before the law. While it addresses many accessibility issues, it does not expressly require that accessible legal services actually exist. The issue of accessibility for persons with disabilities encompasses more than simply allowing them the use of legal aid. For example, in the city of Ottawa, with a population of 750,000 as of 2005, there was only one lawyer in the city who dealt exclusively with issues relating to disability. Canadian law schools are only just beginning to offer courses on disability, and normally run such courses only on a bi-annual and optional basis. One can only imagine the situation in a developing country that has twice as many persons with disabilities and one-tenth of the resources available.
He was the Italian novelist with whom the generation born at the dawn of the new century can identify.' Ettore Settani, '", "label": 0 }, { "main_document": "There are 2 main pathogenic microorganisms of concern A pathogen modelling programme was used to assess the relative risk from each under different conditions and my findings follow. Assuming the salmon contains 3.8. % (w/w) Sodium chloride has a mean pH value of 6.2, at 5 Under aerobic conditions at level of concern is not reached in 10 days. 2.97 cfu/g predicted. Under anaerobic conditions the level of concern is reached in less than 10 days; between 8 and 8.5 days. Next the effect of varying conditions 'The worst case scenario' was considered i.e. If temperature abuse occurred and samples were transported at 25 4 alternative appropriate conditions were selected based on this. (Inhibiting The 4 conditions, listed below, were considered appropriate for limiting The programme gives values for the probability of microorganisms growing above the level of concern and also an estimate of the number of days in which this will occur. The 2 Conditions which resulted in acceptable levels of microorganisms i.e. lower than the concern levels in the delivery time 10 days are highlighted and are my suggestions for conditions that should be maintained to prevent growth of bacteria or toxin formation. The following are therefore the most acceptable conditions for storage and delivery are therefore: Although the conditions 2 and 3 have been rejected they were tested under worst case scenario conditions and might be useful but only after carrying out storage tests to ensure it is safe .This is because there is a high probability of microbial growth after a short time and dangerous spore levels are reached very quickly, However, if temperatures of 5 Finally, in answer to your questions: Would it be acceptable to post the salmon in a simple lined, cardboard box? No. You would need some form of insulated packaging. 
What conditions during transit would be required in order to ensure the delivery of a safe product with an adequate (5d) storage life at 5 To begin with, the fish should have low spore and bacterial counts, have the right salt concentration, and be within the prescribed pH range. Temperature of 5 This should be checked and recorded (using calibrated instruments) by the transporter and retailer of your product, to assure the customer that the correct conditions have been maintained. Any irregularities should be reported. Is vacuum packing desirable? No; it would be advisable to pack the salmon in modified-atmosphere packs (a higher concentration of carbon dioxide than normal if possible, e.g. 60% CO Alternatively you may consider smoking the salmon and then vacuum packing it. This would allow you to mail-order the salmon to your customers. 'Acceptable' means poisoning could occur, but all reasonable measures would have been taken to avoid it by providing unfavourable growth conditions. Zero risk is not achievable without sterilization. Please contact me if you have any further questions. Sincerely,
The year 1985 seemed to be a breakthrough, as the rate of land distribution to individuals was 300% higher than during the previous six years (Martinez, 1993), and other changes followed. It could be described as an expression of the Sandinista government turning more boldly from its initial policy of national unity (aimed at maintaining a "multi-class alliance") to popular hegemony, as Luciak (1987) observed. Cooperatives, which had proved not very popular with the peasantry, became more flexible and diversified, i.e. combining collective and individual production (Kaimowitz, 1988). Cheap food policy principles had to be revised in the context of the economic hardship the state faced (including high inflation and an unstable situation due to the war), so producer prices were raised to stimulate production, subsidies on basic foods were reduced, and investment money was directed to productive inputs and rationalisations in the peasant sector (Martinez, 1993). The 1986 reform of the agrarian reform law (Presidential Decree no.14) had an important feature: the elimination of the bottom acreage limits for expropriation of idle, underused, or rented land (previously the limits applicable were: 500 The new law also eliminated compensation for expropriated properties, and formalised expropriations for public use or social interest (Luciak, 1987). It can be said that hunger for land was partially satisfied, but the state of the economy that Nicaragua achieved under Sandinista rule was rather poor. At the end of the decade, they had to admit that the market needed more independence from the state, so by 1989 it was allowed to determine important economic variables (previously under state control), including the exchange rate (Ryan, 1995). Certain austerity measures were taken in an attempt to revive the economy (Everingham, 2001). The Nicaraguan Agrarian Reform was declared finished in early February 1989 by the then Minister of Agriculture, Jamie Wheelock Roman.
It meant that the process of redefining the agricultural relations of production was complete. According to Martinez (1993), it resulted in one of the most successful land redistributions in Latin America, as approximately 2,024,000 hectares were handed over to 120,000 families. However, it is argued that Nicaragua at the same time became poorer during the Sandinista decade in power, and that the basic nutritional needs of the less affluent part of the nation were not satisfied, as had been promised in the early days of the revolution (Biondi-Morra, 1993). The face of the Nicaraguan countryside had been transformed, but the
I destroy all those who with arrogant pride oppose me." (Euripides, Hippolytus 6). Not only is he killed, but his step-mother commits suicide and his father is ruined. It is a strong image, and it seems hard to believe this is what people actually thought could happen. The genre of theatre cannot be accepted as universal truth, as plays have a purpose, and that is to entertain. However, it is known from medical sources that it was not only society that pushed for the convention of marriage. Marriage, and therefore sex, was suggested by doctors as a cure for some maladies at the time (Lefkowitz 1981: 13-15). Myths concerning girls who stray from what is expected of them appear to preach that (Deacy 1997: 44). Therefore they encourage pursuit from gods and the like. This reflects the view some people have about rape in society today: the person might have brought it on themselves somehow. It is much easier to believe that this more subliminal message is how the ancient Greeks could have thought, as from what we know of their society, women were often kept indoors to avoid "seduction or rape" (Deacy 1997: 49). Myths about young women who have been led astray always end in misery for the girl involved (Sissa (b) 1990: 358); there is no happy ending for girls who do not follow convention. Overall the myths seem to promote 'normal' marriages as the best option for happiness and health. Therefore young girls especially know from an early age that marriage is their target (Dowden 1989: 144). It is an essential part of their transition into womanhood. Virginity also appears to have an important role in ritual and tradition. There were three virgin goddesses, therefore the likelihood of virgins playing a role in the religion at the
The equipment was set up as for white-light, using a Mercury lamp, and Mercury fringes were observed (see Figure 10). From the repetitiveness of the fringe pattern, it is evident that the emission spectrum of Mercury contains a number of spectral lines rather than the continuous spectrum seen with white-light. As in section 3.2, intensity was recorded as a function of micrometer reading using the photodiode. The beat pattern observed was longer than the total adjustable length of M1, and therefore the coherence length could not be measured directly. The coherence length of the individual spectral lines was calculated from the emission spectrum (equation (6)). This was done by a Fourier analysis of the recorded intensity-versus-extension data. Consider equation (8) for two waves of equal amplitude, the contribution to intensity for a small increment in wavenumber where the extension By integration it can be shown that: where Now by taking the real (cosine) part of the Fourier transform of intensity, the observed wavenumber spectrum of the sample can be found. The computer performed the Fourier transform on these data to produce a graph of k space (Figure 11). For each Using data shown in Figure 12, which is colour coded to correspond to Figure 11, the relative abundances of the measured spectral frequencies can be found. The low signal-to-noise ratio in Figure 11 means that the information displayed is not of a high enough degree of accuracy for any calculations of coherence length to be made. However, this does at least offer an insight into how this method can be used, and had the data been recorded over a larger range of micrometer readings the signal-to-noise ratio could have been improved. The anomalous peak, which precedes the yellow peak, is in fact in the infrared range, and the data quoted are only for the visible spectrum.
The refractive index of a material is defined as In order to determine the refractive indices of various gases the Helium-Neon laser was, once again, mounted upon the interferometer apparatus and a gas cell of length The cell, as shown in Figure 13, could be filled with the different gases, which were stored within a gas bladder and released by opening the appropriate valves. The length of the gas cell, disregarding the glass at either end, was taken to be 40.57 mm, and a pressure gauge was attached to the cell so that the pressure could be determined at any given time. When the gas cell is evacuated the laser light travels through the cell at At this time, the cell length The frequency of the laser light is directly associated with the energy of the photons produced by the laser As a gas of refractive index Using This shows that as the pressure in the chamber is increased to atmospheric pressure As the wavelength during this time is effectively shorter, the length of the cell will hold By equating The difference in wavelengths between the beam that reflects from
This institution was a symbol of the French spirit of order, which contrasted with the Spanish American one, and it also replaced the This great political error caused the crown to make unnecessary enemies, as the Creoles were discriminated against when all the administrative vacancies were filled by The institution was successful for a brief period in the finance and military fields; however, it backfired at the turn of the century, since it had reawakened the political consciousness of There was resistance to the Spanish tutelage, as the Creoles in the River Plate, Cordoba and La Paz called for the defense of municipal freedoms. The Bourbons wanted to better utilise the Indies via the development of wealth and the growth of population. "They aimed at an increase of trade, production, consumption, and navigation, and at the centralization of revenue, protection of national industries, and a more equitable distribution of wealth." Consequently as the Bourbons advocated and implemented the New areas of trade were opened to colonial centres, as Buenos Aires, Caracas, Cartagena and Havana could trade with other ports in Spain. However, this went against some of the monopolistic positions of the Spanish American merchants who had greatly benefited from the monopoly of Cadiz and Seville. Furthermore, it was the Spanish policy of an interventionist state, under which an artificial economic exchange continued between Spain and the Indies, that infuriated the Creoles the most. Other sectors of society "exacerbated anti-Spanish resentment" Indians, Creoles and the clergy were all harassed to "donate" to the Crown, whilst also suffering the dreaded Merchants were banned from trading with other nations; however, their discontent was so intense that free trade was allowed in 1789.
Immediately this improved the economic situation as it \"unleashed productive energies, opened broader markets and capitalized the mining industry.\" The fact that the state had to give in to Creole demands represented the slowly diminishing Spanish control and correspondingly the increasing self-awareness of the Creole population. According to Stoetzer the tax burdens and the unfair economic system \"engendered a climate of resentment and a desire for some degree of local autonomy.\" Carlos O. Stoezter, The scholastic roots of the Spanish American Revolution, (New York, 1979), p.117 Translation from Spanish: free trade under the protection of the state Edwin Williamson, Sales tax introduced to increase the", "label": 1 }, { "main_document": "itself to straightforward implementation. The amplitude of each point along the in-phase axis is used to modulate a cosine wave and the amplitude along the quadrature axis to modulate a sine wave. In PSK, the constellation points chosen are usually positioned with uniform angular spacing around a circle. This gives maximum phase-separation between adjacent points and thus the best immunity to corruption. They are positioned on a circle so that they can all be transmitted with the same energy. In this way, the modulation of the complex numbers they represent will be the same and thus so will the amplitudes needed for the cosine and sine waves. Two common examples are BPSK, which uses two phases, and QPSK, which uses four phases, although any number of phases may be used. Since the data to be conveyed are usually binary, the PSK scheme is usually designed with the number of constellation points being a power of 2. To data, we have considered only single property modulators using either phase, amplitude or frequency symbols for conveying the data. 
We may consider that a modulation combining two or more symbol types could give improved performance in the inevitable tradeoff between bandwidth efficiency and noise performance, and this is indeed the case. The most common such scheme is Quadrature Amplitude Modulation (QAM), a combination of PSK and ASK. The scheme modulates the signal onto a sequence of complex numbers that lie on a lattice of points in the complex plane, called the constellation of the signal. In QAM, the constellation points are usually arranged in a square grid with equal vertical and horizontal spacing, although other configurations are possible. Since in digital telecommunications the data are usually binary, the number of points in the grid is usually a power of 2 (2, 4, 8...). Since QAM is usually square, some of these are rare - the most common forms are However, if the mean energy of the constellation is to remain the same, the points must be closer together and are thus more susceptible to noise and other corruption; this results in a higher bit error rate, and so higher-order QAM can deliver more data less reliably than lower-order QAM. If data-rates beyond those offered by 16-PSK are required, it is more usual to move to the better alternative, QAM, since it achieves a greater distance between adjacent points in the I-Q plane by distributing the points more evenly (Figure 1.2). The complicating factor is that the points are no longer all of the same amplitude, and so the demodulator must now correctly detect both phase and amplitude, rather than just phase.
The error ratio is usually expressed in scientific notation; for example, 2.5 erroneous bits out of 100,000 bits transmitted would be 2.5 out of 10^5, or 2.5 × 10^-5. The most commonly encountered ratio is the bit error ratio (BER) - also sometimes
It may be possible to introduce this survey on the farm, although only one major road passes the farm, so species are unlikely to cross the highway often. A suitable number of deaths needs to be recorded before statistical analysis can be carried out; therefore only common species (e.g. hedgehog, rat, fox and badger) can be suitably correlated to abundance. A more efficient way of estimating abundance is to use clay tiles that record footprints as the animal treks across the tile (Game Conservancy Trust). Hair-trap tunnels can also provide hair samples for identification using guides or keys. A DNA library of hair samples (cytochrome B sequence) is being established that will allow DNA identification of hair samples (Battersby, 2003). At present 24 mammals have been sequenced, but a quicker way to analyse samples is needed for this type of surveying to be used nationally. This would in turn allow the recruitment of more volunteers, who would not be required to have specialist skills in identifying hair samples to species.
The forum basilica is found, as expected, at the centre of the town, with roads radiating from this centre at right angles. The clear outline given to the town by its walls reinforces the idea of a developed infrastructure. And again, if we look at this map of Roman London, the relevant features exist. Forming a model in this way enables archaeologists to identify the functions of particular buildings just by looking at their structural layout and, to an extent (this works particularly with the constant central position of the forum), by their position in the town. Archaeologists also need to consider regional and cultural diversity and chronological change. 'One has only to compare visually a modern village in Kent with one in Northumberland to appreciate the differences, which must have been equally marked in the roman period, caused by variation in building materials and local vegetation.' (Grew & Hobley 1987) We must take regional differences into account when comparing sites; we cannot assume that, just because Roman towns were occupied at the same time, they will necessarily have identical functions in the same places. Similarly we must take into account chronological differences; a town from republican Rome in Italy will again differ from one constructed in the late imperial period. The city of Rome's physical appearance was immortalised in Virgil's 'Aeneid'. In book eight he describes Aeneas' arrival in the city and his greeting and guided tour by the king. 'It is a poetic tour de force, a journey through time as well as space' (Coulston & Dodge 2000) The picture of Rome that Virgil describes can be matched up to the archaeological remains of the city to create a full picture of the site as it was in the Rome of the Kings. Indeed, ancient documentation is highly valuable when assessing the functions of different sites and areas.
'Trawling through the byways of Ancient literature has yielded a rich trove of abstruse knowledge which has been used to reconstruct everything from the history of the early Roman calendar to the topography of the Roman forum'", "label": 1 }, { "main_document": "pluripotent stem cells divide symmetrically (in culture) and give rise to 2 daughter cells (Donovan and Gearhart, 2001). These cells are exact copies of one another. When symmetrical division goes on for a certain period of time, the cells are said to show long-term self renewal. Figure 5 shows the process of self renewal. The state of long-term self-renewal is controlled by extrinsic and intrinsic factors (Watt, and Hogan, 2000). It is believed that the intrinsic factors make up the stem cell niche. A niche is defined as the combination of the cellular microenvironment and extra-cellular matrix that serves as a \"shelter\" for an indefinitely long period of time (Musina, It produces various factors promoting the survival of the stem cells and the maintenance of their undifferentiated state. The specific factors and conditions of this niche are of great interest to researchers; but, unfortunately, few factors are known for pluripotent stem cells. These factors are believed to be secreted factors and they are the same factors associated with the cells that have just been derived from their precursors (LIF, STAT 3, bFGF, SSEA4, SSEA3) (Donovan and Gearhart, 2001). This discovery (which took 20 years of trial and error) allowed pluripotent stem cells to be expanded in culture before any induction of differentiation. For example, human ES cells rely on embryonic fibroblast feeders to maintain their undifferentiated phenotype (Przyborski, It is also believed that the self-renewal process requires cell to cell contact mediated by integral membrane proteins (e.g. 
the receptor Notch and its ligand Delta) and integrins adhering to the extra-cellular matrix (they keep the stem cells in their niche and can directly activate growth factor receptors) (Watt and Hogan, 2000). However, these last 2 concepts have only been demonstrated in stem cells other than the pluripotent. Figure 6 shows a schematic diagram of the differentiation process. The exit from their niche (in vivo) or the removal of those factors (in vitro) is correlated with the transition to irreversible differentiation (Musina, It is not known whether it is the exit from their niche that sets off the differentiation or whether it is the spontaneous differentiation that causes the exit. This relationship has been investigated in laboratories, primarily using mice, by testing with 3 independent assays (carried out by Gearhart and Donovan): in vitro differentiation in a Petri dish, differentiation into teratomas when placed in histocompatible mice and, finally, in vivo differentiation when introduced into the blastocoel cavity of a pre-implantation embryo (Donovan and Gearhart, 2001). These experiments have shown that all the pluripotent stem cells can differentiate in vitro and give rise to specialized cells representing the 3 primary germ layers of the embryo. During their differentiation, a decreased level of the original markers and an increased level of new markers were noticeable (Przyborski, It is also believed that the differentiation is dependent on chemicals secreted by other cells. For example, cultures of EC stem cells exposed to retinoic acid or HMBA lose their cell surface markers SSEA3, SSEA4, and TRA-1-60 and simultaneously acquire a variety of new antigens (Przyborski, The stem
However, it can be estimated that young tourists account for as many as 20% of all international travellers. The foremost markets of origin for the segment analysed used to be the UK, Germany, the USA, Spain, Austria and Ireland, according to Bywater (1993), and there was, and still is, growing demand from the Eastern European countries. Nonetheless, there is a significant change in the primary origin markets. There is a growth of arrivals from East Asia and the Pacific, including China, and from Africa (WTO, 1999), and these markets will account for the majority of international arrivals to Europe by the year 2020. The latter data are not specified for the youth segment, but it can be assumed that they will affect it in the same way, as Bywater (1993) had already predicted that the rising markets of the future would be Eastern Europe, South America and Asia Pacific, including China. Gherrissi-Labben and Johnson (2004) state that one reason this market sector has not been well researched might be that the centre of attention in many western economies is the ageing population, even though, from a commercial viewpoint, young people are generally essential to society. Another reason for the lack of attention might be the slightly negative picture of them in the mind of the tourism trade. The general image of young and student travellers is that of 'a stereotype of budget traveller, staying only for a short period, unreliable by nature, and behaving irresponsibly' (Carr, 1998, Horak and Weber, 2000 cited in Gherrissi-Labben and Johnson, 2004, p. 26). Nevertheless, the reality appears to be different, as a number of studies show.
According to Horak and Weber (2000, cited in Gherrissi-Labben and Johnson, 2004), young travellers have average or above-average expenditure habits, stay for a rather long period, are much more likely to try new tourism products, are very familiar with information technology and use it easily, behave in a sophisticated way and, most significantly from a commercial perspective, could be a potential clientele for the coming fifty or more years. They might be a tremendous future source of loyal customers for the tourism industry if their expectations are met. If they have a good experience of a destination or an activity during their vacation or trip, and they leave with a positive memory, they are very likely to return, as they will be present in the market for many years to come. On the other hand, youth tourism is mostly connected with education and a willingness to learn (Gherrissi-Labben and Johnson, 2004). In the case of the Scottish Parliament this fully applies, as students are more than welcome to experience the parliament on a business day or non-business day, depending on what exactly they wish to gather information about and which part of the Parliament they want to learn more about. There is a wide range
The next part of the function was the while statement, which was reached when the if condition was true. The statement was while (!(*VIA_IFR & 2)) ;, a busy-wait loop with an empty body: it repeated as long as bit 1 of the interrupt flag register was clear. Once the bit was set, the value of sound was written to the register, the other parameter was passed to the delay() function to produce the pause, and control then returned so that the condition !(*VIA_IFR & 2) could be tested again for the next word. The next task was sending parameters after checking the value of the time input on the terminal. Many different if statements were implemented in the program to check the range of the time input by the user, such as whether it lay between 10 and 20, or 20 and 30, and so on. Having identified the right range, the parameters were sent to the speak() function. If the value of minutes or seconds was greater than 20, the speak() function was called twice, because at that stage two distinct words were required to be spoken by the clock. For example, if the minutes input by the user was 22, the first call to speak() was given the parameter for speaking 'twenty'; the other value was obtained by subtracting 20 from the input, and the corresponding array index was then sent to the function. Since the array was 23x14, a for loop was used to send parameters to the speak() function, taking values from that array index starting from the first; 14 different values were sent to the function when checking the range in the program. The same approach was implemented for the seconds, as they needed to be handled in the same way as the minutes; throughout the program, the minutes and seconds code carried the same logic. When the clock had spoken the minutes, a pause was provided and the and() function was called at that point. 
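The logic described above can be sketched in C. This is a minimal reconstruction, not the original source: the register name VIA_IFR comes from the report, but its address, the output register, and the exact sound-index mapping are assumptions, so the hardware accesses are left as comments and the two-word split is shown as a hypothetical helper.

```c
/* speak(): sketch of the routine described in the text. It does nothing
 * when either parameter is zero; otherwise it would busy-wait on bit 1 of
 * the interrupt flag register, latch the sound index, and pause. The
 * memory-mapped accesses are commented out because the register addresses
 * are hardware-specific and not given in the report. */
void speak(int sound, int delayTime)
{
    if (!(sound == 0 || delayTime == 0)) {
        /* while (!(*VIA_IFR & 2)) ;   spin until the speech chip is ready */
        /* *SOUND_REG = sound;         latch the word to be spoken         */
        /* delay(delayTime);           pause before the next word          */
    }
}

/* split_value(): hypothetical helper showing how a minutes/seconds value
 * above twenty is split into two spoken words, e.g. 22 -> 'twenty' + 'two'.
 * It returns the number of words and fills parts[] with the sound values
 * to pass to speak(); here a sound value is simply the number it names. */
int split_value(int value, int parts[2])
{
    if (value <= 20) {                /* one word: 'zero' .. 'twenty'      */
        parts[0] = value;
        return 1;
    }
    parts[0] = (value / 10) * 10;     /* tens word: 'twenty', 'thirty', .. */
    if (value % 10 == 0)
        return 1;
    parts[1] = value % 10;            /* units word: 'one' .. 'nine'       */
    return 2;
}
```

For an input of 22 minutes, split_value() yields the two sound values 20 and 2, so speak() is called twice, matching the two-word behaviour described above.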
Throughout the program, pauses are provided by calling the delay() function with suitable parameters as required. Whenever parameters are sent, the speak() function takes two of them: one is the sound value and the other is the delay time. In the", "label": 0 }, { "main_document": "from Roach's: // The above symbol certainly leaves a stronger impression of segregation through its \"doubled\" character than Roach's simple '|'. One could argue that a double character does not make sense if there is no simple version of it, perhaps marking a less \"important\" boundary. But as Brazil does not introduce another borderline concept, neither \"stronger\" nor \"weaker\" than the tone-unit boundary, his choice might be justified by the nature of the publication, a student handbook. To have a bold notation might make the issue easier to grasp. Instead of splitting the tone unit up into pre-head, head, tonic syllable and tail, Brazil only differentiates between the tonic syllable and other 'prominent' syllables. He defines 'prominent syllables' simply by stating that they are \"more noticeable than the others\" (Brazil 1994:8). Prominent syllables are marked by upper-case letters: The tonic syllable, which he describes as being noticeable in a different way, for example a fall in pitch (Roach 1991:9), is marked by underlining: Brazil points out that if there is only one prominent syllable in the tone-unit, it must be the tonic syllable. The tones he introduces are 'falling', 'rising' and 'fall-rise'. The fact that the 'level' and 'rise-fall' tones are not included in the discussion (they are merely mentioned in the final summary) is understandable considering they are not as common and therefore not a vital issue for Brazil's handbook. 
A useful distinction, very much in line with his functional approach, is his division into a proclaiming tone (fall) and referring tones (rise and fall-rise), the members of the latter group referring back to knowledge shared by speaker and hearer to different extents (Brazil 1994:33). As indicators of tone, Brazil introduces diagonal arrows: However, these arrows are not placed directly in front of the tonic syllable like Roach's tone symbols; instead they are positioned right at the beginning of the tone-unit, succeeding the tone-unit boundary symbol: This might be a rather unfortunate choice, especially for long tone-units. Having an arrow pointing downwards right at the beginning of a tone-unit is likely to confuse some students as to where the fall in pitch actually occurs, which is towards the opposite end of the tone-unit in most cases. It may not be a huge cognitive obstacle because the place of pitch change is marked by underlining, but certainly it is slightly less convenient than Roach's system. A good choice, on the other hand, is the omission of terminology that is not essential for the purposes of the handbook. Without introducing the terms 'pre-head', 'head' and 'tail', Brazil still manages to convey effectively how and where, apart from the tonic syllable, intonation takes place in the tone unit. He points out that the tone begins on the last prominent syllable and is continued over the remainder of the tone-unit (Brazil 1994:20, 36), and although he does not talk about 'high head' and 'low head', he works with the notion of 'key'. It is noteworthy, however, that instead of the dichotomous parameter Roach applies (a 'head' is either 'high' or 'low'), Brazil assumes a default 'mid", "label": 0 }, { "main_document": "out, although it was quite difficult to see the effect. Apple juice gave a positive result while the other juices appeared to give a negative result. 
Apples are known to be rich in pectin, while in the other fruits used (grapefruit, lemon and orange) the main pectin source is the peel, which is not involved in this process. The apple juice also gave a positive result for starch when tested with iodine. This may show that starch contributes to the cloudiness of the fruit juice, as it was hazy, although haziness can be caused by many factors so it is difficult to identify the main cause. It was not possible to make a comparison with commercial clear apple juice as none was available. It would have been interesting to see whether starch was still present in the clear apple juice, as this would show that starch can be present without affecting clarity. The other fruits showed negative results for starch. The pH of the freshly squeezed and commercial orange juices is very similar, indicating that processing does not have much of an effect on the pH of this juice. The commercial grapefruit juice is very slightly more acidic than the freshly squeezed, but there is not a major difference. The lemon juice is more acidic than the other citrus fruits, which is to be expected, but interestingly the apple juice has a lower pH than the orange and grapefruit juices. The low pH could be because cooking apples were used, and they are slightly more acidic than eating apples. When producing apple juice commercially it is beneficial for the apples to be sweeter in flavour, so the type of apple and the point of ripening are important. The sweeter they are, the less need there is to add sugar or sweeteners, since they give a naturally sweet taste. The total solids content was examined using a refractometer. The most interesting result was the difference between the commercial and freshly squeezed grapefruit juices. The commercial juice had a total solids content of 10% while the fresh juice had only a 3% content. 
There did not appear to be an obvious difference in clarity, and if there had been time it would have been good to take the readings again to make sure they were accurate. An expected result would be that after processing the juice would have a lower total solids content than it started with, especially if clearing agents had been added. When comparing these juices, though, it must be taken into consideration that they have different origins. They were probably made using different varieties of the fruit and could originate from different countries, with different compositions of sugars, for example. Both the freshly squeezed and commercial orange juices show a similar total solids content, while the apple juice sample gives a lower reading. All of the clearing agents resulted in high clarity except for Peelzym, which still left a slight haze even at 0.3ml. It could be possible that more is needed for the", "label": 1 }, { "main_document": "(Tannen 1984 as in Schiffrin 1994: 106), considering whether different subcultures, such as gender, class or ethnicity, use different speech styles and, if so, what effect these speech styles have on others (Gumpertz, 1982: 1). Do these different styles cause miscommunication? The interactional approach to sociolinguistics highlights the significance of small differences in spoken discourse (Cameron, 2001: 106), with Maltz and Borker (1984) suggesting that males and females belong to different subcultures, which led to the decision to look at an all-male and an all-female transcript. The approach will allow me to investigate whether certain characteristics, noted in the previous literature review, explicitly characterise a speaker as one gender or the other, or whether formality and public speech act as variables. Tannen (1990) felt men were more comfortable in public talk, and women more comfortable in private talk, due to each gender's dominance in its arena (Tannen, 1990: 76/7). 
The approach looks at both non-verbal and verbal features. Non-verbal features include prosody (stress, rhythm, intonation etc.), paralanguage (voice quality, hesitation, speed etc.), body language and vocalisations such as laughing, crying or coughing (Cameron, 2001: 109). Verbal features studied within this approach include hedges, minimal responses, tag questions, commands and directives, interruptions, overlaps and simultaneous speech, swearing or taboo language, and so on. However, the data includes no visual record, and therefore some non-verbal features, e.g. body language, will be impossible to investigate. For this reason verbal features will be the focus of the analysis. The key features analysed will be interruptions and overlaps, minimal responses, hedges, and the use of questions and tag questions, due to their significance in gender research; other features may, however, be commented on briefly. The first transcript is an all-female sample of informal speech recorded and transcribed by the students who feature in the transcript. The participants are all university students in their early twenties and were all participating in the recording as part of an assignment. The second transcript is an all-male sample from a political interview. The four participants adopt the roles of interviewer, interviewee and two members of the public who act as challengers to the interviewee's responses by recalling personal experiences. The transcript source is BBC 2's The reason for analysing a public male transcript and a private female transcript is to look at extralinguistic features in the settings in which each gender is said to be most dominant (Tannen, 1990: 76/7). Results for hedges took context into account, as \"like\" is commonly used as a demonstration of speech. Research question and hypotheses: Men use a greater or equal number of hedging devices in public conversation as women do in private conversations. 
Men were found to use fewer hedging devices than women in these transcripts, with women using hedging devices 8 times and men 6 times. Men use a greater or equal number of tag questions in public conversations as women do in private conversations. Men were found not to use tag questions in this transcript, whereas women were found", "label": 1 }, { "main_document": "countries and found female education, debt dependence and economic disarticulation to play a significant role, whereas factors like industrialisation and state strength did not have very sharp effects. Strauss, Gertler, Rehman and Fox (1993) investigate the socioeconomic determinants of adult ill health across Bangladesh, Jamaica, Malaysia and the United States and find strong effects of education and, to an extent, of per capita household expenditure; they also find considerable life-cycle effects causing gender differentials in the reporting of health problems, with women reporting problems at an earlier age. Tulijapurkar and Boe (1998) have examined the impact of several factors such as education, gender, marital status, race and ethnicity on health; they have also studied the development of mortality and found mortality rates to be declining over time, and since life expectancy is negatively correlated with mortality, it would be found to be increasing. Parsons (1982) showed through a probit model that factors like mortality rate, age, prior unemployment experience, social security benefit and local welfare are negatively related with labour force participation, which is positively related with the hourly wage rate. A few other studies which could be mentioned are those on life expectancy by Carnes et al. (1996); Finch and Kirkwood (2000); Manton, Stallard, and Tolley (1991); and Olshansky et al. (2001). J. Strauss and D. 
Thomas (1998) followed Grossman (1972) and defined the health production function as one in which health H depends on a vector of health inputs N; labour supply L; socio-economic characteristics like gender, A; family background B, including parental health and local health infrastructure; the disease environment D; and unobservables. Of all these, health is decreasing only in labour supply, as greater work requires greater energy. In another study they found that taller men earn more money in Brazil even after controlling for important determinants like education and experience. They conclude that \"the balance of evidence points to a positive effect of elevated nutrient intakes on wages, at least among those who are malnourished\". Strauss (1986) found a strong positive impact of better nutrition on farm productivity in Sierra Leone using the agricultural production function and average caloric intake per adult in rural households. Several studies suggest an indirect effect of health in increasing the productivity of the labour force, for instance Harvey Leibenstein (1957), Mushkin (1962) and Sahn and Alderman (1988). Several empirical studies have been undertaken to determine the close relationship between health and education in economic development. A longer life raises the return to investments in education in several ways, while greater education capital raises the returns to investment in health. However, this is more common-sense knowledge than something proven by the literature. Behrman (1996) argues that associations do not necessarily indicate causality; he emphasises the role of unobservables depending on the choices made by individuals and their families, such as parental care and guidance. He incorporates variables like health, gender, pupil-teacher ratios and parental education, among others, and concludes that the effects may be understated or overstated, implying a downward or upward bias owing to correlation with the dominant unobserved factor. 
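In symbols, the health production function described at the start of this passage can be sketched as follows; this is a reconstruction from the variables listed, and the function symbol h and the error term \mu are notational assumptions rather than the authors' own notation:

```latex
H = h\left(N,\ L,\ A,\ B,\ D,\ \mu\right),
\qquad
\frac{\partial H}{\partial L} < 0
```

The sign restriction on the partial derivative records the statement that, of all the arguments, health is decreasing only in labour supply.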
Pollitt et al. (1989) found contradictory results for the", "label": 0 }, { "main_document": "be linked directly with the species absence in parts of East Africa today (Cowlishaw and Dunbar, 2000). All five sub-species of gorilla are currently under threat largely as a consequence of being hunted for body parts (Rowe, 1996). In Cameroon alone eight hundred individuals are taken each year. Due to their large size and slow reproductive rate it is near impossible for the species to recover from heavy hunting pressures. Most of these sub-species now exist in small isolated populations, resulting in devastating effects on population numbers. As a result of hunting the mountain gorilla ( The destruction of the rain forest increases access to primates. Thus habitat destruction exacerbates the effects of hunting and can have a dramatic impact, causing primate populations to plummet (Dunbar and Cowlishaw, 2000; Struhsaker, 1997; Wright and Jernvall, 1998). Therefore primates living in disturbed, fragmented or modified forest areas are likely to be more vulnerable to the pressures of hunting. For example there has been a high number of local extinctions of the woolly monkey ( Forest size and habitat heterogeneity also influence primate group size and distribution (Cowlishaw and Dunbar, 2000). Hunting and habitat loss will cause populations to decline in size. Small isolated populations are increasingly vulnerable to intrinsic factors, such as demographic and environmental stochasticity and the loss of genetic diversity (Leakey and Lewin, 1995). Apart from a few exceptional cases, extinction is a multi-stage process occurring patchily as populations are lost at different times and locations (Struhsaker, 1997). The extinction of a species occurs when its final population has vanished (Andrewartha and Birch, 1954); thus, species existing in small unstable populations stand a high chance of becoming extinct. 
The ability of a primate species to adapt to a new environment will strongly influence to what degree habitat degradation intensifies the effects of hunting pressure on that species. Primates such as red colobus monkeys and chimpanzees prefer old-growth forest areas and so are less able to adapt to habitat change, and thus are more vulnerable to hunting (Struhsaker, 1997). By contrast, the cercopithecines tend to be highly adaptable colonists, equipped to survive in a wide variety of habitats, and therefore often escape the limitations of living in a damaged habitat where hunting takes place. Primates that live near human settlements often become agricultural pests (Boulton and Horrocks, 1996; Else, 1991; Sprague, 2002). Traditional control strategies focus on attempts to eradicate pest species (Else, 1991), leading to the loss of some species from certain areas altogether. This can be fatal for primates in need of conservation. For example, it has been reported that pest control in the Arabuko-Sokoke Forest in Kenya is occurring at a level beyond the sustainable extraction rate for both the blue monkey ( In addition primate species that humans feel threatened by may be specifically targeted and hunted to such a degree that extinction becomes plausible. This has been witnessed in Africa where gorillas believed to be responsible for taking human babies have been actively hunted (Dunbar and Barrett, 2000). From the examples discussed in this paper it appears", "label": 1 }, { "main_document": "A considerable effort has been made in economics in the last thirty years with the aim of answering important questions regarding monetary policy: why an economy ends up with positive inflation rates without any remarkable positive effect on output, and why it seems difficult to achieve a final outcome with zero inflation and, at least, the same output level. 
To answer these questions, the seminal papers by Kydland and Prescott (1977) and Barro and Gordon (1983a) (henceforth KP and BG, respectively) first developed an analytical framework based on the so-called time consistency problem (TCP). The policymaker faces a trade-off between unexpected inflation and output but, as the public incorporates this, expected inflation is set at a sufficiently high level that the government's best response is to validate this inflation, and output is not affected. Many suggestions have been assessed for improving this suboptimal equilibrium. For instance, KP and BG highlight that by imposing strict rules on the policymaker the TCP might be eliminated. Rogoff (1985), on the other hand, points out that by delegating monetary policy to a conservative central banker the TCP is reduced. Furthermore, this framework gives analytical support to the explanation of why modern Central Banks (CBs) tend to be more independent The paper has two main objectives: firstly, to address critically the TCP in monetary policy; secondly, to answer why delegating monetary policy to an independent CB makes it possible to reduce the TCP. Some evidence is also introduced to assess, from an empirical perspective, to what extent independent CBs have reduced the TCP, with emphasis on the UK experience after 1997. One of the main conclusions of the paper is that the TCP seems to arise only under institutional setups where the policymaker enjoys some degree of discretion. In this case, the policymaker faces a marginal benefit of generating inflation which exceeds the marginal cost when the inflation rate is announced below the equilibrium level, which may induce him not to validate such an announcement. 
Furthermore, developing this particular insight into the problem, I show that delegating monetary policy to an independent CB helps to reduce the TCP since, for a given marginal benefit, the marginal cost of generating inflation rises compared with the pure discretionary case. The remainder of the paper is organized as follows. Section I critically addresses the TCP in monetary policy. Section II assesses how an independent CB helps to reduce the TCP. Section III analyses some empirical evidence, emphasising the UK case after 1997. Section IV presents concluding remarks. The TCP Kydland and Prescott (1977) (KP) first explained the TCP that the monetary authority may suffer when conducting monetary policy. In particular, KP point out this problem by considering a situation where the policymaker optimises a social objective function in each period, in a world where agents do not have mechanical A further refinement of this original idea was introduced by Barro and Gordon (1983a) (BG) in an attempt to include a theory of expectations formation (op cit., p. 592). In the remainder of the section, a detailed and formal explanation", "label": 0 }, { "main_document": "autonomy of the corporate form established in One argument for upholding the doctrine of corporate personality is that creditors should \"look after their own skin\". Potential creditors should obtain a guarantee from the parent company before dealing with the subsidiary. However, it is doubtful whether this option is available to small creditors or involuntary creditors such as victims of tortious conduct by the subsidiary. Little legislative protection is provided to these claimants. P.L. Davies, Gower and Davies' As mentioned above, the main provisions are the fraudulent trading provision (s.213 Insolvency Act 1986) and the wrongful trading provision (s.214 Insolvency Act 1986). However, these provisions are difficult to establish in practice. See S. 
Griffin, \"Limited liability: a necessary revolution?\", (2004) Company Lawyer Vol.25(4), 99; see also P.L. Davies, Gower and Davies' For a discussion of the problem with the principle of limited liability in relation to involuntary creditors, see A. Ashwin, \"Tortious liability of company in winding up: an analysis\", Comp. Law. 2005, 26(6), 163 See, for example, the Bhopal Gas Tragedy in India, where thousands of people died from the poisonous methyl isocyanate gas leaked from the Union Carbide pesticide factory in the city, S. Chakrabarti \"20th anniversary of world's worst industrial disaster\", The World Today - Friday, 3 December , 2004, P. Muchlinski, \"Holding Multinationals to Account\", Comp. Law. 2002, 23(6), 168, at 174. Another argument for upholding the separate corporate personality is the \"asset partitioning\" rationale. In other words, separate corporate personality is a two-edged sword that also works to the benefit of creditors. While the asset partitioning rationale is certainly valid and worth considering, it is also arguably true that limited liability did not anticipate shareholders to be legal persons in the first place. It is arguable that a parent company should not be entitled to the same disparity in the risks and rewards as shareholders who are natural persons. P.L. Davies, Gower and Davies' P. Muchlinski, \"Holding Multinationals to Account\", Comp. Law. 2002, 23(6), 168, at 174. The corporate veil test is clearly not fair. While the legal position treats both types of shareholders, that is parent companies and natural persons, equally by providing the same legal protection, the different nature of the two works to favour corporate groups. The \"agency\" argument fails because the holding company-subsidiary relationship usually falls short of a full agency relationship. In contrast, Canadian jurisprudence built upon the \"agency\" ruling in Law. 2007, 28(2), 58-62; see also J. 
Neyers, \"Canadian Corporate Law, Veil-Piercing, and the Private Law Model Corporate\", University of Toronto Law Journal, Volume L, Number 2, Spring 2000; S. Griffin, \"Limited liability: a necessary revolution?\" (2004) Co Lawyer vol. 25(4), 99. Without clear guidance as to The effect of the current position is to allow corporate groups to shift risks to overpowered or involuntary creditors, thereby running serious risk of corporate groups \"externalizing\" liability in negligence. This runs contrary to recent calls for greater corporate social responsibility and is clearly not desirable. P. Muchlinski, \"Holding Multinationals to Account\", Comp. Law. 2002, 23(6), 168, at 178. In sum, the injustice arising from the", "label": 0 }, { "main_document": "however this does not explain the reason we see a pattern; values below, then above, then below. There is correlation between the residuals, the goodness of fit of the last value will affect the next because they are happening over time, they will be influenced by previous events, the higher the GDP value in one year the higher the value of predicted GDP the next year In India we find a similar situation to that which we found in the US. Imports and GDP are positively correlated, in fact marginally more so with a correlation coefficient of: 0.979607. We see that the line of fit and residual plot are again substantive to this conclusion. The line of fit is closely followed by actual values and the residual plot shows small variance. It appears that once more imports and GDP are linked in the way our model predicted. In contrast to our previous observations we find a very different situation in Sierra Leone. As Figure 7 portrays, there appears to be no consistent relationship between imports and GDP. The correlation coefficient is low; 0.133331 The line of fit (see Figure 8) illustrates how widely dispersed predicted and actual values are, and how the relationship is only just positive (very small upward slope). 
What is also interesting is how different the figures are in size compared with the other two countries we have investigated; this may be the key to understanding the lack of success of our model for Sierra Leone. Sierra Leone was ranked 177 Clearly this is going to cause a severe hindrance to economic growth and development. The civil war in Sierra Leone from 1991-2002 can provide an explanation for the downturn in GDP since 1991. Alongside the general hampering of economic development, the UN sanctions imposed on Sierra Leone in 1997 specifically caused a 20% drop in GDP, and the bauxite and rutile mines (the main manufacturing outputs) were shut down by civil strife (Canadian International Development Agency). The path imports have taken is less easily explained. There has been a generally steady decline in imports; after the civil war this is explained by the fall in GDP, but previously other factors were at play. The essential cause of the downward trend in imports is the fall in purchasing power. In order to import, money is required, and money is earned through exports; if exports decrease, then so does the power to purchase. There has been a fall in exports over this period because Sierra Leone's main export, diamonds, is increasingly exported illegally (a trend accelerated by the rebel occupation of the diamond production area during the civil war); illegal exports earn the country no official money, and imports have, as a consequence, been falling. This illustrates the importance of other factors on imports, in addition to GDP. The case of Sierra Leone is especially interesting because it is an underdeveloped country; other factors often carry more weight in such situations, where society and the economy are unstable. We are not only", "label": 0 }, { "main_document": "public debt, the solvency of the system depends indirectly on the solvency of the public sector. 
So, when confidence in the sustainability of the public debt decreases, the price of public bonds diminishes, thus negatively affecting the BS balance sheet. In an extreme case, when the government defaults on the public debt, the value of the banking assets may collapse. The lack of transparent information in the BS has also been highlighted as an important element in explaining BS vulnerability (see Goldstein and Turner 1996, pp. 21-24). Many experiences have shown that prudential measures in terms of liquidity requirements were not in accordance with the best international standards. The lack of transparency in banks' information affects the capacity of customers and monetary authorities to accurately determine each particular bank's situation, thus enlarging the asymmetric information problems in the system and the likelihood of a crisis. During the 1980s a typical external shock which ended in banking crises in developing countries was a sudden reduction in the terms of trade (see Kaminsky and Reinhart, p. 485). This shock negatively affects the profitability of the tradable sector, thus diminishing its capacity to repay its loans. The evidence also suggests reductions in the terms of trade have been more severe in those countries with a relatively concentrated export structure (Goldstein and Turner, p. 9). As the international financial markets have become more interconnected, especially during the 1990s, it has been relatively easy and cheap for international investors to mobilize financial resources among national States. This mechanism has had an important role in the genesis of banking crises, being more important in those countries with less well-developed capital markets The contagion mechanism usually works as follows: when a large international investor suffers a direct loss in a particular country, he tends to reduce overall exposure in countries with similar characteristics, thus raising the likelihood of a banking crisis. 
It has also been argued that crises may be driven by sudden changes in expectations, under what is known as the multiple-equilibria scenario, thus yielding a capital outflow and in turn triggering the crisis Both the intervention and the liquidation of banks are controversial in the economic literature. Scott (2002, p. 14) highlights that this policy helps in reducing uncertainty in the BS, enhancing the natural adjustment process of the system. On the other hand, Hawkins and Turner (1996, p. 37) argued that this policy might have a negative effect, since it may undermine the public's confidence in banks, provoking a generalised bank run. As a rule of thumb, and considering the Argentine case, I maintain that avoiding banks' liquidation during a crisis seems to be an adequate policy when the distress has systemic characteristics. There is an interesting discussion in the economic literature around the moral hazard problems involved with deposit insurance (DI) (see Scott 2002). However, Hawkins and Turner (1996, p. 45) maintain that in most banking experiences, monetary authorities explicitly or implicitly implement a DI to avoid a negative perception of the BS by depositors. They also argue that the DIs have shown important differences in the", "label": 0 }, { "main_document": "Yahoo is an internet communications and media company that offers online navigational directories. It is a market leader in generating branded advertising from traditional marketers, which contributed 88% of total revenues in 2004. This advertising appears on a range of its internet and mobile services, of which the major ones will be discussed in the context of the essay. The market for internet products and services is characterised by rapid change, converging technologies, and constant competition. 
In such a competitive environment, an understanding and continuous scanning and analysis of the macro and micro marketing environment for strengths, weaknesses, opportunities and threats (SWOT analysis), and the lookout for strategic windows, are essential for the company's survival and success. The analysis of the marketing environment should result in the formation of a marketing mix programme to implement the marketing strategy. The macro and micro marketing environment forces affecting the company and consumers will be analysed in turn, and the likelihood of their influence on the development of a suitable marketing mix will be considered. The marketing mix will be considered in terms of the influence of these environmental forces on the "5 P's" of the marketing mix (Product, Place/Distribution, Promotion, Price, and People). The internet is a recent technology and is expanding and changing faster than the rate at which the laws governing it can be produced. Nevertheless there are several laws in the EU and US which are likely to influence the development of a suitable marketing mix. Surveys show that the vast majority of websites collect personal information. Public concerns about on-line privacy remain, and many in the industry are urging self-policing on this issue to head off potential regulation. Few laws specifically address personal privacy, but the most serious strides in this regard have come from the EU with the 1998 EU Directive on Data Protection, which requires the company to explain how the information will be used and to obtain the individual's permission to use their information. Since Yahoo relies on customisation of its services (email, instant messenger, personals, advertising) based on user profiles (which include personal information), such legislation has far-reaching consequences for the level of customisation of its products/services and for the associated customised promotions that can be targeted at customers.
Thus the Product (service) and Promotion variables in the marketing mix are very likely to be influenced by the laws and regulations surrounding personal privacy. In the US, the Children's Online Protection Act and the Children's Online Privacy Protection Act are intended to restrict the distribution of certain materials deemed harmful to children. This restriction on distribution is likely to influence the Place/Distribution variable of the marketing mix. In addition, the Protection of Children from Sexual Predators Act of 1998 requires online service providers to report evidence of violations of federal child pornography laws. Such legislation may require significant modifications to the services, and is thus likely to influence the Product variable of the marketing mix. Since many parties are actively developing search, indexing, eCommerce and other Web-related technologies, Yahoo may be subject to intellectual property
This causation will be difficult to prove, and this is why she will not win: it will be impossible to show that a normal person would have suffered in the same way. Despite all this evidence and the severity of Debbie's suffering, I think the damage is too remote to allow the driver of the first taxi, David, to be held fully liable. Therefore I would expect this claim to be unsuccessful. It is reasonably foreseeable that if the taxi driver drives fast in a built-up area there may be an accident involving collision with any property on or near the road. Therefore it was foreseeable that collision with a lamppost and/or a parked car might occur due to negligent driving. As for proximity, Colin and the owners of the lamppost (the local council authority) and the parked car do not have any relationship other than that created by Colin, since he is now known to them as the person who damaged their property. Although there is effectively no proximity, it is fair, just and reasonable to impose a duty of care on the driver of the second taxi, as he is clearly liable to the claimants through his negligent driving in a road traffic accident. The reasonable man in this case would not have raced in a built-up area where there will obviously be property that he could hit. So Colin fell below the standard of care expected of a reasonable man and failed to foresee the harmful consequences; he is therefore negligent because he failed to drive more carefully. The chance of damage occurring to the property in question, the parked car and lamppost, was quite high given that racing was taking place. Once again the cost of avoiding this would have been nothing, and so Colin was in breach of his duty owed to the owner of the parked car and the owner of the lamppost.
Clearly, but for Colin's negligence the parked car and lamppost would not have", "label": 1 }, { "main_document": "\"stoppages in production occurred and many enterprises overextended their productive capacity\" Joseph, \"A Tragedy of Good Intentions.\" p. 436. Saich, p. 37. In addition, during the years of the GLF, China was further disadvantaged by poor harvests as a result of bad weather. The \"summer harvest of 1959 was poor, and the harvests of 1960 were disastrous\" with one third of China's arable stricken with drought and another sixth being flooded. So aside from Mao's insatiable ambition and the Party's co-operation with such extreme policies it is also necessary to consider that drought, flood and Soviet betrayal may have complicated the situation. Gray, p. 315. Lieberthal, \"The Great Leap Forward.\" p. 102. However Mao was the dominating figure who was determined to go through with the GLF and without his strong desire to beat the West and prove that Socialism could provide a stronger economy than capitalism then the GL would not have been such a tragedy. If you take the role of the leadership as well (and the record makes clear that Liu Shaoqi, Deng Xiaoping, and most other leaders supported the GLF wholeheartedly throughout The poor harvests and natural circumstances only served to deepen the crisis which was already occurring due to the unrealistic targets for grain and steel and unwillingness of the Party to accept criticism even when they realised that it was not going according to plan. Lieberthal, \"The Great Leap Forward.\" p. 99.", "label": 1 }, { "main_document": "us to understand the relationship between the organization/IT process and strategies required in organization. This is a leading principle both for research (Barclay et al 1997) and for practical (Luftman and Oldach 1996, Brier et al 1999) purposes. 
However, the model presumes that management is always in full control of the situation and clearly understands what is going on, and that the information infrastructure can be deliberately aligned with emerging management insights (Ciborra and Hanseth 1998, Ciborra 1998, Maes 1999, Galliers and Newell 2003). As Earl (1996) states, it takes considerable time and effort to examine and investigate the processes and applications in the organization. This can be done as part of a future plan, but it is not an immediate panacea. Also, this model might become ineffective, particularly in a rapidly changing environment, because flexibility can be gained by allowing a certain misalignment within the organization (Ives and Jarvenpaa 1994). For example, paper-based fax machines are still widely used in organizations because people can check and read fax documents in their own time, even though video-conferencing technology can greatly reduce the processing time for communication and streamline business processes. What is more, according to Maes's (1999, p. 5) study, a 'lack of balance' (the existence of non-alignment) between business and IT is often a source of innovation and success in the organization. Social dimensions are another aspect neglected in the model. This lack of attention has resulted in a misinterpretation of alignment as solely the integration of business and IT strategies and infrastructures (Benbasat and Reich 1998), ignoring the impact of 'organizational learning' (Ciborra 1998). Brier and Luftman (1999) develop the ideas of alignment enablers and inhibitors resting on several antecedent assumptions, as summarized here. A successful IT track record in a department tends to improve its relationships with other business units (Earl 1996). Benbasat and Reich (2000) argue that communication between business and IT executives can be enhanced by the degree of successful IT implementation.
Brier and Luftman (1999) find that a lack of IT track record - 'IT fails to meet its commitments', ranked third in the list of inhibitors - contributes to the failure of alignment. They presume: a successful IT history facilitates business-IT alignment. Prior research on strategic business IT alignment highlights the importance of knowledge sharing between business and IT executives (Carrico and Johnston 1988, Venkatraman 1989, Gurbaxani et al 2000). This importance of knowledge management and sharing mechanisms within the organization - 'IT understands the business' - ranks third among the top six enablers of business IT alignment (Brier and Luftman 1999). Therefore, the assumption of the study is: knowledge sharing between business and IT executives enhances business IT alignment. Strategic business IT planning has been widely accepted as a core tool for managing IT resources and business strategy (Ives and Jarvenpaa 1993). Brier and Luftman (1999) conclude in their findings that business IT planning is the second most important factor both among enablers - 'IT involved in strategy development' - and inhibitors - 'IT does not prioritize well'. Thus, the second assumption here is: a well-defined and comprehensive strategic planning process
Doses of up to 3ml are commonly given via a 21-25g needle, 1-1.5 inches in length. Larger doses (more than 3ml) can be given using multiple sites; it is best to spread them out. The rate of absorption depends on the physical condition of the patient. Muscular damage (from age or malnutrition) leads to a poor absorption rate; otherwise the rate is fairly predictable. This route is not used where there is inadequate muscle mass or decreased peripheral perfusion. The practitioner should have a good look and feel of the site, feeling the muscle to check that it is adequate. Sites: deltoid (top of arm), dorsogluteal (upper outer buttock), ventrogluteal (below hip) - care is needed around the pelvis with this one - and the vastus lateralis site (upper outer front leg). The needle should be withdrawn at the same angle and at the same rate as it went in. Intravenous (IV) injection is used when a situation calls for a rapid therapeutic effect. It is administration of the drug directly into the blood stream. IVs can be either bolus or infusion. Continuous infusion IV is a way of keeping the amount of drug available to body tissues at a constant level. Bolus IV is a rapid intervention, with no barriers to cross prior to blood access. Some drugs can be given as a bolus to establish a rapid therapeutic effect, and this effect can be maintained by infusion, e.g. lignocaine to treat ventricular fibrillation. Once in the vein/vessel, the angle of the needle should be flattened to make sure it does not go straight through the other side. With infusion, needles can become dislodged and damage the vein; they can also move inside the vein. Some cannulas have two parts, one for infusion and one for bolus at the same time. If IVs cannot be continued, some drugs can be given via an alternative route: drugs are absorbed rapidly via the capillaries of the lung. When this is done correctly, the rate of drug absorption can be the same as that of IV. It is rarely used in our setting, only as a last effort to save someone's life.
In this essay I will explore the way Alzheimer's disease damages the human brain, how this manifests itself in the sufferer's behaviour, and how this can inform practice for the mental health nurse. Alzheimer's disease is the most common form of dementia, accounting for up to 60%
(Gable, 1997) Similarly, many clinical staff are reluctant to deal with emotional issues such as anxiety and depression because they feel they lack the skills to deal with them competently. (Marks (1997) cited in Russel, 1999) Although knowledge of verbal and non-verbal communication, listening and responding will assist health professionals to communicate more effectively with their patients or clients, there are other skills involved. To be an effective communicator, health professionals require five skills according to Berkoal. (1997) These are: to act appropriately, balance opposing communication goals, be adaptable, recognise obstacles to effective communication, and be ethical. However, it may be difficult to transfer knowledge of these skills into practical ways of communicating effectively. Therefore, there are various guidelines that health professionals can follow. These include providing patients with clear, honest explanations and advice, being sensitive to their individual needs, and avoiding technical jargon and ambiguous instructions. (Russel, 1999) It has also been suggested that information is given in a clear and concise manner and that important instructions are given at the start of a consultation and then repeated at the end. (Russel, 1999) It has long been known that effective communication will benefit therapy, and people working at all levels in health care are being encouraged to improve their communication skills, as this is seen as fundamental to improving the quality of services. (Cameron (2000) cited in Robbal. 2004) In conclusion, the ability to communicate effectively is a necessary skill for all health professionals; however, it is not an easy skill to learn. To do this a health professional must be aware of
On the other hand, the internet has enabled new organizations to develop and has brought several new brands (Buhalis, 2003). Some prominent companies spent considerable amounts of money establishing their online presence, hoping to capture a big share of the electronic market and thereby bring in more reservations and make a profit. Expedia is a good example of a company that has successfully built its brand image among customers. McCole (2002) pointed out that purchase behaviour is more likely with a branded (trustworthy) site. Therefore consideration should be given to identifying strategies for increasing consumer e-loyalty with the brand's website. In spite of the rapid development and convenience of the internet, many barriers still exist for travel consumers which prevent them from purchasing travel via the internet. One explanation for this phenomenon might be that some consumers prefer the "human interface" and "personal advice" offered by travel agents (Lang, 2000). However, one of the biggest barriers, well documented in the literature, was the lack of security when booking via the internet, especially when it comes to credit card payments (Lang, 2000; Shelton, 1997). Although advanced encryption techniques are applied to protect online purchases, most prospective customers are still reluctant to give their credit-card details on the internet. Therefore transaction security over the Internet remains a big concern to online travel companies, and encryption systems need to be applied to protect online transaction security and to minimize problems such as personal information leakage, online fraud, and so forth (Buhalis, 2003). In addition to the above possible barriers, Weber & Roehl (1999) pointed out the following concerns: technical difficulties, no means of assessing product quality, and privacy issues.
A measure of Web service quality is therefore important to successful marketing, as it helps to estimate the efficiency of the Web (Kim and Lee, 2004). Many studies have been conducted by researchers to identify the crucial factors of Web service quality that affect customer satisfaction. Hanna and Millar (1997) suggested page design and information content as two important factors. They also pointed out that every website should keep its information current and respond to customer information enquiries in a timely manner. In addition, as researchers have found that website features are crucial for e-customers' information search behaviour and purchase decisions, marketers should pay attention to information content such as "the activities on the trip, travel regions/cities, sightseeing, maps, special events/festivals, and reservations" in order to build a competitive website (Chu, 2001). Therefore tourism marketers should take all these concerns into consideration when they design their websites for online marketing. Moreover, Kim and Lee (2004) pointed out that providing high-quality information and user-friendly functions will help to build customer loyalty. In addition, researchers' studies also indicate that online information users are young, well-educated, and have well-paid occupations (Bonn, Furr & Susskind, 1999). These findings present two facts: one is that online customers' information search is still limited to a small segment of young
This conformity to a generic type allows us to predict features. In any correspondence one would expect to see a specific layout and functional stages such as Opening and Closing. Furthermore, in complaint letters one would expect stages including: Situating (including background information), Statement of Problem, possibly embedded in the reason for correspondence, an optional Further Explanation of the problem and/or the background information, and an optional Request. It seems that the first two stages could come in either order, though Situating is likely to be first, unless the problem is embedded in the reason for writing. At this point the stages diverge significantly, warranting different analyses. There appears to be a degree of instability in genre definition and classification (Bhatia, 1993; Fairclough, 2003; Paltridge, 1996); however, it seems clear the texts differ enough in their communicative purpose to be separate genres. There are a few stages which are common to both and seem obligatory for the category of letter and some for the genre of The first is that of layout; the position of each part of the initial referencing stage is specific and can be mapped onto a cline of flexibility, wherein the recipient's name in left position is inflexible, but the date, or the addresses depending on formality, might be located on the left or right or be missing. There is "a question concerning the obligatory nature of these features when writing to a close friend" (Mortensen, 2005). Text A is typed; more referencing information seems probable the more formal the letter, whereas text B is handwritten and has only a date. Other stages which are common to both texts and seem obligatory are: Situating and Problem.
Using Eggins' (2004) symbols to describe the schematic structure (see appendix 1) we arrive at: Paltridge (1996) observes virtually the same structure for a formal 'Problem-Solution' letter: Similar structures for complaint letters have been described as: as a And: in complaint letters to the Editor (Hartford, 2004). Although text A mixes genres (Bhatia, 1993) to include Legal Demand, apart from the 2 In mixing genre and including this stage, the writer is showing his ultimate authority. Referring to speech, Bakhtin (1997) comments on the diversity of genres, and the role of intonation and genre-mixing in expressing emotional intent, a point which might be pertinent for line 15. As personal letters are informal, with parallels to conversation, the register is more "open" (Hasan, 1996 cited in Mortensen, 2005), thus allowing for more diversity in structure and semantic realisations. Text A, conversely, should be more constrained in these aspects. The interpersonal resources will reveal the nature of the relationships between writer and reader, and the attitudes of the writer towards
The first phase starts from the second half of the nineteenth century, the heyday of the liberal trade order, and lasts up to the outbreak of World War. The interstate exchange of manufactures to maximise national interest under the circumstances of political disorder in this phase clearly indicates the tension between bilateralism and multilateralism, through episodes of success and failure occurring by turns. The second phase ranges from the late 1950s to the 1970s. The demise of American hegemonic power with the collapse of the Bretton Woods system facilitated the moves towards regional PTAs in this period. However, the main actors in this wave were still less-developed and small-sized economies; thus, this wave provides insufficient evidence for explaining the sharp conflict between bilateralism and multilateralism. Vehement debates did not arise until the post-Cold War economic giants, including the United States and the European Union (EU), struggled with each other for the initiative within multilateral negotiations to eliminate trade barriers in the 1990s. In this period, the traditional supporter of non-discriminatory multilateralism, the United States, dramatically changed the underlying tone of its trade policy from multilateralism to bilateralism. Further, the United States has actively led the recent wave of bilateral PTAs. After the chronological survey to trace the meaningful evolution of the postwar multilateral trade regime, the relevance of the concept of 'open regionalism', which represents the liberal view of increasing regionalism, will be examined. One of the most accessible conceptualisations of this multilateralism-bilateralism conflict seems to be the 'building block That is, the former views the role of PTAs as a facilitator to achieve worldwide multilateral trade liberalisation, while the latter regards them as impeding non-discriminatory trade liberalisation.
In conclusion, the historical scrutiny and theoretical examination in this article will show the outcome: that current PTAs led by the United States are in a state of considerable tension with the multilateral and liberal trade order that the GATT architects envisaged. The development of regionalism has been under way since the second half of the nineteenth century. A vast number of customs unions, culminating in the formation of the Zollverein in 1834, emerged, and bilateral commercial agreements, hastened by the Anglo-French commercial treaty of 1860, also contributed to the enhancement of trade. However, it should be noted that these trends were a largely European phenomenon (Irwin 1993, 92, Mansfield and Milner 1999, 596). At this time, the broad network of bilateral commercial agreements in Europe was heavily linked by unconditional Most Favoured Nation (MFN) rules.
It appears that Friedlander has read the explicit meaning of the poem and not taken into full account the importance Blake placed on symbolism. It seems that Frye (1947, 230) would have a more imaginatively perceptive view of the poem, as he describes its theme as 'the struggle to create, and the loving contemplation of what has been created'. This would assume that the Eugell (1981, 246) elaborates on Frye's idea of creation and contemplation by explaining that for Blake, 'imagination is an energy or force that creates and transforms as it perceives'. This idea of an 'energy or force' associates itself with Blake's religious belief that God, although separate from the material world, in the act of creation has placed an eternal and divine force within each of his creations, and that the 'divine' force within humanity is the imagination. This clarifies the notion that the object of the narrator's thought is internal, rather than external. Eugell goes on to say that in 'seeing nature with imaginative vision, we enliven and rescue it, and save our own souls' (247) and that 'imagination creates reality'. Hence, as the narrator is contemplating the tiger in its imaginative form known as This is what Blake wanted his poetry to achieve in the reader, the deep symbolisation inciting the imagination into a dimension of perception where reality is perpetually being redefined. This refreshed state of mind leads to a clearer outlook on the material world, opening the reader's eyes and raising questions about the previously unquestioned and given rules of society. Even if the reader is not physically able to change the deeply hierarchical structures of society, the re-ignition of imaginative passions helps to 'save our own souls'. Redemption and emancipation ran through many of Blake's poems in his prophetic bid to prepare humanity for an eventual return to what he believed was the infinite world of the divine.
Readers of Blake's", "label": 1 }, { "main_document": "Corinthians 15:22 The utopian ending in the Conclusion describes future wealth and prosperity through nature, love and the continuing harmonious working community. There is no possible return to the oppressive and dimly lit Lantern Yard or dogmatic Christianity; instead the philosophies of Feuerbach and Comte are ringing in the readers ears as the future looks bright and glorious. Eliot's overall use of the Bible in Thomas Hardy had a not so dissimilar upbringing to Eliot as he also went to church in his youth and he even considered going into church ministry at the age of 25. But as he read more contemporary science, especially Darwin, he became sceptical of Christian dogma and subsequently lost any faith that he might have had. He became an agnostic and he saw the church as an oppressive institution. He was judgemental of some Christian doctrine; especially its strict laws about sex, Hardy had an extensive knowledge of the Bible and used it often in an ironical way to attack Christianity. Hardy's bleak novels never provided a substitute religion; he believed the universe to be indifferent and only had hope in, along with the positivists, the power of human aspiration and feeling. See Exodus 20:14, Leviticus 18, 20, Matthew 5:27-32, Mark 10:6-9, 1 Corinthians 6:9-11, Ephesians 5:3 Tess's downfall starts before the reader even meets her, her ancient family's legacy is reanimated by the Parson, who reminisces with Tess's father and pronounces This clich It immediately places Tess in an ill-fated position and perhaps Hardy uses this allusion to state that it will be through the eyes of the Bible or religion that she will be doomed and condemned. The family are later compared to having fallen like Babylon (100), which is an ironical reference to Revelation 14:8, where Babylon is referring to the adulterous nation who rejected God. 
Although the Bible is using the word to mean the nation's rejection of God in terms of a marriage relationship, I think Hardy is using it to suggest that literal adultery might be at the root of the D'Urbervilles' decline, and therefore God's revenge upon adultery and sexual promiscuity. See 2 Samuel 1:19, 25, 27 Tess's mission to re-establish the family name and re-acquaint with the D'Urbervilles is akin to the Old Testament idea of a kinsman redeemer. Tess is similar to Ruth, who followed her mother-in-law Naomi back to Israel and married her kinsman redeemer Boaz. Tess is compared to Ruth later in the novel (89/Ruth 2) as she works in the fields, but before this she expects Alec to play the part of Boaz, as she cries out (56) Hardy uses this biblical allusion, yet Alec, rather than playing the role of Boaz, destroys her. He is instead compared to the By using this image Hardy is in fact condemning any guardian of Tess as negligent or cruel, as they render Tess helpless. Hardy asks the question The Tishbite is Elijah, who in 1 Kings 18:27 proposes that Baal might be sleeping and not responding to his prophet's calls. This passage is a triumph", "label": 1 }, { "main_document": "Centre, the Pentagon and a Pennsylvania field, were believed to have used the Internet--often in public cafes and electronics stores--to communicate without being detected by traditional law enforcement methods such as telephone wiretaps. A computer-based attack could result in widespread death and destruction as terrorists use laptops, the Internet and other high-tech tools to take down power grids, communications networks and other parts of the so-called critical infrastructure. By using the internet the terrorist can inflict much wider damage on, or change to, a country than one could by killing some people. From disabling countries' military defences to shutting off the power in a large area, the terrorist can affect more people at less risk to him or herself than through other means. 
As we know the most known terrorist As information given by the He could have done more havoc to civil society. Osama bin Laden's enterprises, on the other hand, are supported by sophisticated satellite uplinks. Encrypted messages have been built for groups that have the capacity to direct terrorist operations outside of his home base in Afghanistan. He recently avoided death from assassins. The current news of hacking was that the website of Arab satellite TV network So anywhere in the world anyone can hack any computer. This hacking was done by a US hacker(1). Attacks launched in cyberspace could involve diverse methods of exploiting vulnerabilities in computer security: computer viruses, stolen passwords, and insider collusion. Attacks could also involve stealing classified files, altering the content of Web pages, disseminating false information, erasing data, or threatening to divulge confidential information. Collin, a senior fellow at the Institute of Security and Intelligence at Stanford University, California, said that every upgrade and expansion in technology brings with it increased threats from Cyber Terrorism. This new threat is developing as terrorists exploit the latest technology to commit violent acts through computer systems(2). Cyber Terrorists and hackers gain access to otherwise secure systems through carefully gathered intelligence. The coming of the After having gained some level of access to a computer, the intruder installs a rootkit, which can help him maintain his ability to access the hacked computer, help him attack the hacked computer or use it to remotely attack other computers, and help to cover his tracks. So it is not always right to say that the owner of a computer from which another computer is hacked is actually the attacker; with the help of a rootkit, it is far from simple to catch the person who is actually carrying out the hacking. 
Intelligence experts worry that the next terrorist strike on the United States will be what they call a \"swarming attack\" - a bombing or suicide hijacking combined with a hit on computers - that will make it tougher for law enforcement and emergency teams to respond. (3) Computing professionals all over the world need to be aware of possible areas of weakness to such terrorism, in order to better protect their computer systems, which contain information vital to the country, and to help put an end", "label": 0 }, { "main_document": "tourism destinations. In the past, they both used to have a poor image due to the recession and the lack of investments. These similarities suggest that Greenwich may be a comparable tourist destination, so that Liverpool can produce strategic plans on the basis of its success and failure. With the Millennium Dome exhibition in 2000, Greenwich put huge effort into developing its tourism industry. However, it was a complete failure, as by February 2000 \"the much maligned Dome had failed to meet its visitor targets, attracting only 3% of the 12 million visitors it needed to break even\" Smith (2000) says that \"tourism development in Greenwich has traditionally been piecemeal and lacking in coordination.\" Thus, learning from the failure of the Dome, Liverpool has to make sure that the benefits of its designation and status are maximised. It should have a clear scope for developing a more integrated approach by increasing communication and cooperation. In terms of the service sector, the transportation network in Liverpool has to improve connections, learning from the example of Greenwich, where transport congestion and a lack of parking space are seen. As the other important service division, it is critically identified that Liverpool needs to improve its accommodation and restaurant sectors, which cover the development of human resource management too. 
First of all, the capacity of the accommodation in Liverpool may need to be developed. It is believed that Liverpool could attract an extra 1.7 million visitors during the 'Liverpool 08 project'. Nevertheless, The Mersey Partnership shows the full list of accommodation available in Liverpool, but there are only 107 establishments in total, and this figure includes bed and breakfasts, hostels, campuses and guest houses. The actual number of hotels in Liverpool is 62. However, this is not an ideal figure for a city which has the potential to be one of the top tourist destinations through the 'Liverpool 08' project. Therefore, Liverpool should increase its capacity to meet the huge demand from customers. It is also critically identified that Liverpool needs to improve its accommodation and restaurant sectors, and this covers the development of human resource management too. As regards the service which they offer, the experience from the field trip describes a disaster which is actually happening in Liverpool now. It critically shows the lack of training and development of employees. In order to improve this situation and make the 'Liverpool 08' project succeed, the human resource management sector has to put in more effort. Employees need to understand working in the hospitality industry and, what is more, practical training should be given. Practical training aims to increase the commitment of teams to rapid action and appropriate decision-making for customer needs. It would ensure the superior quality of facilities and services and thus earn the loyalty of guests. For this reason, although its direct involvement in this industry is minimal, the education system is also important as another stakeholder, because higher education creates skilled employees. Another important aspect which Liverpool should consider is redeveloping the image of Liverpool to achieve sustainable tourism. 
In general, the image of Liverpool", "label": 0 }, { "main_document": "and nitrogen stable isotope analysis have largely contributed to the understanding of human diets in the Mesolithic and Neolithic periods, derived from the protein portion of human skeletons. Generally, the basic estimated values for distinguishing between marine- and terrestrial-based diets are represented in table 2 (Richards and Hedges 1999:891-897). However, the perspective of a rapid dietary shift at the Mesolithic-Neolithic transition came mainly from the early work of Tauber, who first applied stable isotope analysis to radiocarbon-dated human skeletal remains in Mesolithic and Neolithic Denmark, and observed a rapid dietary shift (Richards et al. 2003: 288-293). Tauber concluded that there was a rapid change in diet associated with the introduction of the Neolithic culture into Denmark approximately 4000 cal BC (Richards 2003: 31-36). Richards et al. (2003: 288-293) quote Tauber's data in figure 3 (Richards et al. 2003: 288-293), although this idea of a rapid shift had little impact on archaeology. Known sources of error include the choice of samples from archaeological contexts, using coastal Mesolithic samples and interior Neolithic samples; the sample sizes of human skeletons tend to be small; and the results are confined to Denmark (Milner et al. 2004:9-22). Moreover, Tauber's work is chronologically dispersed because the \"marine-reservoir effect\", which causes a gap of approximately 400 radiocarbon years in samples, was not applied (Richards et al. 2003: 288-293). Recently, Richards et al. (2003:288-293) have reassessed this previous work of Tauber, have reported new carbon and nitrogen stable isotope values and radiocarbon dates from Danish Mesolithic and Neolithic humans, and have confirmed Tauber's findings (Richards et al. 2003: 288-293). Table 3 and figure 4 represent carbon and nitrogen isotope values of Danish Mesolithic and Neolithic individuals (Richards et al. 2003: 288-293). 
Tauber's work has also been supported by data from the coasts of southern Wales, the Mesolithic shell middens at Oronsay in Scotland (Richards 2003:31-36), Ireland, British inland and coastal sites (Richards and Hedges 1999:891-897), Ukraine (Lillie and Richards 2000:965-972), eastern Denmark, Portugal and western Sweden, except for samples from the Baltic Sea due to its complex history (Rowley-Conwy 2004: 91). Thus, the human remains from these Mesolithic sites represent a primarily marine-based diet (Richards and Hedges 1999:891-897). However, Richards et al. (2003: 288-293) suggest that no single distinct Mesolithic diet is identified by the stable isotope data. The data represent complex dietary ranges throughout Mesolithic Britain, Scandinavia, and Denmark, compared with the narrow dietary range, with some overlapping data, at the Mesolithic-Neolithic transition (Richards et al. 2003: 288-293). The isotope data are represented in figure 5; the individuals are radiocarbon-dated samples from Britain and southern Scandinavia (Milner et al. 2004:9-22). The complex range of diets throughout the Mesolithic is confirmed by the study of oyster consumption introduced by Milner (2002:89-95), which showed that the oyster contributed very little to the overall Ertebølle diet. Moreover, Richards and Hedges (1999: 891-897) have also sampled British Early and Middle Neolithic coastal and inland sites, including Neolithic tombs, ritual monuments, enclosures, and caves. The site locations are plotted in figure 6 (Richards and Hedges 1999:891-897). These human remains from the early Neolithic show no evidence of significant consumption of marine foods after the introduction of the Neolithic. 
However it cannot", "label": 0 }, { "main_document": "The new English Penguin dictionary defines a computer as a 'programmable electronic device that can store, retrieve and process data'. Outlined in this way, it means that when talking about what computers cannot do one has to take into account that there are computers in most electronic equipment. Electronic chips are devices that have been designed to store different input and output voltages. Hence, all silicon chips are forms of computer, and these are placed inside televisions, video recorders, microwaves and so on. Another point to make is that when discussing what computers cannot do, some boundaries have to be defined. It would be very easy to say that computers cannot change into frogs and the like, but this would lead to a very disorganised argument and structure. In this essay, when discussing what computers cannot do, computers will mainly be compared to humans. Discovering what computers cannot do has to be tackled in a different way. Firstly, it may be important to see what computers can do in order to find out what they cannot do. Computers were designed to solve mathematical problems much faster than humans could. This slowly progressed to the storage of data, and computers then became the tool of our everyday lives. The typical 'personal computer' can perform activities which entertain, organise and help the lives of people. These computers can run software, such as games, and can assist with normal daily activities, such as writing. At a basic level, computers can be described as fast processors of information. However, remembering the definition of the word 'computer' means that computers are in most electronic items. Computers are integrated into the lifestyle of humans in such a way that they are designed to help in daily activities. Computers can perform many actions, for example vacuuming or flying planes. Computers can perform most operations involving electronics. However, not all actions involve electronics. 
Some actions that do not involve electronics involve some form of simple movement. Eating requires the contraction and relaxation of muscles in the arm to move food to the mouth; running requires the contraction and relaxation of muscles in the legs; movement in general requires the contraction and relaxation of muscles. Computers do not have any muscles and so cannot move in this way. They do not have other forms of movement either. Cars move by converting the chemical energy stored in petrol to kinetic energy. This energy drives the engine, rotating the wheels and moving the car forward. There are many types of movement but computers do not have any of them. Since computers do not have muscles, or their equivalents, they cannot physically move by themselves. Other actions that do not require electronics are actions of the mind. At this moment in time computers cannot 'think' in the way that humans do. Computers are designed by humans to take inputs, process these inputs, and then carry out an action based on the input. Computers do not act without some form of input. When humans think they can think creatively and produce new ideas without any real input. Computers can", "label": 1 }, { "main_document": "explored the investment opportunities in the mainland, rather than a tour that served an educational purpose. Based on the above peculiar but not uncommon observation, the present research would like to investigate the possible \"third way\" in nationalistic education - the way which is not based on political or cultural attachment, but purely on the market economy. Mingpao is a Chinese newspaper in Hong Kong, which runs a daily education page. The above presumption is supported by Eric Ma. In his ethnographic research (2004), he argued that a \"bottom-up\" nationalization process has been taking place effectively on a day-to-day basis, with the increasing economic exchanges between Hong Kong and China. Is his argument still effective when applied to education? 
This is the new realm that the present research is going to explore. Past research has tended to base its investigations on the guidelines provided by the Education Department (Fairbrother 2003, Fairbrother 2005, Lam 2005, Morris, Kan & Morris 2000), or the textbooks and curriculum available in the schools (Wilson 1970). These approaches kept the studies on the institutional level. However, a school by itself should be an organic and open institution where teachers have the autonomy to influence the process and to make a difference (Apple 2002, Young 1971). Given that the Education Department proposed that nationalistic education would be developed as a school-based programme (Leung & Print 2002), teachers are the key players in the development of nationalistic education. This justifies a research approach different from that of most previous studies - instead of an institutional level of analysis, the present research will investigate schools by observing and talking to teachers, in order to capture the delicate mechanism of the implementation of nationalistic education, as perceived by the teachers, and inside the schools. The aim of the present study is to investigate the perception and implementation of nationalistic education since 1997 among Hong Kong secondary schools, in the hope of suggesting the direction of nationalistic education in the future. The objectives of the research are: To present the changes of nationalistic education in schools since 1997. To locate the effective means of cultivating the sense of national belonging among students. To understand teachers' experiences in carrying out the task of nationalistic education. To evaluate factors that influence the implementation of nationalistic education in schools. A combined application of quantitative and qualitative methodology will be adopted in the research. 
The advantages of a multi-strategy research design in this case are as follows: To answer the research question on different levels (Bryman 2004) -- Quantitative methodology will be used to understand the overall picture of the implementation of nationalistic education in Hong Kong (the macro level); while qualitative methodology allows an in-depth account of the individual school's mechanism (the micro level). A holistic approach to the research problem (Brannen 2005) - The qualitative data can be contextualized by the quantitative data, and this allows the materials to be discussed in a larger social framework. Both cross-sectional and case study designs will be employed in the study (Bryman 2004).", "label": 0 }, { "main_document": "$8,500 in 1999 then straight away if a UBI of less than half of that was considered it would mean for the median single mother an 18 percent increase in income - for the median elderly woman, a 40 percent increase. Groot, L. & Van der Veen, R. (2000) \"How Attractive is a Basic Income for European Welfare States?\" in Groot, L. & Van der Veen, R. (eds.) Alstott, A. \"Good for Women\" Alstott \"Good for Women\". I believe that all of the above advantages generated by a UBI can be justified in terms of Rawlsian justice, which states that everyone should have equality of opportunity to achieve what they wish. This concept of justice as fairness is qualified by Rawls in a situation where everyone is under a so-called veil of ignorance in the original position. Rawls argues that in the original position, where no-one knows anything of themselves or their place in society, they would choose the best conditions possible for everyone in case they were the one that ended up in the worst position. 
This certainly supports the idea of a UBI, as a basic income helps the prospects of the majority of the most disadvantaged members of society, not least by freeing \"individuals from being bound to accept paid employment in order to meet their fundamental needs,\" Williams, \"Basic Income and the Value of Occupational Choice\" p.2. Rawls continues that it is unfair if some inherit more than others or are born naturally stronger or more talented, and therefore \"from the standpoint of the original position, desert has no place.\" Rawls further qualifies this with his conception of the political community as a system of social cooperation and the understanding of social justice as \"the fair organisation of such a cooperative venture and fair allocation of its joint products.\" Despite critics being quick to point out that social cooperation would invariably mean having a job to contribute to society, it could equally mean helping out voluntarily or providing entertainment through possessing a particular skill. Moreover, Rawls's second principle of justice \"gives priority to protecting individuals from being disadvantaged in the competition for jobs by class origin, and other misfortunes in the social lottery, over maximising the income and wealth of the least advantaged.\" This in fact lends itself to a more radical notion that a UBI is preferable in society even if the position of the least advantaged is not maximised. Knowles, D. (2002) London: Routledge. p.233. Galston, W. \"What About Reciprocity?\" Williams 'Basic Income and the Value of Occupational Choice' p.2. In reasoning that everyone has an equal entitlement to society's natural resources, it reduces the problem of having to work out and favour a particular type of the 'good life', which, as was mentioned earlier, has become symptomatic of our society. 
This advantage is what Loek Groot has termed the neutrality postulate and is essential when trying to decide upon what is fair and what is not, as Richard Arneson points out: \"Van Parijs would quite reasonably ask on what basis I claim to", "label": 1 }, { "main_document": "events and occurrences by looking back on them, thinking about them, and about what we could do better in the future. We may reflect by thinking about things autonomously, writing a reflective log or diary, or speaking with others. Ghaye The authors stress their belief that more thought needs to be put into reflection in practice, and emphasise the importance of reflection on the quality of practice. Ghaye and Lillyman (1997) talk about five models of the reflective process. I feel that the first model, 'Structured', gives a good overview of professional reflective practice. It describes reflective practitioners accessing, making sense of, and subsequently learning through their experiences to become more effective in their practice. (Johns, 1994a. Cited from Ghaye and Lillyman, 1997). The 'Iterative' models also give a good picture of the concept of reflection. They are based upon the idea that 'the reflective process is most appropriately described as a 'cycle' ' (Ghaye and Lillyman, p.26). The reflective cycle by Gibbs (1988. Cited in Ghaye and Lillyman, 1997) is often used: Health professionals can use the models of reflective practice I have mentioned to actively think about, evaluate and make practical changes in their work. However, the process is not always systematic, or like the cycle illustrated above, and can be muddled at each stage. One should not expect it to be straightforward, as health and social care can be complex and multifaceted. Health professionals have to constantly make decisions in practice to reach clinical or practical goals, and they may face problems on the way. 
According to Wilkinson (as cited in Taylor, 2000) the cognitive skills required for critical thinking include decision-making and problem-solving methods. 'Problem solving involves assessing, planning, implementing and evaluating the best course of action in given situations so that the most effective care can be negotiated' (Taylor, 2000, p.14). Reflection is an important process in all of these stages. When assessing, a Health Care Professional may need to reflect on the information given to gain a better understanding of the client or situation. When planning, they may need to reflect on anything that may not have gone to plan in previous similar situations. When implementing, they may need to reflect on their practice as they undertake it. Evaluation is reflection in itself, in looking at what has happened, what could be improved and what went well. I have had experience of reflecting in module workshops for my Occupational Therapy course. For example, we did an arts and crafts activity making cards and gift tags, and then reflected on the experience afterwards: I found the exercise interesting, and as well as getting ideas for activities we could do with clients in the future when we are qualified OTs, we were able to experience for ourselves how it feels to do the activity. Reflecting made me think about what parts of my body I used for the activity, and how I was creative with the materials available and got involved and absorbed in the activity. I became aware", "label": 1 }, { "main_document": "the share price had risen in value considerably and the paid dividends remained constant. This is not the case for Renold Plc as, from graph 9, the share price has in fact decreased over the past financial year. To complete the analysis of the investment ratios of Renold Plc, the earnings per share will also be considered along with the investment ratio of dividend yield as discussed above. 
This ratio puts the profit of the company in context by relating it to the number of shares in issue. The fluctuating behaviour of Renold Plc is unfavourable for shareholders, as they would prefer to see growth in earnings per share over the past 5 years (graph 12). To consider the overall performance of Renold Plc it is necessary to consider an efficiency ratio. The trade creditor collection period has been chosen as it represents the average time taken for an entity to pay for its credit purchases. Graph 13 shows the upward trend of the trade creditor collection period over the past 5 years for Renold Plc. J. R. Dyson, 'Accounting for Non-Accounting Students', Sixth Edition, Prentice Hall, 2004, p260 As there is an upward trend over the past 5 years in the trade creditor collection period, it suggests that Renold Plc does not have enough cash to pay its creditors. An upward trend indicates that the entity might be having some financial difficulties, as it is taking longer to pay its creditors. The sharp increase in 2005 could be alarming as it could indicate that there is a problem within the entity; if this trend continues over the next few years, it could be a sign that Renold is in financial trouble. The future prospects of Renold Plc have already been discussed above regarding the performance indicators; but other data gathered from public sources, for example the CRU steel price index and Reuters UK, provide useful sources of information regarding the future prospects of Renold Plc. The annual financial review also offers beneficial information regarding the future prospects of the company, especially within the narrative (although the narrative could potentially provide a biased view). The narrative outlines the future plans for the company regarding operations, relocations and future plans for acquisitions; these can be used in conjunction with the public data sources to evaluate the future performance of Renold Plc. 
Although Renold Plc did not have a particularly good financial year last year, the Directors believe that the next financial year will improve on this year's performance. With steel prices reaching a plateau and the strength of the Euro against the Dollar weakening (as discussed above), along with lean manufacturing, to mention just one of the methods they are implementing to reduce operating costs, the financial outlook for the company should be improved next year. If Renold Plc can hold its place within its peer group (ranked 4 The company is currently establishing a wholly owned manufacturing facility in China, which will open up the market in the Far East. Also, establishing a manufacturing facility in Tennessee in December 2004 will reduce", "label": 1 }, { "main_document": "This essay argues that familial obligations are relevantly similar to some political obligations. I proceed from a comparison of the family and state to an account of how these obligations can be understood, and how they can have the moral requirements that they are sometimes denied. By, for example, Simmons (1981), pages 16-23. Hardimon (1994), pages 333-363. In this section I shall explain how membership of a political community and membership of a family are, in some ways, similar. This section is an attempt to separate out the parts of the comparison that can be disregarded. I wait until a later section to draw out the significance of the analogous properties shared by both cases. It is important to note that, throughout this essay, I shall be concerned with the general cases (or what I take to be the general cases) of familial relationships and political communities. Horton explicitly expresses his intention to use only the \"standard, or paradigmatic\" cases of the aforementioned groups, and I agree with him that we should not concern ourselves with cases that differ from this standard example. 
I use the terms 'political community' and 'state' interchangeably throughout this essay. Although I realise that not all political communities are states, in this essay, whenever I use 'political community' I mean state. I do this simply for ease of use. Horton (2002), page 150. This makes particular sense given the limited scope of this essay; I am unable to consider any but the standard cases in the amount of detail required to make it worthwhile. An individual's membership of a family does seem to share some things with membership of a political community. The most immediate property that both have in common is that neither is a case in which an individual chooses to belong to a group. People do not choose the state they are born in any more than they choose who their parents or siblings are. Clearly there are examples such as emigration and immigration and adoption in which this claim is inapplicable, but these are deviations from what I take to be the standard case. It is clear that we do not choose our family or our state; rather we are born into them. We have obligations to the members of our families, and to the members of our political community. That we have the latter kind of obligation is something that this essay must take as a premise. If I did not assume there were such things as political obligations, neither considerations about the analogy of political communities with the family, nor anything else could tell us anything about them. This essay aims to establish that there is something to be learned about political obligation from this comparison; it is beyond its scope to also provide a justification for the (assumed) ontological status of political obligation itself. In the case of the assumption that we have obligations because we belong to a family, it seems clear that we do at least think we have these obligations. 
At this point, I will simply", "label": 0 }, { "main_document": "The ability to \"use our senses\" relies on a number of sensory systems that enable detection, perception and cognition of environmental stimuli. Every sensory system has a specific way of responding to stimuli, yet the end result essentially remains the same: the generation of an action potential to stimulate nerve cells with a nerve impulse that will transfer this signal to the central nervous system. In this essay sensory transduction is illustrated by briefly explaining the different cell signalling mechanisms involved in the sensation of touch, heat, light, sound and smell, to allow for discussion of their similarities and differences. Sensory systems allow us to sense various stimuli constantly provided by the environment. This essay focuses mainly on the processes of sensory cells involved in the detection of these stimuli by specialised peripheral receptors, transduction along signalling pathways and encoding into a pattern of nerve impulses. Stimuli can be of mechanical, visual or chemical nature. Acoustic sensations as well as those of touch and heat rely on the activity of mechanoreceptors, which are mostly ion channels of some sort. In contrast to that, vision and olfaction are achieved by detection of photons and odourants, respectively, which act as ligands on receptors that are coupled to G proteins. The ultimate goal of generating a nerve impulse, to be perceived and interpreted by specialised areas of the central nervous system, is the common task of all sensory cells, which otherwise are very distinct from one another on the structural level. All sensory systems function differently, yet there are a lot of similarities as well. It is difficult to establish comparisons on all aspects alike due to the sheer complexity and specificity of the cell signalling pathways involved, so comparison has to be limited to some of the most obvious characteristics of sensory perception. 
Cutaneous touch and temperature perception are two senses that have quite a lot in common. Touch and temperature are detected by receptors, which are primarily found in specialised epidermal cells, e.g. Merkel cells for touch, or in specialised nerve cells called nociceptors. As there is generally no ligand involved in touch or temperature perception, unless the skin is subjected to irritant chemicals or acid, the identification of mechanically sensitive receptors by ligand or toxin binding is not possible. Thus the detailed molecular compositions of mechanically activated proteins and the exact ways of activation remain unclear. Either way, the response to touch or temperature does cause activation of mechanically gated ion channels and a change of ion concentrations within the cell, which in turn causes the cellular membrane potential to change, therefore generating an action potential and ultimately resulting in a nerve impulse. The types of proteins most intimately involved in somatosensory processes belong to the transient receptor potential (TRP) channel family. To date, 28 genes in six subfamilies have been classified as TRP channels in humans. These channels are largely non-selective and classified by their primary amino acid sequence or structure, which commonly entails a certain number of ankyrin repeats, five transmembrane helices and a membrane pore, rather than
Moreover, the poor benefited from tax exemptions and subsidies on electricity rates and water (Gulhati and Nallari, 1990). In the 1980s, when the government wanted to reduce its budget deficit (YeungLamKo, 1998), the subsidies on consumer prices of rice and wheat flour were phased out (Gulhati and Nallari, 1990). In the context of its diversification program away from sugar cane, the government provided input support in particular for food crops and for tea and tobacco as export crops. Planting materials and products subject to price controls were eligible for subsidy (WTO, 2001). The greatest effort was put into the tea industry. As from 1980, planters received compensation to cover land preparation costs and monthly financial assistance from the time of uprooting to the time of harvest. Small tea planters received a complex fertilizer grant, winter assistance, and special end-of-year assistance. As from 1986, the government subsidised tea production for export. Still, in the late 1980s world prices fell and tea production and export earnings declined substantially (Library of Congress, 1994). As from 2000, the financial assistance, including subsidies and fertilizer grants, was terminated in the tea sector (WTO, 2001). As regards water, irrigation systems are provided by the state in needy areas, while around 20-25% of the costs are recovered from the beneficiaries (WTO, 2001). Since 1978, imports were generally limited through quotas in order to raise the domestic price above import prices and alleviate balance of payments problems (Gulhati and Nallari, 1990). For food self-sufficiency purposes, Mauritius maintained import, export, and price controls, and strategic reserve stocks on certain agricultural products. Moreover, marketing boards were in place and monopolies were granted to certain public enterprises over the importation of certain products (WTO, 2001). 
These parastatals influenced prices by direct controls on the price formation, marketing, and storage of sugar, tea and tobacco. The parastatals included the Tea Board, Tobacco Board and AMB. The latter had export and import monopolies so as to administer producer and domestic sales prices, respectively (WTO, 2001). The objectives of the marketing policies were to improve quality and reach minimum standards, which was particularly important in the case of sugar, which had access to preferential EEC prices and was therefore subject to rigorous quality norms (Gulhati and Nallari, 1990). Sugar revenues were shared according to a fixed rate (Larson and Borrell, 2001). The parastatals' duties also included arbitrating differences between millers and planters, operating the bulk sugar terminal, providing insurance to millers and planters, funding projects to increase cane productivity, and providing an equipment pool for the planters. All sugar is marketed by the Mauritius Sugar Syndicate, whereas the syndicate pays marketing charges such as
Parsons (2005:83) draws attention to the fact that 'for [MEDEF], in order for companies to be able to survive in a globalised economy, the regulation of social relations (...) should be a matter for the social partners. The role of the state should be limited to providing a safety net of 'public assistance''. The author goes on to refer to the 1999 proposal MEDEF made for a new French 'social constitution' (the 'Refondation Sociale' project), which in his reasonable assessment was 'born with the express intention of marginalizing the state from the establishment of social norms' (2005:84). This point is advanced by an EIRO commentary (EIRO 1999:5) which says that this 'social constitution' 'can be viewed as the culmination of the project by the employers' confederation to limit government stewardship in the area of industrial relations'. In the case of Sweden, Myrdal evaluates SAF's role after its withdrawal from collective bargaining in 1991 and defines it as directed 'to defend its member firms against excessive legislation and interference (...), both against open attacks or intervention (such as the wage-earner funds) and against measures which are gradually undermining the market' (1991:203). In addition to that, Vatta (1999) finds historical evidence of the attempts SAF has undertaken to curb the state's influence in employment relations, providing the example of its strong opposition to the 1976 co-determination law (MBL). Despite the dubious end results of the policies to marginalize the role of the state in industrial relations, especially in France (see also Parsons 2005), these are clearly on the agenda for peak employers' associations in France and Sweden (although not in Germany) and contribute another facet to their activities and complement their role. 
The political lobbying function has been reinforced as an increasingly important and strengthened area of activity of employer organisations in Germany (Hornung-Draus 2002, 2004), Sweden (Vatta 1999, Hammarström Reviewing the tasks of SN in Sweden Hammarström As far as France is concerned, Hornung-Draus (2002:215) notes that MEDEF has 'oriented its action towards activities with a high political visibility' like a summer university for entrepreneurs, political marches against the 35-hour week, etc. Moreover, political lobbying (and information dissemination) on the side of employers' organisations in
Areas include: II) Enhancing factors: Have a potential to delight if they are present, but are unlikely to dissatisfy customers if they are not in place. III) Critical factors: Have a high potential to lead to both customer delight and dissatisfaction. Areas include: IV) Neutral factors: Have little effect on satisfaction. Areas include: The operations manager should realize which factors will delight and which will dissatisfy the customers in order to create external service value. Thus customers are retained and provide significant benefits for the restaurant. In the service-operations industry, activities are human-oriented, and staff are therefore an important resource in determining organizational effectiveness. The personal nature of restaurant service places emphasis on the importance of direct interaction between employees and customers. To maintain an effective workforce, this can be done by: This helps enhance employees' knowledge and skills to perform their tasks. With the use of staff/workforce scheduling, it helps to ensure staffing levels are sufficient to meet demand but do not exceed it. At the same time, it achieves maximum productivity from each and every member of staff, taking into account the need for appropriate staff welfare. Tasks should be allocated according to employees' capabilities. The manager also needs to ensure that staff understand their tasks clearly. Development of standards of performance or operating manuals with job descriptions and specifications may play an important role. Levels of performance are determined not only by the ability of staff but also by the strength of their motivation, which is derived from the fulfilment of an individual's needs and expectations. With reference to Herzberg's two-factor theory, the manager should fulfil the hygiene factors to prevent employee dissatisfaction, alongside the motivators to create an effective workforce. 
These include praising employees when they do good work, providing a sense of achievement, increasing the level of responsibility of employees, and taking a proactive view of the organizational climate.", "label": 0 }, { "main_document": "on it. Another inherent complication within custom is that of This is termed the 'psychological element' \"or belief that a state activity is legally obligatory.\" There is a sense of circularity in this requirement. In order to create a new custom, there must be a belief by states involved that such a norm is already legally binding. As encapsulated by Goldsmith and Posner, \"The idea of Therefore it seems rather inefficient to determine a formation of custom on a belief that custom is already in existence. It is exactly because the existence of The uncertainties surrounding the requirements of state practice and For example, in order to establish a concrete argument based on custom, there must also be a concrete basis for an argument; that would be custom itself. So without certainty it seems that custom is now of limited importance; it will possibly be relegated to the bottom if its problems are not dealt with. However there is also the argument that custom is rendered flexible without these precise set of requirements. This is linked with the argument that customary international law as a judicial tool promotes efficiency. Shaw, Malcolm N., Kunz, Josef L., 'The Nature of Customary International Law', See Cassese, Antonio, Cassese argues that \"the contention is warranted that a State is not entitled to claim that it is not bound by a new customary rule because it consistently opposed it before it ripened into a customary rule.\" Shaw, Malcolm N., Goldsmith, Jack L. & Posner, Eric A., See Benvenisti, Eyal & Hirsch, Moshe, Custom may at closer inspection still retain its importance in international law especially in relation to states that are not parties to a particular convention. 
This allows some sort of judicial activism to manifest itself and manipulate custom to promote efficiency. As per Benvenisti, "efficiency is the underlying principle- the Kelsen established the concept of But judicial activism is itself another source of debate as judges are not allowed to make law in the course of proceedings. Instead judges are to apply the law But it is evident that custom is used as a judicial tool to create new law, as highlighted in the two decisions of the ICJ, In the former, "it reveals the judges of the [ICJ] deciding the content of customary international law on a tabula rasa." So according to D'Amato, the judges circumvented the requirements of state practice and The impetus for such a decision was probably due to strong convictions among the majority judges that there should be a customary international law governing the use of force in that instance. In Thus to circumvent the two problematic requirements; state practice and It can be clearly seen that it is exactly the existence of these problems that eventually leads to efficiency. But is there really efficiency? If to promote efficiency means to create uncertainty and vagueness, surely international law is better off with a coherent set of rules and requirements. Therefore custom may indeed be of limited importance in light of this discussion. This means that these norms such as slavery
When asked directly, 94% agreed that GM ingredients should be labelled as such; however, before GM was mentioned, less than 1% mentioned GM ingredients as something they would wish to appear on labels. It is not currently law for food products in the USA to be labelled as containing genetically modified ingredients. However, at the next session of the Codex Committee on Food Labelling (Ottawa, Canada, May 1-5, 2006), the Committee will be discussing "proposed Draft Guidelines for the Labelling of Foods and Food Ingredients obtained through Certain Techniques of Genetic Modification/Genetic Engineering." In conclusion, there is a large difference between attitudes towards genetic modification in Europe and the USA. Of all countries, consumers in North America are among the most willing to accept GM produce, whereas consumers in Europe hold the most concerns. However, many Americans still harbour concerns, and consumer attitude is critical to the acceptance of a new technology. This is shown by the refusal of the major food companies McDonald's and Frito-Lay to use GM potatoes. Nevertheless, it is important to analyse which of these concerns are real, and which are perceived. Consumer concerns should not be simply dismissed as false perceptions due to a lack of understanding. Rather, the concerns should be acknowledged, and unbiased information regarding the technology should be made readily available in order to enable consumers to form better judgements. It should be the consumers' right to have access to this information. Consumer acceptance of GM foods is critical to the future development of this technology.
VTTR is assessed using a 250-word sample, and once each transcript was analysed a mean score was calculated. L is achieving a mean VTTR of 0.35 which, as with TTR, is suggested to be cause for concern. As mentioned previously, repetition influences type-token scores massively and this has been a major characteristic of L's speech throughout the transcripts analysed. In transcript 1 L uses the verb However, the most influential factor in this VTTR assessment appears to be L's use of L is, however, found to be using a relatively large range of verbs and seems aware of many implicit syntactic rules which accompany many of these, such as verb valency. Verb valency looks at the number of arguments a single main verb takes, with the final clause structure depending on the verb choice. L is demonstrating use of monovalent, divalent and trivalent verbs, with the verb taking one, two or three arguments respectively. Monovalent verbs do not take a direct object, with an example within L's utterances being L's repeated use of This demonstrates the subject, verb and two arguments in the form of the object and adjunct. Use of such features shows that L has acquired implicit syntactic rules; however, due to her age, errors would still be expected. L's TTR is suggested to be cause for concern by Templin's criteria (Fletcher, 1985: 47). Sophie's scores on this measure were also shown to be delayed, again suggesting cause for concern. The average TTR is suggested to be 0.50, with anything below, or indeed above, 0.45 being said to show atypical lexical diversity (Fletcher, 1985: 47). L's score is significantly below this marker, which does indicate concern; however it is important to remember that TTR averages were formulated on an older sample (3;0-8;0) and therefore may not be as representative of a younger child's language (Fletcher, 1985: 47). 
Bennett-Kastor (1988: 87) stresses that TTR is sensitive to sample size, and therefore suggests that factors such as repetition, which effectively decrease the sample size, may be partly responsible. However, since the sample sizes assessed for both L and Sophie were equal, and repetition is to be expected within any language sample, it appears that L's vocabulary may be developing at a slower rate than Sophie's. This is only a suggestion and would need to be assessed in more detail, and using larger samples, to form any significant conclusions. A similar situation arises when assessing L's VTTR, both in comparison to Sophie's achievements and against Templin's suggestions. Again L is performing significantly below Templin's (1957) suggested 0.45 average, achieving 0.35. Again this is suggested to be a cause for concern, and once again the same limitations of Templin's criteria being based on older children are faced. L is found to repeat some verbs regularly within a transcript, as referred to
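The calculation behind these scores is straightforward: TTR divides the number of distinct word forms (types) by the total number of words (tokens) in a fixed-size sample, and VTTR is the same ratio restricted to verbs. The sketch below is purely illustrative, not taken from the transcripts; the function name, sample size default, and toy word lists are assumptions.

```python
# Illustrative sketch (hypothetical, not from the study's transcripts):
# a type-token ratio over a fixed-size sample, as described in the text.
# VTTR would be the same calculation applied to the verbs in the sample.

def type_token_ratio(tokens, sample_size=250):
    """Ratio of distinct word forms (types) to total words (tokens),
    computed over the first `sample_size` tokens to control for length."""
    sample = [t.lower() for t in tokens[:sample_size]]
    if not sample:
        return 0.0
    return len(set(sample)) / len(sample)

# Toy example: heavy repetition of a few forms drags the ratio down,
# which is the effect attributed to L's speech in the analysis above.
repetitive = ["want", "that", "want", "that", "want", "teddy"] * 10
varied = ["I", "want", "the", "red", "teddy", "now"] * 10

print(round(type_token_ratio(repetitive), 2))  # → 0.05
print(round(type_token_ratio(varied), 2))      # → 0.1
```

This also makes Bennett-Kastor's sample-size point concrete: because the denominator is capped at a fixed window, repeated forms add tokens without adding types, so repetition alone can push a score below the 0.45 marker.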
In New Zealand, Cooke P in This was approved by the majority of the Supreme Court of Canada in Although Cooke P's approach in [1987] NZLR 443. [1999] Lloyd's Rep PN 241 (ChD). (2001) 207 CLR 165. Hans David De Beer v Kannar & Co (A Firm) and another [2002] EWHC 688 Hans David De Beer v Kannar & Co (A Firm) and another [2002] EWHC 688, at [92]. See, for example, CEF Rickett 'Where Are We Going With Equitable Compensation?' in AJ Oakley (ed), 19(3) Professional Negligence 422-436. Due to the close conceptual proximity of liability for a trustee's breach of fiduciary duty and liability of trustees for breach of their duty of care, it is important to dispel arguments that stem from the fiduciary principle before proceeding. Case law rejecting contributory negligence in breach of fiduciary duty has been based primarily on the distinctive features of a trustee's fiduciary duty. Considerable argument relates to the exemplary function of the fiduciary principle. This is the strong policy-based presumption that a trustee should bear the risk of all losses flowing from breach of fiduciary duty. It may be noted that the same may be said for tortious negligence as well, where there is arguably no similar exemplary function. G Watt, 'Contributory Fault and Breach of Trust', (Winter 2005) OUCLJ 5(2), 205-224, at 209. For a survey on both sides of the argument, see R Mulheron 'Contributory negligence in equity: Should fiduciaries accept all the blame?', (2003) 19(3) Professional Negligence 422-436, at 434-435. G Watt, 'Contributory Fault and Breach of Trust', (Winter 2005) OUCLJ 5(2), 205-224, at 211-212. See P Cane, The second argument against the introduction of contributory negligence into liability for breach of fiduciary duty is related to the intention element. One must distinguish Given the exemplary function of fiduciary duties, Blackburne J in Again, the same may be said for tortious negligence. 
Thus, trustees' breach of their duty of care is in the same position as tortious negligence in this sense. G Watt, 'Contributory Fault and Breach of Trust', (Winter 2005) OUCLJ 5(2), 205-224, at 209. [1999] Lloyd's Rep PN 241 (ChD). See R Mulheron 'Contributory negligence in equity: Should fiduciaries accept
Patterns of individual differences imply that the performance error view is not sufficient in explaining the discrepancies between descriptive models and normative models. Alternatively, these discrepancies may be due to computational biases. Responses in judgement tasks may deviate from normative models because of the resource-limited nature of the human cognitive apparatus and situational constraints - such as limited time - under which the decisions are made (Baron, 1985). Even if reasoning is carried out in an optimally rational manner, there may still be computational limitations at the algorithmic level (Goldman 1978) which cause individual differences in reasoning. Stanovich & West (2000) proposed that this hypothesis could be tested by measuring the relationship between cognitive capacity and performance on a reasoning task. A strong correlation would imply that algorithmic level limitations might hinder those of lower cognitive capacity from producing a normative response. Stanovich & West (1998) calculated the correlation between SAT total scores (as a measure of cognitive ability) and eight different reasoning tasks and concluded that to a certain extent, the failure to conform to the normative model seems to be caused by variation in computational limitations at the algorithmic level. Hence, contradictory responses to a judgement task could be attributed to the inability of participants to carry out rule-based reasoning with their limited cognitive capacity rather than a second reasoning system. Alternatively, it could be argued that systematic errors in reasoning may occur not because of cognitive limitations on the part of participants but rather due to the application of the wrong normative model by experimenters. According to this perspective, responses to tasks such as those put forward by Tversky and", "label": 1 }, { "main_document": "available to meet the strategic requirements. 
Resource deployment analysis will also determine how Tesco's resources and competences would need to change to undertake a strategy, and whether they have the unique resources and competences to sustain a competitive advantage. Tesco has been a success story in the UK market because of its innovative strategies, which meet the needs and requirements of customers and make it unique amongst the competition. The way in which Tesco capitalises on strengths such as its buying power and translates them into an aggressive pricing strategy has been a core competence in its success. The adoption of a hybrid strategy has also spread Tesco across a range of market segments to attract a variety of customers.
But it soon becomes clear that Apollo, a higher power than even a King, has control of the situation, as Leontes' son drops down 'dead' due to the 'great profaneness' of the King. Another similarity between the two different texts is the idea that misinterpretation, imagination and jealousy are linked, and also act as separate themes in both pieces of literature. Leontes in Act 3, Scene 2 is described as 'a jealous tyrant' before admitting 'I have too much believed mine own suspicion'. Leontes is jealous of an adulterous relationship he believes his old friend Polixenes and his wife Hermione are having, due to his misinterpretations. As early in the play as Act 1, Scene 2 his seemingly unfounded jealousy is revealed, his over-active imagination causing him to believe the adultery sincerely: The Oxford Shakespeare, The Winter's Tale (Oxford World's Classics) Act 1, Scene 2, line 108. In the Box Hill scene, which I have chosen, Emma's imagination again leads to misunderstanding when she believes Frank Churchill could have feelings for Harriet: Jane Austen, Emma (Oxford World's Classics, 2003), Volume III, Chapter VII, p. 294. This specific chapter is brimming with false readings, such as Knightley's false reading of Emma and Frank Churchill's relationship and everyone's misunderstanding of Jane Fairfax and Frank Churchill's relationship. Austen uses irony in this key chapter as a device to emphasise this ongoing theme. For example the statement made by Frank Churchill: Jane Austen, Emma (Oxford World's Classics, 2003), Volume III, Chapter VII, p. 290. is heavily ironic because if Miss Woodhouse truly did know what everyone was thinking, there would be no room for misinterpretation and less harm caused. A recurring motif in Emma is that of parties and social gatherings. The Box Hill episode is a prime example of this. 
Austen uses this idea as a representation of the society as a whole, a microcosm of society.", "label": 1 }, { "main_document": "the Penn World Tables The time period was chosen to eliminate any business cycle influences one might get if testing for a more limited time period. However, in order to be able to retrieve data for the different variables it was also necessary to limit both the countries included in the sample and the time period chosen. The data for all the variables for the entire sample is included in the appendix at the end of this paper. As can be seen from plotting the residuals (Graph A5 in the appendix), I fail the normality test due to an outlier (Ireland). However, this is not a problem as I can apply the Central Limit Theorem having more than 30 observations in my sample. I have also tried to correct for the outlier by including an impulse dummy variable (as described in the next section). The mean real GDP per capita growth rate is 2.22 %, although the maximum growth rate is an extreme 8.48 % (Ireland). The standard deviation of the real GDP growth is relatively large (1.85), as the range of observations go from -1.87 to 8.48 %. Investment share of GDP has an average of around 19 % and is fairly equally distributed from 5-30 % over the sample range. Initial GDP also features a quite large standard deviation of 8081, giving the point estimate of the mean fairly little certainty. The distribution of school enrolment is (negatively) skewed, with most countries in the top quartile between 75 and 100 % and a mean of approximately 75 %. Initial GDP measured in 1994 has an average of just above $ 12 000 per capita and the distribution is a bit clustered around $ 5000 and between $ 18 000 and $ 24 000. Alan Heston, Robert Summers and Bettina Aten, Penn World Table Version 6.1, Center for International Comparisons at the University of Pennsylvania (CICUP), October 2002. 
The variables included in my datasets were, after sorting them alphabetically and calculating averages, imported to Eviews. The estimated equation is as follows: When regressing the averaged real GDP growth per capita over the seven years from 1994-2000 on the initial level of income in 1994, the average level of investment share of GDP and the average rate of secondary school enrolment I obtained the following estimates: From the regression output I obtain some interesting results. According to growth theory the coefficient on investment ( It is also significant at a significance level of 5 % (which I will use for the rest of this paper). In accordance with Solow's convergence theory there is signs of absolute convergence with a negative, although insignificant, coefficient ( The intercept C would in principle imply that if all my explanatory variables were zero, there would be a negative growth rate of real GDP of around 2 %; however, I would say that it is an unrealistic interpretation in this particular case. To test the validity of my estimated equation, I performed a series of diagnostic tests. The tests are summarised in Table 3 below: The estimated", "label": 0 }, { "main_document": "uniform and being identifiable to their clients it relieved them of having to introduce themselves before touching the client or performing tasks (Tiffany, 1987) and therefore not gaining consent, which is prosecutable as assault under civil law (Jenkins cited in Sweet 2003). By removing the clothes that distinguish them, nurses found that they were much less likely to approach a patient without first identifying themselves (Smoothy, 1991). This nature of care takes account of the ethics of midwifery practice that a patient does not give up there rights once they become a 'patient' (Sweet, 2003). 
Just as the familiarity of the nurses' uniform appears the give practitioners rights to the patients it also appears to give the wearer passport to all areas of the hospital without question by staff, security or patients. This is a problem when considering the ease with which one can obtain a nurses outfit or a white coat. A member of public gained entrance to a maternity ward and abducted a baby, although she was dressed in a uniform, it was not the uniform of that hospital demonstrating the trust that the uniform we wear instils in the public (Schan and Cleary, 1996). If all staff were dressed in mufti then security would have to be more rigid, as nurses found when dressed in mufti they were asked to identify themselves more frequently and the use of photographic identification came under more scrutiny leading to improved safety for both staff and patients (Sparrow, 1991). It is questionable how stringent staff would be at checking identities if all staff wore mufti when considering the volume of visitors on wards over large periods of the day. The idea that the clothes we wear can be used in a therapeutic nature is an aspect which has influenced the abandonment of uniform in a number of areas as early as nineteen forty seven. Staff working on a children's ward felt that the ward would feel more homey and the children would feel more secure if staff were dressed in casual clothes as it added a theme of normality to an otherwise unusual atmosphere. This idea is also used by psychiatric nursing staff in an attempt to improve relationships between staff and clients (Smoothy et al 1989). This idea could also be related to midwifery practice as the women under our care for a large majority are not ill but going through a normal process. By wearing mufti we are treating childbearing as the normal physiological process that it is (Hicks, 1992). 
However Taylor (cited in Oxtoby 2003) claims the therapeutic nature of the uniform we wear must not be overlooked, psychologists claim that colour has a crucial part to play in the way nurses and midwives are perceived by the public, the colour blue is seen as calming, pristine but authoritative at the same time. Sparrow (1986) also comments on the therapeutic benefits of colour and noted that when staff wore mufti a majority tended to wear fashionable dark or neutral colours which may add to feelings of unease or stress. The", "label": 1 }, { "main_document": "and collected the rent, no longer interested in innovation in mercantilism. This evidence seems to indicate a shift in Amsterdam towards the country house and thus a rentier approach around 1700, later than traditionally thought. Burke is wary of the fact that it is possible for there to be an entrepreneur landowner, as well as a rentier without a country residence, but he argues that the overall picture created should be enough for analysis. However, Burke still asserts that there 'is some evidence for a shift out of trade into land in this period'. For this he looks towards factors pushing the Venetian elite out of trade, the loss of Cyprus and the arrival of the English and Dutch merchant ships, and the pull of the land, namely the trebling of wheat prices between 1550 and 1590. However, this was done initially, especially in Venice, with the same profit seeking spirit with which the elite once engaged in trade, Once they became comfortable in their Burke, Burke, Burke, Mackenney, Lane, It is also important to place such an evolution in context. Burke argues that there was a decline in the mercantile activity of Venice. However, Venice was still 'the metropolitan market for a rich, densely populated area' and thus there was expansion in smaller scale and shorter distance trade, mainly in local products such as olive oil. Burke, Venice and Amsterdam, p. 69 and p. 
127 Lane, This relatively new approach sometimes created problems for the author in finding a balance between comparison and conclusion, although this is not a problem with his main argument. The book provides an important portrayal, not only of the structure of the patriciate, but also investigating further into their attitudes. What needs to be considered is that it is such a large subject, condensed into a relatively short study; hence, it is not necessarily every detail that is of great importance, but the main ideas and overall character of the patriciate that Burke manages to create. Grassby, review in", "label": 1 }, { "main_document": "especially in New Guinea. Small clan segments tend to be useful in pre-capitalist societies, which are often identified as politically and economically vulnerable, therefore these groups need allies as ceremonial exchange partners, for the purpose of foodstuffs in the case of disasters, defence in the case of warfare, or external trade for valuables. Commonly, these political alliances for warfare and for ceremonial exchange are sustained through transaction of foods and valuables, as well as arranging a marriage to secure kindred, which tend to be identified as life long reciprocal transactions, therefore these marriage alliance are frequently controlled by corporate descent groups. Big men or lineage leaders in stateless societies occupy internal and external roles, and political leaderships particularly had better accessibilities to food stuffs, women, and valuables through ceremonial exchanges. In some region, leadership careers were open to talent, demonstrating superiority, achievement and luck would help to obtain leadership position. In New Guinea, big men frequently demonstrate effectiveness in entrepreneurial roles, in especially success in warfare and ceremonial exchange. In Kula, successive commonly leads to enhance the social position of leader, and also enhance the power of decent groups and followers. 
The meaning and usage of primitive valuables in aboriginal economies have internal and external roles, for example Kula shell necklace and bracelets are often identified as valuables. Acquiring the valuables in political or social transactions was commonly required the status. Furthermore, the difficulties of understanding the role of primitive valuables are derived from the various meaning of money, for example, the primitive valuables as reciprocal payment in warfare, as commodity money, or as early coinage. (Dalton,G.1977:191-204) The introduction of processual archaeology by Marshall Sahlin made large impact into the formalist approaches in especially the study of European prehistory, which turned to emphasise quantitative modelling (Bradley,R. Edmonds,M.1993:4-5). This section is going to discuss the development of exchange studies in archaeology with representing the outline of formalist approach, as well as those of limitations. In archaeological term, most widely applied formalist approach is regression analysis, which is known as a model of down the line exchange. The regression analysis of the fall-off curve was mainly applied to identify the particular types of exchange distribution, and explains this occurrence mathematically with the assumptions that the primitive economy is described in modern economical term, such as minimization of advantage, scarcity, and surplus. This mathematical model identified as relatively close to geographers. Moreover, this analysis is applicable cross-culturally in many archaeological cases studies. The regional study of exchange was pioneered by Colin Renfrew's obsidian distribution in the Aegean and Near East. It indicates the principle of fall-off procedure that the distribution of highly localised materials shows certain regularities that materials are abundant near the source(Hodder,I.1982:201-203). 
This simple method tends to be considered as high validity, and this method has contributed the basic principles for various subsequent studies of obsidian trade on the regional level. (R.Torrence .1986:14) Generally, three mathematical curves have been considered relevant to the distance regressions analysis: Pareto model, Exponential distance decay, and Gaussian fall off. In especially, Gaussian fall off first applied to", "label": 0 }, { "main_document": "factor would also increase if the precision of cutting is important. Developments could also be performed in order to improve the performance. Alternative materials could be used if the fatigue strength was not sufficient enough. Increasing the number of shaft sample would also improve the reliability of the analysis, which may also increase the variety of effective design. In the design process, a number of analyses were carried out in order to achieve an optimal gearbox design for a commercial meat slicer. The manual calculations were carried out smoothly and a number of gear combinations were investigated. Unfortunately the computer program did not operate as expected so that both results could not be compared. However, the computer program only offered a limited number of variables which meant the flexibility was limited even such program run normally. Nevertheless, manual calculations enable a more flexible and reliable analysis during design process, hence the volume of the gear box could be minimized more easily. After a range of manual calculations, the gearbox could be operated as requested from 1800rpm to ~90rpm, using a double reduction gearbox with gear ratios of ~4.472. After that, the minimal gear diameters of 36mm and 161mm with 27mm thickness were selected for small and large gears respectively. Such values were then utilized for shaft analysis and hence bearing analysis near the final stage. 
Finally, two 25mm and one 12mm diameter shafts with 100mm length were chosen for gearbox shafts and 6 bearings (25mm and 12mm diameter with 37.5mm and 18mm thickness respectively) were selected to support the corresponding shafts. The gearbox design was then constructed in SolidWorks, CAD drawing was then produced in order to minimize the required gearbox housing. A size of", "label": 0 }, { "main_document": "can be considered as a cylinder in 3 dimensions. Hence If Since volume of a cylinder in 2 Substituting this value in the initial integral and solving it Since the density is the ratio of the mass and volume. Substituting this in the new integral, This is the theoretical derivation of the value of the moment of inertia of a disc. Hence Since Torque is the rotational equivalent of Force, it is the product of the angular equivalent of mass I and the equivalent of acceleration Hence Substituting linear quantities into the equation From Newton's Law of motion Since the drum was brought to rest before restarting it, let us take the value of u to be Hence Substituting in Eqn 1, the value of F is If m is the suspended mass, then substituting the value of force, We have ignored the effects of friction and the tension of the spring in doing this which will result in some considerable experimental error that we would analyse and judge at the end of the experiment. Hence the value of I from this equation is Hence when we measure the time taken for the different weights to fall and plot a graph of the reciprocal of the square of time against the mass, the slope of the line gives the value of mt An apparatus consists of a wooden plate with a holder for fixing the disc at one end and a drum with a helical groove on the other end. 
The drum is has a string tied to it with a lasso on the other end to suspend weights.The apparatus also consisted of a stopwatch used to measure the time taken for the weights to fall to the ground when they were suspended from the drum using the string. A steel disc of radius 7.6 cm and mass 0.696 kg was mounted on the disc holder and screwed in tightly. The string was then wound around the helical groove. A weight was then attached to the lasso at the bottom of the string and dropped down by suspending it gently without applying any downward force manually. Care was also taken to make sure that the drum was at complete rest when the weight was dropped. The time taken for the weight to fall to the ground was measured using a stop watch. The same process was repeated for various weights ranging from 10 grams to 100 grams and making a gradual increase in the weight to finish experimenting the maximum possible range of weights within the given time. The disc was then changed to a smaller one of radius 5.05 cm and mass 0.314 kg and screwed in tightly. The same processes were repeated on this disc and the values were noted. The tension of the cable was considered to be a constant assuming the string to nearly mass less compared to the mass of the discs and the suspended weights. The string was considered to be inextensible and hence the acceleration due to gravity was taken as a constant. Since the", "label": 0 }, { "main_document": "way as well as a parallel way. Some tests have been done, with several matrices sizes, using either a simple or multiple (to compute the inverse of a matrix A) system of linear equations. However, such benchmarks (as the matrices multiplication) have unfortunately not been done using the Jacobi method because some difficulties and some problems concerning MPI on Beowulf came and took some times to be solved. So, no graph can be drawn but we can analyse the results found. 
The result matrix found using the sequential way and the parallel way are exactly the same which means both works well (the result have been checked). However, the execution time using the parallel way was always longer than the one using the sequential way; at least with for all the tests which have been done (the matrices were not larger than 500 x 500). The parallel algorithm has been tested using 4, then 6 processors. We could explain these results because as we saw in the algorithm, a data gather is often done. After discussing about these unsatisfactory results with my supervisor, Chris Cox, I have understood that the Jacobi method is efficient with very large matrices (larger than the one I used) and with very specifics matrices (matrices which are dense around the diagonal). After this discussion, I checked again if I could obtain better result using the parallel algorithm, but I didn't manage to obtain them. I have also checked again whether the matrix result obtained with the parallel algorithm was correct in order to be sure there was no mistake in the code and I realized that the matrix result was correct. So, the algorithm works but to be efficient using several processors the matrices representing the system of linear equations has to be very large. We have seen in this project several technologies and concepts concerning parallel mathematical applications, distributed computing etc... A lot of these technologies are still developed and are the subject of a lot of research projects throughout the world. As we saw, distributed computing is a very large and interesting subject, but complex. That is why this project involved a lot of researches dealing with these technologies and concepts which I nearly didn't know before. The most difficult part has been the understanding part. Indeed, I had to understand the concepts dealing with the computing part (distributed computing, parallel algorithm, Message Passing Interface, etc...) 
and the mathematical part (different properties of matrices, Jacobi method, etc...). But after this part, came the practical part which involved the development of my own mathematical applications. This part was also very interesting because I could try concretely all the concepts I had read before. Work using a cluster such as the Brookes Beowulf was totally new for. It was a real great opportunity to be able to use it because we can't use such a cluster \"every day\" at home. Applications which have been created in this project gave some interesting results, such as the results obtained by the matrices multiplication program with a speedup not bad", "label": 0 }, { "main_document": "consult with each other. It has been proved that even a very experienced expert can fail to notice some of the usability issues therefore to achieve better results the test should be conducted by several evaluators. One would think that the more evaluators we hire the better results we get. Unfortunately this is not necessarily true as the benefit to cost ratio starts to decrease rapidly with too big amount of evaluators. According to Nielsen the optimal number of evaluators is 3 to 5. A typical expert session lasts 1 to 2 hours however in case of large and very complicated interfaces more time might be needed. It is then recommended to have more than one session each concentrating on a specific part of the user interface. Having conducted the heuristic evaluation we should get a list of usability problems with reference which heuristic has been violated. The evaluators are supposed not only to point the problem but also explain why it is a problem (according to usability principles). It is essential to be as specific as possible. Basing on heuristic evaluation output it will be easy to redesign the system to make it more usable. 
There are numerous different sets of heuristic designed by different usability experts: This usability test will base on the set of heuristics by Benyon, Turner & Turner (from \"Designing Interactive Systems\"): The system should also inform the user about its current status. It should also adhere to general standards of similar systems. We must ensure that all words, phrases and concepts are familiar to the user. Some of the affordances are determined culturally. The user should easily find adequate menus and links placed in some logical order. The system should enable quick and effective way to correct them e.g. in case of filling a long form we should enable the user to change the content of just one, incorrect field (e.g. spelling mistake) instead of filling the form from the beginning. It should also prevent the user from making serious errors e.g. by asking for confirmation. It is highly recommended that system asks for confirmation in case of payments and other money transfers. The user should be also given a chance to change the look of the system (e.g. various colour schemes in Microsoft Windows) It is also essential that the dialogues do not contain irrelevant information. Constant error messages cause frustration and make the user leave the system. The test pointed out several problems which have been classified into 3 groups in descending order of importance: Having conducted the test the participants have found a few usability problems. Even though they concern mostly aesthetic issues some of them need immediate attention as they have been classified to major severity group. Most problems are easy to solve by the web designer - in some cases there is a need only to place additional link. Some are probably just an oversight (e.g. lack of link on \"click here\" text). In the future the designer should pay more attention to the text on the images. The service is", "label": 0 }, { "main_document": "th GBP. 
The firm has had a great improvement in year 2003 and they have been able to keep the same standard in 2004. their average profit margin has been 1.13% for years 2002, 2003 and 2004. When comparing the turnovers of 2002 and 2003 there has been a great improvement in 2003 but since the profit margin in 2003 is low the profit before taxation in 2003 does not give such a great value. Below is a table consisting information about comparison of turnovers,etc in 2002 and 2003, (sources: Below are bar-charts of the annual turnover and percentage profit margin of Peugeot Motor Company Ltd, UK for years 2002, 2003, and 2004, Below is a graphical presentation of the evolution of the key variables during years 2002, 2003 and 2004, The above graph shows that operating revenue, cash flow, profit and shareholders funds have been increased over the three year and overall it shows a good positive growth. (sources:- The most viable strategy for a new entrant would be to This is because the market the entrant looking at is a complex monopoly and gaining market share is difficult since the market is very established. The new entrant would have difficulties gaining market share through low price since the very big companies that already exist which acts as a oligopoly have a low profit margin and a good market share; therefore they could easily start a price war where the new entrant can not exist. By differentiating the product the new entrant could concentrate on a more specific small market and since there is no oligopoly the new entrant can enter the market and gain market share easily. The entrepreneur should acquire the company since it already has a good market share and advertising for a new brand name would not be necessary. The firm has improved during the last few years and will show a positive growth in the future. 
Since the market the firm acts in is an oligopoly, it would be difficult to enter the market as a new company to compete with the current firm and starting a price war with the current firm is not possible because the current firm has a good market share and low profit margin.", "label": 0 }, { "main_document": "within Thomas Heatherwick studio there were other job profiles that complemented and supported the activities of the director such as office administrator and company marketing assistant. On the other hand White Design Associates did not have these job positions and much of the administrative and marketing work has been done or overseen by one of the directors. A person who has been working within the practice the longest would have taken up this role. He (in both practices this position was taken up by a man) would usually manage a particular project himself. At the same time, he would also serve as a support for project architects giving them advise or providing them with help related to running a job. This is because a projects manager had the most insight into the way building industry operates and the way that company preferred to deliver projects. Apart from running a particular job himself, his role within White Design Associates or Thomas Heatherwick Studio was to: There were not major differences between the activities of people who fitted this job profile in White Design Associates and Thomas Heatherwick Studio. Project architect or project designer (the title used at Thomas Heatherwick Studio) would be the person in charge of running one or several projects from the early stages up to their completion. They would be engaged in usual activities that relate to the stages of development process from developing design options up to hand over of the finished project. 
Within White Design Associates, people undertaking this job had a high degree of independence in terms that the input of directors or projects manger would be limited to the very initial stages of the process. Only if the project was facing difficulties or if project architect felt that he or she needed advice or help from senior team members would they get involved. On the other hand, due to different approach to design process at Thomas Heatherwick Studio, project designers had, to an extent a lesser degree of independent decision making. Because company director had developed personal approach to design, the project could change or shift due to director's changing attitude towards the project. Within both practices, project architects and designers would also be involved in some administrative work, such as preparing bid submissions. This job description best fitted the work that I had been undertaking. Within the offices that I worked for the responsibilities and tasks varied significantly. Within White Design Associates, an architectural assistant would usually be allocated to one project architect. Together they would form a team and work on delivery of a single project, with project architect having a senior role. However, the assistant would be exposed to all stages of building development either directly as a participant in the workload, or indirectly by observing the tasks that project architect is undertaking. Coupled with that, this job position would usually include a responsibility for a smaller - scale development that the office is engaged with. In this situation the assistant would implement the knowledge gained through the collaboration with project architect and", "label": 0 }, { "main_document": "Admitted via GP on On admission, Mrs Mrs She was coughing up increasing amounts of phlegm prior to admission, and there was no diurnal variation in amount. 
Mrs Mrs She was also suffering from a fever on admission, and described herself as being 'rather delirious'. She had also noticed an increasing hoarseness of voice over the preceding days, but could not elaborate further. Mrs She had not complained of haemoptysis, chest pain or night sweats. Regarding her leg pain, Mrs She described it as being 'stinging' in nature, and graded the pain as being 10/10. It was aggravated by standing and weight bearing, and relieved by E45 topical cream. It had no timing associations, or was associated with anything else. Mrs Mrs Her most recent bout involved her c-spine where she described pain, morning stiffness, and fatigue, loss of flexion/extension / lateral movements and weakness that required a cervical collar for support. At the time of the exacerbation, she had no rash, fever, constipation, ulcers or fatigue. She did not complain of neck pain or a change in sensation down her arms. Mrs This causes her difficulty in all areas of her life. Mrs She has also had to give her main hobby of knitting, which was also an incidental source of income. Mrs Her prescriptions on admission were: There was no significant family history. Mrs She has a strong family support network of her daughter, sister in law, and neighbours. She does not drink alcohol. A systems review revealed nothing of note. The most likely diagnosis at this stage would be a Lower Respiratory Tract Infection (LRTI) on a background of lung fibrosis 2 to RA. Important, less likely causes requiring exclusion includes a pulmonary embolus (PE), pneumothorax, Cardiac failure, and valvular disease. Malignant causes are less likely, but must also be considered. First presentation of adult onset asthma, anaemia, COPD, physical deconditioning. 
The strong, typical history of chest infection coupled with the examination findings of tachypnoea, bilateral basal crackles, and bronchial breath sounds at the right base all further support the diagnosis of chest infection superimposed on lung fibrosis 2 to RA. The wheeze may have previously been present, or a new finding. The only unexpected finding was the patient being afebrile. Mrs Observed on ward for 4 days. Mrs Avoid people with respiratory infections. Have Influenza inoculation. Patient became febrile during the night (T= 38c). CRP was elevated. The CXR showed reticular shadowing bilaterally. The ECG was unremarkable. Peak flow was 240 ml/s. Discharged uneventfully after 4 days. There is a reasonable amount of recent clinical evidence on RA. The most relevant summaries are presented below. One systematic review and one additional RCT found that introducing disease modifying antirheumatic drugs early (oral gold, intramuscular gold, hydroxychloroquine, methotrexate, minocycline) significantly improved radiological progression, swollen joint counts, and quality of life scores at 12 to 60 months compared with delayed treatment. One systematic review has found that hydroxychloroquine versus placebo reduces disease activity and joint inflammation in people with rheumatoid arthritis. We found insufficient evidence about effects on functional status. One", "label": 1 }, { "main_document": "For Francis Fukuyama, John Ruggie, and other reputed international relations experts, the end of the Cold War heralded the emergence of a \"new world order\" The shift away from a bipolar distribution of power that these analysts have in mind, however, is only one aspect of this transformation; perhaps more importantly, the stage of world politics has recently been invaded by a host of transnational actors. 
As transnational cooperations, non-governmental organisations and superstate institutions gain influence in global debate, they may \"call into question the primacy of states\" by challenging their sovereignty Some even warn that international organisations can be used by powerful states as tools to secure their hegemony These somewhat pessimistic forecasts may be exaggerated, however; if transnational actors can be made accountable to civil society, they will more likely reform rather than annihilate the Westphalian system. Through the emergence of \"global governance\" Ruggie p554 Kaldor p583 Cronin p103 Barnett In popular belief, some of the most powerful and potentially threatening actors of the current international stage may be transnational coorperations (TNCs), firms which hold branches, subsidiaries and production facilities outside their own country. These have proliferated in recent years, emerging even in developing countries According to Peter Willetts, their growth and the scale of their activities leaves \"the sovereignty of most governments . . . significantly reduced\" As TNCs have highly mobile production facilities, they can easily move out of a host country if its policy is no longer attractive to them. Due to such 'regulatory arbitrage', host governments must compete for a firm's presence by lowering taxes, relaxing regulation, or improving particular infrastructures ; major aspects of state policy thus become dictated by the needs of the TNC. In Indonesia, for instance, the government has voluntarily relaxed environmental laws in order to accommodate the American mining company Freeport-McMoRan Copper & Gold. This firm's \"importance to Indonesia's treasury . . . 
ha[s] helped secure it against challenges from local people [and] environmental groups\" Morevoer, TNC's can cause states to lose control not only of policy, but also of currency and foreign trade, as the bulk of financial flows generated by these companies leads to considerable fluctuations in a currency's exchange rate and in a country's current account. Furthermore, companies can occasionally evade state supervision altogether, for example by transfer pricing, where TNC's transfer their funds from one country to another to avoid taxation. In cases where firms follow directives from extraterritorial headquarters, domestic branches may be subject to the policy of a foreign government and may disregard domestic regulation TNCs can indeed pose very real threats to state sovereignty. Willetts p363 Willetts p363 Perlez and Raymond Willetts pp363 - 365 Beyond the impact of TNCs, the growth of transnational actors also worries states by fostering the development of a \"global civil society\" Modern voluntary associations seek to shape social rules by gaining independence from the state, engaging in transnational public debate, and becoming globally connected In particular, the number of Non-Governmental Organisations (NGOs) is \"skyrocketing worldwide\" NGOs are generally seen as potentially destabilising activist movements that have been 'tamed'", "label": 0 }, { "main_document": "This essay will assess the three most common interpretations of the Bolshevik Revolution of October 1917 in trying to discover whether the Bolsheviks were successful because they had a mass following. 
It will examine whether the insurrection was a coup d'état. The final interpretation, and the one to which this essay will subscribe, is that the Bolsheviks came to power in October 1917 because they had the support of an important minority in Russian society, namely the workers and soldiers, and that the peasants also saw them as the best potential provider of their wishes at the time, especially regarding the redistribution of the land. Therefore, there was no real opposition to the Bolshevik coup, but it would be wrong to describe them as having popular support. The essay will also examine other decisive factors which meant the Bolsheviks were successful in their quest for power, including the significance of Lenin's role in the Revolution as well as the mistakes and unpopularity of the Provisional Government. Some historians, the vast majority of whom were living in Russia under Communist rule, believe that the Bolsheviks had the support of the masses in Russia and that this was the primary reason for their success - 'In the Soviet view, the Bolshevik triumph was based upon its success in winning the support of nothing less than a majority of the population.' They argue that the Bolsheviks managed to educate the people and form a new mass consciousness. Indeed, between February and October 1917, the membership of the Bolshevik party rose from 24,000 to 350,000. The party even managed to attract the support of the peasants - 'Nevertheless the party succeeded in winning [the peasants] over to its programme... In the words of Lenin, \"Here was objective proof, proof not in words, but in deeds, of the people coming over to the side of the Bolsheviks.\"' The July Days had proved that the Bolsheviks had the support of the vast majority of the workers, soldiers and sailors.
After the Kornilov Affair, Bolshevik support began to grow even more rapidly as they put forward the motion of a left-wing dominated government, which the Soviet passed - 'the social polarization of the summer gave the Bolsheviks their first real mass following as a party which based its main appeal on the plebeian rejection of all superordinate authority.' The Presidium (the organizing committee) of the Petrograd Soviet now had seven members, four of whom were Bolsheviks - 'Bolsheviks thus stood at the head of the institution that was most representative of the popular movement.' This helped to increase their popularity still further - 'During August Bolshevism was growing almost throughout the entire breadth of the land.' They were the second largest party in the country after polling 33 per cent in the Petrograd City Duma elections - a massive rise of almost 15 per cent in just three months. This trend continued as the Bolsheviks scored a victory in the local Duma elections, receiving 52 per cent of the total vote cast, 'winning an outright majority in eleven of
This causes a double loss: the loss of what those resources were previously contributing and the loss from the damage that they are now inflicting. Therefore, it is essential to create institutions, particularly in developing countries, which can design and implement effective policies to protect political stability and lower the risk of conflict. Murshed (2002), \"Conflict, Civil War and Underdevelopment\",
Furthermore, Savage (2004) suggests that violent or antisocial traits can occur as a result of neglect, and believes a shift in research from the media violence \"blame game\" to the causes of childhood aggression and neglect may be more useful. Potter (2003) agrees with the need for such a shift, and suggests that the on-going war between the public, policy makers, researchers and producers only serves to complicate and distort the issue. This, in addition to Savage's argument, indicates the difficulties associated with proposing a link between viewed and displayed aggression, making the question about the need for concern even harder to answer. Savage also points to the difficulties of conducting this research. Most evidence is collected using qualitative methods, which come with their own set of caveats. Obtaining evidence by means of a questionnaire, in the manner of Bushman and Geen (1990) for example, is not always the most accurate way to obtain results. Also, the research focuses on testing children, who may not always treat scientific experiments with the appropriate respect and seriousness. The question of indirect aggression, whether in the form of verbal insults or more subtle antisocial behaviour, has also been studied. This type of behaviour can have just as much of an impact on behaviour as overt aggression (Coyne, Archer and Eslea, 2004), but can be harder to observe. This can confuse the research, leading to an inaccurate picture of events. The question of indirect aggression can seem to be the beginning of a slippery slope argument, as one may wonder how far the criticisms of television content will expand in the future. It is also worth noting that much media is allocated an age suitability rating, showing the role
In conclusion, Plato's Republic does contain a model state that can be seen to be based on elitist principles; but we must remember that our political mindsets are different to those of the classical period. Moreover, to condemn Plato for his elitist ideals would be to condemn our present day society for retaining them to a certain extent, after centuries of 'progress'. If we were to condemn Plato for his viewpoints, it would be seen as an attack on our own belief that everyone is entitled to their own opinion. Naturally we may supply the counter arguments to any viewpoint but we should never go beyond this; unless we wish to repeat the travesties that were the crusades and the witch hunts.", "label": 1 }, { "main_document": "to control the engine speed. For each specified engine speed the engine torque and power were recorded and can be seen in Table 1 To obtain the performance map of the Rover K16 engine the following parameters are monitored: Rearranging Reference source not found. torque values can be estimated for given BMEP values. For a BMEP of 2 bar the torque produced is: At specified torque values the parameters listed above were measured, and can be seen in Table 2. From these results the bsfc can be calculated using Reference source not found. , the results of which can be seen in Table 3. From Table 3 it is possible to produce a performance map which shows contours of constant bsfc on a graph of bmep against engine speed. The power and torque values for different engine speeds have been plotted in Graph 1. It can be seen that power increases with engine speed, from 2000 rev/min up to 4500 rev/min the power is almost linear, above 4500 rev/min the power begins to drop off. If further tests were made at higher engine speeds it would show a dramatic drop in power, this can be seen in Figure 1. 
This drop in power can be attributed to the \"increase in frictional flow losses and the flow into the engine during at least part of the intake process becoming choked\" Since the engine's flow rate is now being restricted, the volumetric efficiency of the engine decreases. JOHN B HEYWOOD - \"Internal combustion engine fundamentals\" p218 The torque drops off at lower engine speeds due to reduced thermal efficiency, since there is more heat loss between cycles. The volumetric efficiency decreases at low engine speeds due to back flow caused by the late inlet valve closing, which is designed to allow increased air flow at high engine speeds. Looking at the torque curve in Graph 1 it can be seen that the maximum torque value is at approximately 4300 rev/min. This maximum value appears in the mid range of the engine's speed range, where the engine is operating most efficiently. To achieve a constant-torque engine the volumetric efficiency would need to be 100%, which is unattainable. Factors such as valve timing, valve lift, porting, and intake manifold design all affect the volumetric efficiency of the engine. In the case of the K16 engine tested in this report, maximum torque is designed around the mid rev range (2000-5000 rev/min) under normal driving conditions. This is achieved by tuning the valve timing. Engines with variable valve timing such as the K16 1.8 VVC (Variable Valve Control) can achieve a more linear torque curve than fixed valve timing engines. * The adjustment of inlet and exhaust valves' opening/closing, and valve overlap The performance map in Graph 2 shows contours of constant bsfc on a plot of BMEP against engine speed. The performance map agrees with the description given by Colin R.
Ferguson in his book \"Internal combustion Engines Applied Thermosciences\" page 479 which states that \"typically the bsfc is minimum at about 60% of the maximum BMEP and 60% of", "label": 1 }, { "main_document": "Committee on the Elimination of Discrimination against Women (CEDAW), and the Committee on the Rights of the Child (CRC) regarding FGM in African countries, and an analysis and critique of these observations. Although only the Declaration on the Elimination of Violence Against Women specifically identifies FGM as physical, sexual and psychological violence (Art. 2), other relevant conventions include the Universal Declaration of Human Rights, the Convention against Torture. See Wood (2001) and Han (2002). Egypt, Sudan and Senegal were asked by the CESR in May 1999 E/C.12/Q/EGY/1. (List of Issues: 21/05/99). E/C.12/Q/Sud/1. (List of Issues: 13/12/99). E/C.12/Q/Sen/1. (List of Issues: 13/12/2000). E/C.12/Q/EGY/1. (List of Issues: 21/05/99), paragraph 34. E/C.12/Q/EGY/1. (Reply to List of Issues: 28/03/2000), paragraph 34. The reply also noted that the government promotes educational efforts and disseminates health information through the media and with the help of religious ministers to highlight the dangers of excision, and stated that the Egyptian Penal Code \"prescribes penalties for anyone who performs such operations.\" However it also added that since it is prohibited in health centres and hospitals, unqualified persons continue to secretly perform it. Ibid. Sudan was requested to provide statistics disaggregated by age group, geographic location and socio-economic standards on reports that urban educated families are abandoning FGM. The reply also explained how the government was pursuing education and awareness-raising to highlight the harms of FGM and help eradicate it through lectures, midwife and nurse training and through women's unions. E/C.12/Q/Sud/1. (List of Issues: 13/12/99), paragraph 37. E/C.12/Q/Sud/1. 
(Reply to List of Issues: 24/06/2000), paragraph 37. In both cases, the CESCR is concerned with the success of various measures, but the responses highlight the difficulty of providing immediate and quantifiable 'results'. Egypt's reply is rather debatable because it is difficult, if not impossible, to make a statement claiming FGM's eradication (even in urban areas), for the main reason that FGM is an irreversible process, and to claim that it has been eradicated requires waiting until the next generation to see if girls who are circumcised now end up circumcising their own daughters. Sudan's response is slightly more restrained as the reply indicates that such detailed statistics (which would require quite intensive research) are unavailable at present. An examination of concluding observations regarding FGM by the CESCR, CCPR, CRC, CEDAW and CAT of just over twenty African states yields responses that are repeated frequently and can be categorized into the following groups: Statements of concern regarding FGM that outline how it is a harmful and discriminatory traditional practice See, Zimbabwe, CEDAW, A/53/38/Rev.1 part I (1998) 13, paragraph 141. See Lesotho, CCPR, A/54/40 vol. I (1999) 52, paragraph 255. See Guinea, CESCR, E/1997/22 (1996) 39, paragraph 207. Calls to conduct studies on the practice and plans to eradicate it. See, Yemen, CCPR, A/50/40 vol. I (1995) 49, paragraph 261. Calls to take steps to eradicate the practice through education and sensitization programmes See, United Republic of Tanzania, CRC, CRC/C/108 (2001) 71, paragraph 403. See, Democratic Republic of the Congo, CEDAW, A/55/38 part I (2000) 21, paragraph 216. Positive praise regarding the adoption of laws against
He also successfully negotiated with the Powhatans, which ensured that while the colony gained strength, they were kept relatively safe from the local Indians. Like Cortez he facilitated the survival of the men under his command. The Powhatans' first mistake was to continue trading food with the English whilst their colony grew in strength. After the Powhatans killed 350 colonists in 1622, John Smith's successor proceeded to exterminate the Powhatans, even poisoning Powhatan chiefs at a peace settlement. Disease may have been the single most important European weapon deployed against the native peoples in America, despite it being unintentional. In both Mexico and North America, after the Europeans had come into contact with native tribes, the Europeans passed on diseases of the Old World. The Indians had been isolated from the rest of the world for centuries, so European diseases such as smallpox, measles and scarlet fever produced devastating effects on native populations: 'According to Henry Dobyns, there were almost a hundred epidemics and pandemics of European diseases among Native Americans between the first contact and the beginning of the twentieth century.' He continues, 'many Native American communities lost 75 per cent or more of their members within just a few weeks.' In the short time that it took the Conquistadores to defeat the Aztecs, the Spanish also managed to infect the Indian populations in Mexico (and Tenochtitlan) with smallpox during the winter of 1520. Although many people died, the most devastating effect was the deaths of the Aztec leaders. Those who emerged to take their place 'lacked both the experience of their predecessors and the time needed to consolidate their rule and reaffirm allegiances with tributaries.' Notably, the Spanish also lost many supporting Indian allies and their leaders.
However, it was far less devastating for the Spanish, as the leaders who came to replace the old native leaders were more loyal to the Spaniards than their forerunners. In both Mexico in the sixteenth century and in North East America in the seventeenth century, European diseases severely weakened native populations, robbing them of the ability to fight and reducing their ability to feed themselves. Some historians' estimates put total population reductions due to European diseases at ninety per cent. James Wilson, The Earth Shall Weep, A History of Native America (Picador 1998) p. 75 Ross Hassig, Mexico and the Spanish Conquest, (Longman Group UK Limited, 1994) p. 102 Whilst the natives had superior numbers when the European settlers first arrived in the New World, it is clear that after the fruitful voyages of Christopher Columbus, there would be an endless supply of Europeans willing to risk everything in order to make a profit or even a new life for themselves in America. They could also bring with them, and increasingly manufacture locally, the weapons of subjugation. The New World was 'a land to be desired, and, once seen, never to be left.' It was the initial aim of all the European powers
Power can be measured in the sense that a decision-making arena, relevant actors and their preferences can be identified, whilst decisions made can be analysed and compared with actors' preferences. Dahl has been criticised for promoting what is essentially a one-dimensional view and for placing too much emphasis upon individual actors. Dahl focuses exclusively on the exercise of power, ignoring the extent to which power is a possession as a result of wealth or status. If such a conceptualization allows power to be measured, then on what basis can the importance of different decisions be calculated? Dahl does not account for political power which may exist but remain unexercised. Most prominent within such criticism, however, is that Dahl's pluralist conception ignores those circumstances in which decisions are prevented from happening - the area of non-decision making highlighted by Bachrach and Baratz in Hay C. ' Bachrach and Baratz put forward a stark alternative to the essence of political power offered by Dahl. Motivated by a desire to defend elitist theories, Bachrach and Baratz acknowledged decision making was essentially a power relation in so far as the actions of A affect B, but they argue this alone is not a coherent conceptualization of power. Power is also exercised in what they termed 'non-decision making'; agenda setting - the 'second' face of power. Schattschneider states that, to be powerful, A devotes his energies towards limiting the values and practices of the decision-making process. Power is exerted in setting the agenda - an area overlooked by pluralists. Thus Bachrach and Baratz were essentially advancing a debate about the boundaries of the political, seeking to broaden them beyond the decision-making arena. This they criticised as a mere talking-shop, from which all controversial and contentious issues had been excluded.
Bachrach and Baratz argued accountability could only be ensured if the agenda-setting process was also subject to scrutiny - they advocated a more inclusive conception of power. A process of non-decision making is said to have helped sustain the arms race during the Cold War, in which there seemed a consensus on the need for nuclear deterrents - the option of disarmament remained unexplored. The irony of such an approach, however, is that chief among the criticisms of Bachrach and Baratz is that, just as they argue Dahl ignores the agenda-setting process, so they are accused of neglecting the means by which preferences are shaped. Quotation taken from Heywood A. Quotations
Figure 2 shows an equivalent circuit of the induction motor. However, a simplified version is often preferred for its convenience and will be used later on in our investigation. Table 8 below illustrates the values of the name plate ratings of the induction motor. Using the ohmmeter, determine the DC resistance per phase of the motor. This is illustrated in Table 9 below. Now connect the variable voltage source to the energy analyser, switch on and set the variable voltage control to 380V (line-to-line voltage). Then switch off. Finally, set the dynamometer to T Manu and the load torque to 0. The no-load test of an induction motor shows the rotational losses of the motor and gives information about its magnetization current. This is due to the fact that the only load on the motor is the friction and windage losses, so all the power in this motor is consumed by mechanical losses. Then we are asked to measure the waveform of the starting current. To do this, an oscilloscope is used. First of all, channel volts/div has to be set to 200mV with a time base of sec/div = 50 msec. Push the 'Acquire' button and then follow the procedures below: - On the screen 'Acquire' menu, select 'peak detect' and push the 'trigger' button - on the screen 'trigger' menu, set 'mode' to single and 'slope' to rising - use the 'Trigger level' knob to set this to about 200 mV - set run/stop - push the run/stop button such that it shows 'armed' and 'ready' - the scope is then ready to use and we turn on the induction motor. Measure the peaks using the oscilloscope and print a copy of the starting current trace, which can be found in the appendix. The printout from the oscilloscope illustrates a starting current of approximately 1 ampere then a settling down to approximately
Firstly, the creation of the FTA between Mexico and the USA (NAFTA) led to other South American countries applying for membership and eventually led to the creation of MERCOSUR, as countries recognized the benefits of regionalism and the possible costs to non-members. The creation of NAFTA thus led to the creation of further regional projects. The Single Market initiative in the EU also started a domino effect in Europe. The EFTA countries recognized the economic costs of non-membership, decided to apply, and most of them actually went on to become full members of the EU. In conclusion, several lines of argumentation were considered. In particular, the paper has tried to make a distinction between the motivation for regionalism and the changes which caused the rise of regionalism. The end of the Cold War and its related consequences both enabled and motivated regionalism. Advances in globalisation, existing levels of economic interdependence and the globalisation of the structures of the international economy in particular led states to engage in projects which try to deal with the forces of the global market. Both neo-liberal and neo-realist theories provide a perspective on the arguments about economic and political motivation. Furthermore, the domino theory of regionalism builds on existing theories to explain the recent rise in regional projects. It seems that the international level provides a more important source of motivation for regionalism. Global changes and developments enabled regional projects as well as made them more attractive for states seeking both greater political stability and economic growth.
Still, it seems that regionalism reflects the national interests of individual group members more than a general, broad liberalisation.
The Ottoman Empire also faced radical challenges to its legitimacy as the notions of \"Liberty, Equality and Fraternity\" released during the \"Age of Revolutions\" continued to threaten rulers and religious establishments worldwide. Bayley, In contrast, Goldstone's account of the world crisis in the mid-17th century takes a different approach. His framework suggests a balance of material and cultural factors as compared to Bayley's account; it gives a predominant role to material factors in bringing about state breakdown but a predominant role to culture and ideologies in shaping state construction, which will be discussed later. For the former, he was right in asserting the drastic increase in population in the agrarian empires; Asia Minor witnessed a 50 to 70 percent increase between 1500 and 1570, while Istanbul's population in 1520 swelled to a metropolis of 700 000 by 1600. Goldstone, \"East and West in the Seventeenth Century: Political Crises in Stuart England, Ottoman Turkey and Ming China\", pp. 106. Goldstone, \"East and West in the Seventeenth Century: Political Crises in Stuart England, Ottoman Turkey and Ming China\", pp. 112. Goldstone, \"East and West in the Seventeenth Century: Political Crises in Stuart England, Ottoman Turkey and Ming China\", pp. 116-117. With both writers examining causes of world crises from both ends of a spectrum, one can firmly establish that both material and non-material causal factors interact to trigger and sustain a world crisis. The third aspect of analysis covers the nature or character of world crisis.
In C.A Bayley's coverage of the world crisis in the late 18th century Similarly, the Ottoman Empire's military failure against the Austrians and
Explanatory variables include factors such as the adolescent's gender, age, ethnicity, friends' influence, parental attitude, the influence of the people they live with, their interest in schooling, the amount of income earned, the effect of anti-smoking advertisements and school discussion of the dangers of tobacco use, examining how each will independently correlate with the dependent variable. The choice of these explanatory variables will allow us to compare how biological/genetic factors, the social environment, individual characteristics, commodity price and control interventions can each increase or decrease the probability of an adolescent taking up smoking. The dependent variable is also binary in nature. Therefore, before turning to estimate the regression, it is necessary to code all the dependent and independent variables correctly into dummy variables from the raw dataset. A better approach to estimating this regression is to run a binary choice model, taking the probit regression approach. 3Appendix 3 Before trying to interpret the significance of the factors, it is necessary to check if there is multicollinearity in the model, as even though it will not affect the coefficient output, it will inflate the standard errors and thus bias the z statistics. A multicollinearity problem existed between two variables, as the correlation shown in the table between them is 0.078890, which is bigger than the correlation shown between the independent and dependent variable when we looked into the relationship between To solve the problem, I will choose to drop the variable of parental influence and estimate the model again. The regression model: (Smoke/Non Smoke) = Constant + Age + Female + American Indian + Peer smoke + Living people smoke + loss of interest in schooling + high income + parental strict attitude +
The result is shown on the following page and can also be seen in Like the performance of the uncoded scheme, the comparisons and contrasts in both the BER and PER against SNR of the different modulation schemes with the CRC method lead to the same conclusion: BPSK has the best performance, followed by QPSK, then 16QAM, with 16PSK the worst. 4.5 Multi-carrier Modulation (OFDM) At last, we have accomplished all the modulation and convolutional coding parts. Now we come to the The OFDM model, offered by IEEE 802.11a (Wireless LAN), can simulate data transmission in the radio environment. See in In the modulator bank ( The OFDM data, the pilots and the training sequences are collected together, a padding of 11 zeros is appended to the end of each frame, and the cyclic prefix is added as well. The multi-path channel model allows you to s At the receiver, note that the cyclic prefix is removed and the FFT is performed to convert back from time to frequency. The frequency-domain equalizer divides the received pilots by the sent pilots to extract an estimate of the channel frequency response and then equalizes the data by multiplying the received data symbols by an estimate of the inverted channel response. The demodulator bank performs the channel decoding. By re-modulating the decoded data symbols and comparing them with the received symbols, an error vector can be created. An estimate of the channel SNR can be found and used to adaptively control the modulation used. Therefore, for poor channels BPSK can be used with a low data rate, while near-noiseless channels allow 64QAM and a high data rate to be used. Run the model. By trial and error, using the no fading, flat fading and dispersive fading channels, for SNR = 0, 2, ..., 30, plot the PER against SNR and annotate the graph, indicating which portion of the graph uses which modulation scheme and data rate for the three propagation channels. 
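The pilot-based one-tap equalizer described above can be sketched with NumPy. The 64-point FFT and 16-sample cyclic prefix match the 802.11a numbers, but the 3-tap channel, the all-pilot training frame and the noiseless link are simplifying assumptions for illustration, not the Simulink model itself.

```python
import numpy as np

rng = np.random.default_rng(1)
N, CP = 64, 16  # subcarriers and cyclic-prefix length (as in 802.11a)

# One QPSK data frame, plus a known pilot frame on every subcarrier
bits = rng.integers(0, 2, 2 * N)
data = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
pilot = np.full(N, np.exp(1j * np.pi / 4))

def ofdm_tx(freq_syms):
    t = np.fft.ifft(freq_syms)
    return np.concatenate([t[-CP:], t])  # prepend cyclic prefix

h = np.array([1.0, 0.4 + 0.3j, 0.2])  # hypothetical 3-tap multipath channel

def channel(x):
    return np.convolve(x, h)[: len(x)]

def ofdm_rx(x):
    return np.fft.fft(x[CP : CP + N])  # strip prefix, back to frequency domain

# Channel estimate: divide received pilots by the sent pilots
H_est = ofdm_rx(channel(ofdm_tx(pilot))) / pilot

# Equalize data symbols with the inverted channel estimate
eq = ofdm_rx(channel(ofdm_tx(data))) / H_est

# Demap: because the CP absorbs the channel memory, symbols are recovered exactly
bits_hat = np.empty_like(bits)
bits_hat[0::2] = eq.real < 0
bits_hat[1::2] = eq.imag < 0
```

Because the cyclic prefix is longer than the channel impulse response, the linear convolution looks circular to the FFT, which is exactly what makes the single complex division per subcarrier a valid equalizer.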
Up to now, we have gained all the materials and proofs needed for the further analysis and discussion. In PSK modulation schemes (BPSK, QPSK and 16PSK), the instantaneous power out of the modulator is always 1, because the points on their constellations are equispaced around a circle, so the amplitude is a constant 1. However, it is not constant for QAM modulation. Take the example of 16QAM: it has three instantaneous power levels: 0.33333, , and 1. Refer to the figures below. The graph above compares the bit-error rates of BPSK, QPSK, 16QAM, 16PSK and 64QAM. As we mentioned in the implementation part, the level of performance from good to bad is: BPSK, QPSK, 16QAM, 16PSK and 64QAM. It is seen that higher-order modulations exhibit higher error rates; in exchange, however, they deliver a higher raw data rate. BPSK is the simplest form of PSK. It is This modulation It is, however, only able to With four phases, QPSK can encode two bits per symbol - twice the rate of BPSK.", "label": 0 }, { "main_document": "Many historians, and Che Guevara himself, have recognized that there were two stages to the Cuban revolution: the insurrectionary stage and the post-insurrectionary stage. The first phase was armed action lasting to 1 There are two ways to begin an assessment of Che Guevara's contribution to the insurrectionary phase of the Cuban Revolution; the first is by studying what Che actually did during the insurrectionary phase of the revolution and what he accomplished - in other words, was he a success? The second approach is to examine what Che meant to Cuba and the revolutionary forces during the insurrectionary phase. Through these two approaches an evaluation shall be made of Che Guevara's contribution to the insurrectionary phase of the Cuban revolution. The main area of concern when discussing Che Guevara is the problem of objectivity and impartiality, and an attempt has been made to draw a fair conclusion from the evidence collected. Che Guevara, Donald C. 
Hodges, The historian Richard Harris once wrote 'revolutionaries are not born, they are made'. As a child and teenager, Ernesto Guevara never appeared to be the man he would later become. He was plagued by asthma his whole life, though, as many historians and biographers agree, this could account for Ernesto's strong desire to prove himself as capable as his peers. He competed in football and rugby and has always been described as an 'adventurer'. Still, despite this daring and competitive character, Ernesto never had any strong political beliefs until much later in his life, though his sense of the injustices of the world began to develop much earlier. Ernesto became fascinated by injustice when he took his now famous trip around Latin America on a motorbike with Alberto Granados, in 1952. This trip opened his eyes to the fate of many Latin Americans and marked his first step into the revolutionary world. Richard Harris, Ernesto traveled extensively round Latin America after completing his medical degree in 1953, and his political ideas developed rapidly during this time, especially after he reached Guatemala. He supported the new President Jacobo Arbenz when the government expropriated land belonging to feudal estates and the United Fruit Company to give to the Guatemalan peasants; he also supported the government's treatment of the indigenous peoples, though at this time he was still not Communist. It was the CIA-sponsored invasion of Guatemala that pushed Che into his belief in armed resistance. As the invasion force pushed into the capital, Che tried to persuade the government to give arms to the masses to support 'their revolution', but this failed and Che got his first glimpse of 'Yankee Imperialism'. Due to his support of the government, Che was now a wanted man whose convictions and beliefs were growing stronger by the day. He was soon forced to flee and he arrived in Mexico in 1954. Che's arrival in Mexico would be where his real career as a revolutionary began. 
Che Guevara, Che's meeting with Castro in Mexico launched Che into the world of a revolutionary and a guerrilla. His political beliefs were now firmly entrenched in his mind and", "label": 1 }, { "main_document": "however, in reality, it appeared to mark an end to a progressive kind of thought. Vanessa Redgrave saw the 1960s as a period far from liberating. She states how Joe Orton was sent to prison for being homosexual and how the Lord Chamberlain censored playwrights and productions. The Sexual Offences Act 1967 decriminalised homosexual activity between consenting adults in private. This was initially seen as liberating and a step forward in society. However, although this was a remarkable breakthrough, the age of consent still differed from that of those involved in heterosexual relationships. As Vanessa Redgrave states, the 1960s may have had legal changes, but it was an era far from liberating. It is claimed that this reform led to a revolution in attitudes; the legal harassment was removed, yet the law did not alter the national attitudes and stigma towards sexual deviants; it merely altered the framework within which the law operated; in reality, 'the Sexual Offences Act of 1967 did nothing to eliminate the hard core of bigotry and hatred' (Davenport-Hines, 1990, p328). It is evident that all of the reforms of the late 1950s and 1960s marked a retreat from the social controls imposed in the Victorian era. Yet on reflection on both the article and the further literature, it appears that the sexual revolution was like a rose with thorns (Ferris 1993, p186). In some ways, there was a distinct move to liberation and progression. In reality, however, such progression was not without drawbacks. What appeared to be radical legislative reforms were all contradictory in nature. As Weeks illustrates, there were two key points to the problems of such reforms that were passed in the 1960s. 
Firstly, each reform was argued for on its own merits; as support was needed from the government for each reform, the chief concern was to obtain a parliamentary majority vote. Thus, nothing too radical would ever be proposed, for fear of rejection. Secondly, the reforms were distinctly limited in nature. The homosexual law reform did not legalise homosexuality as such; it narrowly decriminalised certain aspects of male adult behaviour in private (Weeks, 1981, p267). There were evident changes in the law, yet did these translate to liberalisation? What is evident from the above article is that the sexual revolution has meant different things for different people. For some people, it was an era of great optimism and liberation, yet for others, it just subordinated women further and exploited the consumer market. As Abbie Hoffman states, 'revolution is not something fixed in ideology, nor is it something fashioned to a particular decade. It is a perpetual process embedded in the human spirit'. (Abbie Hoffman. Date unknown). Perhaps to fully understand this, and to realise that liberation could just be on an individual level, is indeed the greatest revolution of all.", "label": 1 }, { "main_document": "a few generations but is then unsuccessful or abandoned, resulting in a discontinuity of settlement. Akrotiri It is a rockshelter site on the Akrotiri peninsula in the south of the island with an average radiocarbon date of 10,464 (Simmons 1991:865). The site has been interpreted as a specialised food processing site (Simmons 1999:310) and is characterised by an enormous faunal assemblage containing over 200,000 bones dominated by the remains of extinct Pleistocene species. Cultural features supporting an anthropogenic origin for this deposit include hearths, burnt bone, chipped stone tools and various other artefacts including 2 picrolite pendants/beads and a pierced stone disc. 
Hereafter, site names are made up of the name of the nearest village followed by the nearest topographic feature, but are usually referred to only by the second word, and the village name is dropped. There are some exceptions to this rule, such as Khirokitia- Referring to a site by the second part of the name prevents confusion in cases where villages have more than one site in their environs (e.g. Kalavassos, Kissonerga etc.). There is some debate as to whether this site represents colonisation of the island or if it is just a visitation site. Archaeologists have been criticised for their "tendency not only to emphasise the first evidence of human presence in the islands, but also to assume that such finds are indicative of The excavator concludes that it is 'unclear' whether the The people using the site are clearly exploiting the local resources, particularly the endemic fauna, however, since it is estimated that In the absence of any substantial continuity demonstrable between this and later sites, the Akrotiri phase should be defined as an 'occupation' phase rather than colonisation. Given that current evidence indicates that the island was abandoned when native faunal resources declined, it could be argued that this represents visitation. As more discoveries are made which fill in the gap between the Akrotiri phase and the Aceramic Neolithic, it may be that what has thus far been assumed to be visitation or unsuccessful colonisation may require re-evaluation. "The absence of evidence...is not evidence of absence" (Watkins 2004: 24) and perhaps future research may reveal that Prior to the excavation of Comparisons were made between the archaeological remains from contemporary mainland sites and those from Khirokitia in order to establish the origin of the 'first' Cypriots through similarities, and to ascertain whether they were direct descendants or if differences in material culture implied that they had island predecessors who were as yet undiscovered. 
Stanley Price expressed this in the form of two opposing hypotheses: the 'colonisation hypothesis' and the 'antecedent development hypothesis', describing the latter as "not only logically but empirically inadequate" (1977b: 66), though it has since been proved correct through discoveries of earlier material. Though it is possible that the Akrotiri people may have been the distant ancestors of later colonisers through the impartation of some kind of 'residual memory' of the island passed down through the generations (Simmons 1999:323), the Aceramic Neolithic phase is so far removed (both chronologically and culturally) that", "label": 1 }, { "main_document": "plan enables the patient to adjust their medication in response to particular symptoms and signs. Such a strategy avoids delays, prevents exacerbations and adds to the patient's sense of control over their asthma [2]. IgE plays a central role in the development of allergic diseases. People with allergies produce IgE when exposed to allergens; this causes a release of chemicals such as histamine that produce the symptoms associated with allergic diseases, e.g. itching, sneezing, shortness of breath and cough. Recombinant monoclonal antibodies to IgE may play a role in the treatment of allergic diseases such as asthma. Indeed, one review found that anti-IgE led to a reduction in asthma exacerbations and in some cases withdrawal of regular inhaled steroid use [8]. The prevalence of asthma is increasing worldwide, with approximately 10% of people having suffered an asthma attack [9]. Exposure to certain stimuli produces inflammation and structural changes in the airways, resulting in airway hyperresponsiveness and obstruction to airflow. 
The aim of intervention is to: minimise or eliminate symptoms; maximise lung function; prevent exacerbations; minimise the need for medication; and facilitate self-management of asthma. Many asthmatics do not use their inhalers correctly, resulting in only 10% of the medication actually reaching the patient's airways. It is therefore vital that a patient's inhaler technique is observed to ensure that they are gaining maximum benefit from it and to prevent their asthma treatment being inappropriately stepped up to include steroid use. Upper respiratory tract infections and exposure to allergens are thought to be the most common triggers of severe asthma exacerbations. 
However, this notion of a gay relationship as an equal one is not always true, since the influence of the dominant patriarchy and its concomitant values of 'masculinity or femininity' is so entrenched that even within same-sex couples, one partner becomes dominant while the other becomes dependent either emotionally or financially, thereby falling into the same trap as a heterosexual relation, e.g. 'butch-femme' among lesbians, where one woman behaves like the man/husband while the other pretends to be the wife. Ann Ferguson, "Gay Marriage: An American & Feminist Dilemma" (Winter 2007) 22.1 In the end, I would like to say that the demand for the right to same-sex marriage should be subject to intense scrutiny and critical thinking. It is imperative to broaden our notion of 'family' in order to be open to all other kinds of families, e.g. a same-sex couple adopting a baby without being married, thereby creating a different definition of family apart from the heterosexual nuclear one. Also, it needs to be noted that unless an egalitarian and gender-sensitive marriage is developed, wherein the relationship is based on equal trust and respect, there is no purpose in claiming the right to same-sex marriage except to have access to the state-sponsored marital benefits and privileges, which are not, of course, unimportant. Still, the idea of gay marriage is rooted in an assumption of an equal relation which, if legalized, might even transform the heterosexist bias of the institution of marriage. However, there is a real danger of pronouncing a moral judgement on those who choose not to be married, and also of regulating the sexual fluidity currently existing in homosexual discourse, which permits gays and lesbians to experiment with their sexualities by having relationships with men, women, transsexual people etc., so as not to define their sexual orientation as purely heterosexual/homosexual. 
Thus, in effect, marriage law could be an instrument to domesticate and tame the so-called 'sexual promiscuity' of gays and lesbians by forcing them to look for monogamous relationships in order to be 'respected' by society. But there also exists a constant demand among some gay couples to be legally married and to create an egalitarian relationship distinct from the patriarchal one. Hence, same-sex marriage should be", "label": 0 }, { "main_document": "by the following phases: Problem identification: Establishing why there is a need for the system. Requirements determination: Specify what the system needs to do and what information it needs to store. Feasibility study: Ensure the system is a feasible option. Systems analysis: Determine the functionalities of the system and create a model of the system processes needed to meet the requirements. System design: Define the classes and methods needed to meet the system functionalities. Construction: Write and test the code, modifying as required until testing reveals no errors. Implementation: Put the completed solution into use. Maintenance: Make any changes, corrections and additions to the system. This stage was not included as this is not a real-life project. Evaluation: Verify that the system meets the specification. In this section the effort (labour required to complete a task) and the schedule (time frame required to complete the task) for each phase of the systems life cycle are calculated. Since the project report was due in on 28 It was essential to have the project completed and tested by 28 I split the project into the following milestones in order to ensure the project was completed on time. I also created a project website, which was useful in keeping the supervisor informed about the project progress at all times ( An online store allows you to be open for business 24 hours a day, 7 days a week. Not only is this an important convenience for the customers, it also means more revenue for the merchants. 
Many merchants say that 20% of their revenue comes through online stores. Another benefit that merchants gain from an online store is the reduction in overhead costs, as the merchants don't need to employ staff to take orders. With the right payment processing tools, these functions are all done automatically. And lastly, an online store helps merchants to reach markets across countries. Customers from the United Kingdom are often seen shopping online with merchants based in the United States of America or other parts of Europe. The most important part of selling online is accepting payments from customers. Online payment processing offers customers the convenience of submitting their card details on the merchant website and getting their goods delivered to their doorsteps. But, as e-commerce has been an extra source of income for online traders, it has also been a source of income for people who make revenue by using stolen cards and information. Fraud is increasing dramatically these days with the expansion of modern technology, resulting in the loss of billions of pounds each year. Revenue lost through Credit Card fraud in the United Kingdom has been growing rapidly over the last five years with Fraud is as old as humanity itself and can take an unlimited variety of different forms. However, in recent years, the development of new technologies (which have made it easier for us to communicate and helped increase our spending power) has also provided yet further ways in which criminals may commit fraud. Traditional forms of fraudulent behavior such as money laundering have become easier to perpetrate
Slavery was not a new idea to the Europeans. However, the Church could never "proscribe slavery as unconditionally immoral as it functioned in the society of men" The Catholic Doctrine did not oppose slavery as such, since the master and slave were equal in the sight of God, but there was a distinct difference between the Church's treatment of the Indians and the Negroes. The Jesuits arrived in Brazil to evangelise the Indian pagans; they protected them from exploitation and "were tutelary and paternalist" They created specially designed villages, or aldeias, which entirely reorganised Indian society and transformed them into a working force on the missionaries' fields and plantations, provoking uproar among the local settlers. Consequently the natives were used as labour but did not suffer the hardships they would have encountered on normal plantations. However, as the Indians were extremely recalcitrant and their population decreased whilst the Negro population increased, the Indians were regarded as exotic but the Negro was the main source of labour. Slavery was an accepted feature, often essential to the economy and society of all ancient civilizations. The Europeans, due to Christianity and the Church, transformed slavery into serfdom. They had practiced slavery since 1444, importing 900 slaves annually from East Africa to supply the agricultural workforce they lacked. Laura Foner and Eugene Genovese, Edwin Williamson, There has been a great deal of turmoil regarding the punishment and brutal, savage behaviour towards the Negro slaves. Not only were they abducted by force from Angola, but the transport conditions were also so appalling that about 20% of the slaves perished while crossing the Atlantic. They starved and thirsted in the insanely cramped conditions; epidemics were frequent; but this environment broke the slaves' will, enabling subjugation. 
The harshness of life in Brazil and the foul living conditions drove the slaves to resist; however, they paid dearly for impertinence. There are numerous accounts of barbaric treatment of slaves; punishments ranged from flogging to castration, novenas It was unsurprising that the slaves resisted, rebelled and consequently ran away, forming quilombos Slaves were tied down and flogged for 9 to 13 consecutive nights "Slaves broken on the wheel were tied to a wooden frame while the executioner broke their bones beginning with the smallest in the fingers and toes, and progressing to the arms and legs, and ribs before delivering final, massive blows to the genitals and head." Chasteen and Tulchin, Runaway communities It seems evident that throughout the 16 The presence of the bandeiras, which were bands of 200 men who tracked and hunted down Indians, later to be used as slaves, also ridiculed the legal system. This ignited fierce opposition from the Church, yet it was powerless to police the vast Brazilian coast and aid every persecuted victim. Technically the Portuguese crown had established an attorney-general under whose jurisdiction came all matters relating to the treatment of slaves, and fines were placed upon those", "label": 1 }, { "main_document": "a signature on a paper document. Supposing that A wants to digitally sign a message to B, A computes a one-way hash of the document and then encrypts the hash using A's private key; the encrypted hash is appended to the original document. Then A sends the message along with A's public key. Only this public key can decrypt the encrypted hash. B strips off the encrypted hash and uses A's public key to decrypt it. B also computes the hash of the received message, then compares it to the hash he obtained by decryption. If the values are equal, the digital signature is verified, which means that there is no doubt that it is A's private key that encrypted this message. 
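The hash-then-encrypt signing flow just described can be sketched in Python using SHA-256 and textbook RSA numbers. This is purely illustrative: the toy modulus n = 3233 is far too small for real use, and reducing the digest modulo n is a simplification standing in for the padding schemes real signatures use.

```python
import hashlib

# Toy RSA key pair (textbook values): n = p*q, with e*d = 1 mod (p-1)(q-1)
p, q = 61, 53
n = p * q        # 3233
e, d = 17, 2753  # public and private exponents

def digest_int(msg: bytes) -> int:
    """One-way hash of the document, reduced into the RSA modulus range."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # A "encrypts" the hash with the private key d
    return pow(digest_int(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # B decrypts the signature with A's public key e and compares hashes
    return pow(sig, e, n) == digest_int(msg)
```

Here `verify(b"contract", sign(b"contract"))` succeeds, while an altered message produces a different hash and so (with high probability at this toy size, and overwhelming probability at real key sizes) fails the comparison, matching the description above.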
A sender signs a message with his private key in a way that guarantees not only that the message came from this sender, but also that it has not been modified. Even the slightest change in the document will negate the signature, as the crypto functions bind mathematically with a hash of the document. And if the message needs to be kept private, then additional encryption is added. Public-key technology is widely used in many branches these days. Its effectiveness and reliability in both encryption and digital signatures made it a useful tool in wireless networks. Algorithms used in PKC, such as RSA, made illegal decoding of a message even more complicated. The concept of the digital signature appeared a few years before a practical realization of it was available. The first practical method which fulfilled developers' expectations about digital signatures was the RSA signature scheme, whose main innovation was the introduction of the RSA algorithm. The RSA algorithm, published in April 1977, was named after Ronald Rivest, Adi Shamir and Leonard Adleman, who discovered and patented it. Since then, the algorithm has been used in many Internet-based applications. RSA-type encryption is employed in web browsing programs like Netscape Navigator and Microsoft Explorer, where it is used in implementations of the Secure Sockets Layer (SSL) protocol. The algorithm is also widely used in our everyday life, often without us even being aware of it. Who thinks about RSA when one uses a debit or credit card? But as a matter of fact, RSA is the algorithm which makes these transactions possible, and secure. Companies like Mastercard and VISA employ RSA in the Secure Electronic Transactions (SET) protocol. The algorithm is used in public key cryptography, which was described in the previous question. But how does it work? This cryptosystem is based on the assumption: " " This is how the RSA algorithm employs one-way functions. 
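The one-way property the essay appeals to (multiplying two primes is easy; recovering them from n is hard at scale) can be illustrated with the classic textbook RSA parameters. These numbers are toy-sized for illustration; real keys use primes hundreds of digits long.

```python
# Classic textbook RSA example (toy-sized, illustration only)
p, q = 61, 53
n = p * q                # 3233; factoring n would recover p and q
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse (Python 3.8+)

m = 65                   # message, must be < n
c = pow(m, e, n)         # encryption: c = m^e mod n  -> 2790
m_back = pow(c, d, n)    # decryption: m = c^d mod n  -> 65
```

Anyone can compute c from the public pair (e, n), but recovering m requires d, and computing d requires phi, i.e. the factorisation of n. That asymmetry is exactly the one-way trapdoor the passage refers to.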
Such a function is simple to compute in one direction but nearly impossible (or very hard) to invert. According to RSA, to decode a message one must find all the prime factors of a given number and use the following mathematical algorithm to find the result: According to the RSA formula: This operation is very hard to reverse even if we know C, e and n. That is why we change our exponent into d: We know that: The encryption and decryption exponents, (d) and (e), are related", "label": 0 }, { "main_document": "the loss of a degree of control over states' ability to implement their policy objectives, marked by the increased importance of transnational relations, and pre-existing levels of intraregional trade led states to look for new forms of political governance. Regional institutions and projects can be understood as aggregating states' ability to control and decide in policy areas where individual states have lost part of it. By pooling their power, states gain a greater degree of relative power and, in this sense, external independence, at least in relation to the environment outside the region. The rise of regionalism is to a certain extent a response to the effect of globalisation on states' ability to implement their policy objectives. In this sense, regional institutions like the European Commission provide a regional level of policy output for member states and, by representing the region externally, are able to exert a greater level of relative power than single members. Individual member states thus gain more power in the international field. In particular, smaller states like Belgium and Cyprus gain power by joining powerful regional organisations with collective decision-making like the EU. However, political institutions and the integration of non-economic policies, e.g. in the EU, are an exception rather than the rule, as most other projects are concerned mainly with the convergence of economic policies. 
While economic motivations, as explained above, are important, globalization and The Asian financial crisis of 1997 particularly illuminates the lack of global economic governance and regulation. ASEAN, as a response to these, started to develop institutions to prevent or at least tackle market failures. The development of a monetary union in the EU is also motivated by states' willingness to gain greater control over their monetary policies and avoid problems associated with weak and unstable currencies. Regional projects can be seen as providing the foundation of political governance to match the expanding force of the market. (Balaam and Veseth 2005) Another factor contributing to the regional projects is Neo-functionalism argues that "the expansion of economic activity generates pressure on enhanced regional or international coordination by enabling such organizations to fulfil functions that states can no longer perform." (Mansfield and Milner 1997) This can also be seen in NAFTA, where existing levels of trade between the USA and Mexico created a level of interdependence between these two countries which made the trade agreement easier to accept. Regionalism here is seen as creating benefits for the members involved and creating costs for non-members, which makes membership particularly attractive. In this sense, membership in a regional bloc creates a competitive advantage against non-members' companies, as it guarantees preferential treatment for members' companies. The potential loss of competitiveness makes the option of remaining outside the bloc very costly, as it can cause trade diversion, expanding trade between members at the cost of trade between non-members and members. As it is better to be in than out of the regional bloc, regional integration is naturally expansive. 
Once a group of countries decides on an economic regional project, it is rational for other countries to try to join the bloc or create one of their own (as seen with UK", "label": 0 }, { "main_document": "them to do. Politics and power exist in all organisations, from schools to multi-billion-pound businesses. In FamRelief specifically it was clear that politics existed between the Regional Administrator and the District Manager. The District Manager felt there was little support received from the Regional Administrator and as a result was less than forthcoming with sales data that she needed to perform her role effectively. The Regional Director felt her role was inhibited by restrictions on trading and the restrictions of national targets, a use of power exerted by Head Office. However, although power and politics are exhibited at different levels of the organisation, the recommendations made to FamRelief are more concerned with improving the availability and flow of information, and the communications in the organisation. The main benefit of group work was the syndication of different views and individuals' knowledge. It was also beneficial having students studying CBS and Management Science, as we were able to combine knowledge of IT and IS with more specific business skills such as report writing and organisational analysis. As a group we assigned individuals to the different tasks of interviewing the FamRelief individuals and then to work on particular sections of the assignment. We worked hard and the assignment was brought together and submitted on time. Important lessons were learnt: regular, clear communication within the group, full attendance at meetings, knowing what was required before and after meetings, and creating a dynamic where all members felt at ease to contribute were all important. Away from the group, effective use of email and phone communication was vital. 
We also learned that we worked better in a group session when it was purely a case of syndication of work rather than commenting on the work. We squandered time trying to complete each stage of the assignment as a group. Belbin's 9 team roles framework (1996) shows that our 6 members displayed characteristics of these roles. Initially I occupied the role of a co-ordinator, and then at the end a completer. Other members with greater IS experience took on the role of specialist, while one individual was good at averting friction and calming the waters, thus taking the role of a teamworker. A rich picture of the group process is attached which illustrates what we did well and poorly as a group. Clock faces indicate where time was not used efficiently, and crossed swords indicate conflict.
There are three types of channel noise which can be modelled using the 'Multipath Channel' block, each of which includes AWGN: no fading, where the SNR remains constant during simulation; flat fading, where the amplitude and phase shift of the signal vary over time; and dispersive fading, where the amplitude and phase of the signal change and a varying amount of echo is also applied to the signal. The latter can be used to realistically analyse the 802.11a system's immunity to noise caused by multipath reception. This section provides a detailed comparison of the coding and modulation schemes used throughout the project. For the majority of comparisons, BER curves will be used. A BER curve is a plot of Bit Error Rate against Signal to Noise Ratio, and shows how effective a modulation scheme is at providing immunity to noise. Before comparing real results, it is useful to note what the curves mean. Figure 4.1 shows four possible shapes of BER curve. A system that performs like (a) would be preferable to one that produced curve (b), since its BER drops much faster, at a lower SNR. For any given SNR, system (a) will perform the same as or better than (b). System (c) could be better or worse than (a) depending on the SNR at which the system is likely to operate: at a low SNR (a high-noise environment), (c) would give fewer bit errors, but at a high SNR (a) would be best. Curve (d) tails off slowly and does not reach zero. This type should be avoided in most cases, since it will produce bit errors even at very high SNR. Armed with this knowledge, the results in Figures 4.2-4.6 can be correctly interpreted. Figure 4.2 compares the different modulation schemes when no coding is used. These curves were produced with the model in Figure 3.1.
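The shape of such a BER curve can be reproduced outside the Simulink model with a few lines of code. The sketch below is a minimal, illustrative Python example, not part of the project's model: it simulates uncoded BPSK over the no-fading (AWGN-only) channel described above and measures the BER at several SNR points. The bit count and the SNR range are assumed values chosen for demonstration.

```python
import numpy as np

def bpsk_ber_curve(snr_db_range, n_bits=100_000, seed=0):
    """Measure the bit error rate of BPSK over AWGN at each SNR point."""
    rng = np.random.default_rng(seed)
    bers = []
    for snr_db in snr_db_range:
        bits = rng.integers(0, 2, n_bits)          # random message bits
        symbols = 2 * bits - 1                     # BPSK mapping: 0 -> -1, 1 -> +1
        snr_linear = 10 ** (snr_db / 10)           # Eb/N0 from dB
        noise_std = np.sqrt(1 / (2 * snr_linear))  # per-sample noise sigma
        received = symbols + noise_std * rng.standard_normal(n_bits)
        decided = (received > 0).astype(int)       # hard-decision demodulation
        bers.append(np.mean(decided != bits))      # fraction of bit errors
    return bers

snrs = range(0, 11, 2)
curve = bpsk_ber_curve(snrs)
```

Plotting `curve` against `snrs` gives a monotonically falling curve of the well-behaved shape (a) in Figure 4.1; repeating the experiment with a higher-order constellation shifts the curve to the right, which is the kind of effect Figure 4.2 compares across modulation schemes.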
It can be clearly seen that as the complexity of the modulation scheme increases (along with", "label": 1 }, { "main_document": "said \"the income to be generated is designed to provide benefits to local communities for their protection efforts and provide incentives for continued and enhanced protection\" (WWF, 2004, p4). Environment officials in Namibia certainly viewed the decision as a reward,, stating; \"We appreciate this recognition of our conservation achievements\" (Stoddard, 2004). Other conservation organisations do not see consumptive use as appropriate and opposed WWF's position, stating that \"the pressure from the pro-utilisation movement, the hunters and the traders to increase quotas or re-open trade was plain for all to see\" and \"was hard to swallow\" (Travers, 2004). Child (1991, p160) says often there are intentional efforts to undermine consumptive utilisation by linking it to poaching problems, particularly with endangered species. This link was been seen in the debate about rhino hunting in IFAW's (2004) response that; \"This was a poor decision which may well lead to increased poaching of threatened populations\". Despite these concerns and objections, Koro (2005) reports that the CITES 2004 meeting approved all proposals to hunt or trade in species that were 'scientifically proven' to be out of danger, even if not always in full. Therefore CITES evidently views consumptive utilisation as appropriate within its remit to control trade in endangered species. Consumptive wildlife utilisation is here, and is probably here to stay as long as the economic benefits from wildlife are greater than from competing land-uses (Du Toit, 2002). So, it seems to be appropriate for local communities who can financially benefit from it. However, it may not be an appropriate tool for conservation agencies which rely on goodwill for donations, and may alienate supporters with consumptive utilisation. 
On the question of whether it is appropriate for wildlife, there are those who believe the hunting rates are low enough to be sustainable. However, the consequences are uncertain, as "without better information on harvest rates and the impact on trophy species population densities over time, it is unclear whether safari hunting is ecologically sustainable" (Wilkie & Carpenter, 1999, p. 343).
Few left basal crepitations, but there was no percussive irritability. No other significant signs. AP CXR revealed a slightly enlarged heart and lower left zone shadowing. Abdomen soft and non-tender. No masses felt. Bowel sounds present. No hepatosplenomegaly. Not formally assessed. All limbs moving. The admission notes revealed that Mr He also had cold peripheries and a reduced oxygen saturation, which may indicate a degree of anaemia. Significant findings from the systems examinations included splinter haemorrhages, anaemia and the detection of a previously undocumented MR murmur. These all point towards a diagnosis of IE, as previously suggested by the history. Although some basal lung crepitations were heard indicating consolidation, Mr Mr In addition to the physical diagnosis, it is important to consider the psychosocial implications this condition may have upon Mr Despite his history of cardiac disease, Mr His reported anxiety and low mood during the past 6 weeks obviously highlights his concern for his health, and therefore it would be important to consider the impact a complication such as severe heart failure would have upon his lifestyle and mental health. Fortunately, however, there does appear to be a significant family support network available to care for him in the event of chronic complications. In suspected cases of infective endocarditis, further investigation is essential in order to confirm the diagnosis. Some of the necessary investigations are described below, along with actual results obtained from Mr It is, however, important to remember that the majority of these investigations (with the exception of Echo and ECG) are non-specific, and are instead used to confirm the presence of sepsis rather than IE itself. A normochromic, normocytic anaemia
The value of art should be its use value, not its exchange value; in accepting art as synonymous with its price we lose its real value: the potential to inspire and provoke, the chance for culture to be freedom as Kant hoped. This is not an accident; again, the critical theorists are intuitively correct in their view of mass media, albeit a miserable one. Herbert Marcuse calls this both 'technological domination' and 'repressive desublimation'. Monopolies and large-scale state intervention have reduced people's freedoms, extending their control in ever more secretly manifested ways, exacted more efficiently and more successfully than ever before. A combination of authoritarianism and capitalism was present in the 'brave new world' after World War II. The blossoming of a more complete Fordism diminished the prospect of any working-class revolt. This affected culture profoundly: high art and low art seemed to be amalgamating, and not for the better. 'Culture' was being eroded by "repressive desublimation" that "manifests itself in all the manifold ways of fun, relaxation and togetherness which practise the destruction of privacy, the contempt of form, the inability to tolerate silence, the proud exhibition of crudeness and brutality" (Marcuse:1995:203). Society is becoming merely 'one-dimensional', allowing the economic base, and its institutions, to assert more control over social relationships. Technological progress governs our lives; the celebration of humanism and romance is seen as backward in an age of science and 'reason' (Marcuse:1964:58). Technology, and more astutely its misapplication, is a derogatory force against a wealthy and rich culture. Technological domination owes much to what post-structuralists, as well as the Frankfurt School, have called 'instrumental reason'. "Instrumental reason, then, is concerned solely with practical ends...
instrumental reason separates fact and value: it is concerned with discovering how to do things, not what should be done" (Craib:1992:213). Marcuse sees that the needs of society have curtailed and replaced the needs of the individual. Adorno postulates a similar idea in his chapter on 'Free time' (Adorno:1991:chapter 8). The premise of both their arguments is that sexual energies, as well as other 'life-energy' or 'libidos', are harnessed by the system to endorse its values or transferred to more productive outputs. Adorno sees 'free time' and hobbies as essentially a means of supporting the structure of the work environment. The very idea of hobbies, to Adorno, is depressing: we create ourselves through our work; hobbies are something we attach to ourselves rather than allowing them to become part of our self-identity (Adorno:1991:163). This functionalist conception of 'free time' can be transposed to the 'culture industry'. Culture's function is to offer some light relief from the toil of work, yet also to sanction the current structures and institutions in existence. Having broadly discussed the core concepts that run through critical theory, the 'culture industry' can be explored in depth. "[T]he expression 'industry' is not
Research showed that for FBH the most suitable forms of entry would be franchising or full ownership. Franchising is easier in developed countries, suggest Contractor and Kundu (1998). Wholly owned subsidiaries require the greatest resource commitment, though they enable the highest control and the lowest technology risk (Osland There is a need for high personal control at FBH to ensure consistency, while in the case of franchising the franchisee has the power of operational control and the franchiser does not have a great deal of direct control. Root (1994, cited in Hollensen, 2004) states that there are three different rules companies follow when entering a foreign market: entering all foreign markets in the same way is 'naive'; choosing the most effective mode each time is 'pragmatic'; and investigating all the possible entry modes before making the choice that yields maximum profit follows the 'strategy rules'. FBH's presence in several countries already suggests the experience and capital to be able to invest. Some of the existing properties have been franchised, but for the new Canadian subsidiary ownership might be more suitable to begin with. The employees also need to represent the company itself rather than the country they work in. There is a plan to relocate some professionals from the UK-based hotel to Canada in order to train the new, locally hired staff and transfer the knowledge and policy of the hotel, which is necessary in the case of a transnational organization according to Hollensen (2004). Subsequently there is the question of whether to acquire an existing building or to proceed with foreign direct investment. Acquisition might cause some difficulty, as FBH is a standardised hard brand in terms of quality, which will be explained later.
However, it might be a possibility to buy an establishment where a different industry operates, which would be a 'conglomerate' form of acquisition (Root, 1987, cited in Hollensen, 2004), in order to be able to establish its own brand. Nevertheless, this is time-consuming and requires high investment. FBH should consider building a hotel, with the opportunity to design it according to the latest fashion, create its own image, and implement 'state of the art' technology. The market entry strategy suggested is a wholly owned subsidiary, a greenfield investment. Demand determines the potential clientele. From a geographic perspective, target markets are visitors from the UK as the home-country market, Canadians as
One should note that arguments against Ross' deontology also work as arguments for particularism: as these theories hold completely different views of morality, refuting one makes the other look more plausible. To begin with, one needs to clarify the position Ross' deontology promotes. Generalism, and Ross' deontology as a form of it, maintains that "rationality of moral thought and judgements depends on a suitable provision of moral principles". When an agent faces a moral judgement he ought to look to moral principles, which serve him in deciding whether an action is right or wrong. Thus it can be said that in the generalist perception moral principles serve to connect, or to specify the connection between, the non-moral properties or features of a situation and its moral properties, which are then translated into moral obligations. Ross, in contrast to absolute generalism, makes a distinction between pro tanto When an action upholds a pro tanto duty then it is a pro tanto moral reason in favour of the action. In practice, whenever we make a moral decision its overall polarity, i.e. whether it is overall right or wrong, depends on the pro tanto moral reasons that are relevant to the case. According to Ross, to figure out whether an action is overall morally right or wrong, we need to weigh the pro tanto moral reasons against each other. Which of the moral reasons counts most depends on the circumstances. Ross uses the term prima facie duties (for more details see Jonathan Dancy, 'An Ethic of Prima Facie Duties', chapter 18 of Peter Singer (ed.), Now for Ross the polarity of the overall moral judgement varies and depends on the features of the case, as they affect which pro tanto moral reasons are relevant. He is then, in this sense, a particularist about overall moral judgements / reasons.
Polarity of the pro tanto reason, on the other hand, does not change and Ross is", "label": 0 }, { "main_document": "Nitrogen is the most important nutrient influencing grass production, and its supply is largely under the farmer's control (Frame 2000). Naturally there are not enough nitrates in the soil to optimise plant growth, resulting in nitrates being applied to grasslands to achieve higher yields. The health concerns that have been linked with nitrates being leached from agricultural land in the last 30years have forced the Government and farmers to reduce nitrate leaching where possible. Nitrate ions are the soluble form of nitrogen that is necessary for photosynthesis and protein development in the grass plant. Nitrates (NO Water percolating down through the soil easily removes nitrate ions, leaching them from the soil. The diagram to the right represents the mass balance of nitrogen leaching from the soil. It shows that nitrogen leaching is not the only 'inefficiency' of this system. Most nitrates are in the top 20cm of the soil (2.41kg However, being close to the surface makes it easier for the nutrients to be washed down the soil profile. The loss of nitrates through the soil is a concern for two main reasons: The productivity of the grassland will suffer, firstly because of the loss of nitrogen, but also because the leaching of nitrogen from acidic sources (acid rain) accelerates the loss of important nutrient cations. There is a considerable cost of replacing the lost nitrates and nutrients. The main grass-growing period is between spring and early autumn (April to September), and it is essential that nitrates are available for this time, to enable the plant to grow, (produce a high yield). The nitrate is converted into ammonium ions in the plant, which combine with carbohydrate to synthesize amino acids, used to form proteins. Proteins are needed for growth. The environmental problems caused by nitrate leaching concern everybody, not just the farmer. 
Among the problems are: issues that result from nitrates entering groundwater - agricultural land is responsible for 70% of nitrates in water supplies (DEFRA 04); drinking water - significant problems can arise when nitrates drain into water supplies and contaminate water intended for drinking, and there is a cost to limiting the effects of nitrates (cancers, 'blue baby syndrome'); the 1980 Drinking Water Directive sets a limit of 50 mg/l for nitrate in public water supplies (DEFRA 03); and eutrophication - this occurs in surface waters (lakes, streams etc.) and is damaging to ecosystems. The farmer must try to prevent nitrates leaching through the soil, to limit the economic and environmental costs that this would generate. The main inputs/sources of soil nitrates are: Grassland systems involve the production of slurries and manures, which are recycled to contribute to the nutrient supply. To know how much manure should be applied to the land, the manure should be analysed for its N content, the soil type recognised (to establish whether it is free-draining) and the requirement of the crop known, in order to determine the quantity to be spread. For example, slurries have 40-60% more soluble N than solid manures (Pain 2000), and are therefore more easily
Here are the initial requirements:
- The hardware version should be able to work in parallel, with the aim of allowing real-time simulation; these parts should be based on dedicated hardware
- It should be able to support any number of neurons
- The hardware should be expandable to suit networks of different sizes
- Different types of neurons, such as Regular Spiking neurons and Resonators, should be supported, and the variety and number of each in any system should be editable by the user
- The ratio of excitatory to inhibitory neurons should be variable
- Inhibitory neurons should not be able to change to excitatory, and vice versa
- Floating-point inputs and outputs should be entered into and taken from the system
- The SNN model should use axonal delays, spike-timing dependent plasticity and the Izhikevich internal potential equations
- There should be a user interface allowing selection of network choices, input of data and viewing of network output
Should each 'neuron' have its own piece of hardware? The first of the hardware projects looked at had a separate chunk for each neuron, and had areas of memory dedicated to each. MASPINN, on the other hand, was simply an accelerator: it had only one of each component and was able to run any number of neurons. As it is important that this hardware/simulator be able to simulate any number of neurons, it should not have a physical limitation - a low one, anyway - on the number of neurons that can be run on it per board. As the simulations for one neuron can be run at a far greater speed than is required for real-time simulation anyway, the number of neurons run per board could instead be based on the speed of the operations and the amount of memory on the chip, and the extent to which these allow a given number of neurons to run in real time. Implementing this is, however, not strictly necessary for the first version, and thus shall be left till later. Should each 'neuron' be able to directly call each other neuron?
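The Izhikevich internal potential equations named in the requirements can be stated in a few lines of code. The following is a minimal, illustrative single-neuron Python sketch, not part of the hardware design itself: the parameters (a, b, c, d) are the standard Regular Spiking values from Izhikevich's 2003 model, while the constant input current I and the Euler time step are assumed values chosen for demonstration.

```python
def izhikevich(I=10.0, t_max_ms=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron with constant input current I.

    Returns the list of spike times in milliseconds."""
    v, u = c, b * c                      # membrane potential and recovery variable
    spikes = []
    for step in range(int(t_max_ms / dt)):
        # Izhikevich (2003) update equations, forward-Euler integration:
        #   v' = 0.04 v^2 + 5 v + 140 - u + I
        #   u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike: record the time, then reset
            spikes.append(step * dt)
            v = c                        # membrane potential reset
            u += d                       # recovery variable pushed up
    return spikes

spike_times = izhikevich()               # tonic spiking for this constant input
```

Each spike resets v to c and increments u by d, which produces the spike-frequency adaptation characteristic of Regular Spiking cells; a Resonator or any other neuron type in the requirements list is obtained simply by changing (a, b, c, d).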
Neither of the projects looked at had direct connections between neurons. Having such connections would only really have any value or purpose in an analogue system or a digital system where each neuron was physically connected - something that could only happen in a system with very few neurons. In a digital system", "label": 1 }, { "main_document": "to focus on an aspect of the situation to the exclusion of all others (Slater, Hocking and Loose, 2003). This, in turn, aids their ability to learn new ideas. With regard to scientific concepts, a body of research exists which implies that the understanding of different concepts can emerge at different times, based more on what the subject topic involves specifically, than the child's aptitude for knowledge in general as it matures. For example, Massey and Gelman (1988) showed that four year olds are aware of whether or not different types of animate and inanimate objects could move up a hill by themselves. However, Rosengren et al (1991), demonstrate that at four, but more at five years of age, children understand how animate objects grow as they mature, and how inanimate objects do not. Simons and Keil (1995), and Gelman and Wellman (1991), show that four and five year old children understand the differences that exist in how animate and inanimate objects are supposed to look on the inside as opposed to the outside. Krist, Fieberg and Wilkening (1993), found that capabilities in a task testing knowledge between distance and speed emerge later, at five- and six- years old. At this age, children were able to accurately apply some spatial knowledge to such situations, while ten-year-olds accurately link distance and speed in their explicit judgments. It is appropriate, then, to teach five- and six- year-olds the basic principles of speed, acceleration, force and energy. 
Inagaki and Sugiyama (1988) illustrate that, in their sample of children aged from four to ten years, children become gradually more able to attribute human, animal and physical properties to the appropriate objects. This research indicates that even the simplest of abstract concepts about the world should ideally be taught when children are at the uppermost end of the appropriate age bracket and can produce more accurate attributions. It also shows that an understanding of the differences between the living and material worlds exists from an early age, and that an altered science curriculum incorporating these findings could build a foundation for when the child learns more complex things, for example the differences between the respiratory systems in plants and humans. Such research indicates at what age a child best understands scientific rules involved in separate disciplines in science, namely biology and physics. It also demonstrates that concepts are, in fact, comprehended at different ages. For example, one would assume that the experiments of Massey and Gelman (1988) and Rosengren et al (1991) are linked, as they both entail a sound knowledge of the principles of biology, specifically features of animate and inanimate life. However, the onset of understanding occurs at age four in the former research but at age five in the latter, showing that cognitive development occurs more as a function of concept than of domain. This illustrates how the Piagetian premise of domain generality can work against the cognitive progression of the primary school aged child. There is further research which demonstrates when a child should be introduced to certain scientific concepts. Hood (1995), on
Output per worker varies widely across countries: the USA's 1988 levels were 35 times higher than Niger's, meaning that ten days' worth of an American worker's production was only matched by a Nigerien worker after a year. Hall and Jones relate such disparities to the level of countries' human capital inputs, such as education investment (a lack of the latter may explain France's and the UK's lag behind the USA) and, in poorer countries, to economies' total factor productivity, which reflects both the state of their technology and how efficiently their capital and labour are utilized. Hall and Jones claim that TFP and human capital inputs are "fundamentally related to differences in [economies'] social infrastructures". As social infrastructures can provide incentives for productivity just as they can inhibit innovation by encouraging inefficiency and corruption, they heavily impact economic growth. Indeed, policy modifications in China (which greatly reformed its agricultural incentive structures since 1978). Countering the Solow model's assumption of constant TFP growth worldwide, Hall and Jones demonstrate that today's divergence is largely impacted by TFP disparities and by the quality of countries' social infrastructures. Hall, R.E. and Jones, C.I., "Why Do Some Countries Produce So Much More Output Per Worker Than Others?", pp. 82-84. Deng, K.G. (2000), "A Critical Survey of Recent Research in Chinese Economic History", Economic History Review, 53, p. 11. Bosworth, B.P. and Collins, S.M. (2003), "The Empirics of Growth: An Update", Washington, Iss. 2, p. 116. What determines such "social capability"?
New Institutional Economic Historians claim that In particular, the strength of institutions in limiting government power and in enforcing contracts determines the appropriability of an economy's returns, and therefore its investment levels. Acemoglu and Johnson demonstrate that institutional efficiency differs greatly across countries, depending particularly on the antiquity of an economy's state, on its government's democratic accountability, and, in former colonies, on the legal systems inherited from colonizers. These variations significantly impact economic growth. An economy's adherence to the rule of law (or the extent to which agents have confidence in and abide by society's rules and institutions), for instance, strongly correlates with productivity: Africa's and Latin America's low productivity levels thus coincide with poor RoL scores (-0.81 and -0.41 respectively). By contrast, accession countries that have undergone institutional reform under EU conditionality boast high RoL scores and improving growth rates. Such evidence suggests that variations in institutional quality highly influence economic growth, and may explain much of Bardhan, Pranab (2005), "Institutions matter, but which ones?", p. 501. Kaufman, Daniel, Aart Kraay, and Massimo Mastruzzi (2003), "Governance Matters III: Governance Indicators for 1996-2002", p. 4. Crafts, Nicholas & Kaiser, Kai (2004), "Long-term growth prospects in transition economies: a reappraisal". This implies that countries with inadequate institutions may be doomed to perpetual underdevelopment and divergence, as institutional change is extremely difficult to achieve.
Overall, a passionate hostility developed between Karlstadt and Luther, as well as between Luther and M Roland Bainton, Dickens, Lortz, Rival forms of Evangelicalism, namely Zwinglianism and Anabaptism, also developed. These movements were independent of, but not completely unrelated to, Luther's form of Protestantism. Although Zurich was the centre of Zwinglianism (under Ulrich Zwingli), its influence spread throughout southern Germany. Zurich itself had become a Reformed city by 1523. Zwingli was a humanist and was also more radical than Luther: he believed art and music had no place in religion. Zwingli held the view that the presence of Christ in the Communion was spiritual, while Luther thought Christ was physically and materially present. Another area of division between the two was that Luther wanted distance between church and state, while Zwingli promoted a strong relationship between the Church and the city council. Zwingli was also, like M The Anabaptists arose out of Zwingli's movement. They were, however, almost polar opposites. They wanted a return to primitive Christianity and felt that Christians should relinquish violence, private possessions and drunkenness. They were at work in cities such as Strasbourg as early as 1523. Conflicts began to rage between the various groups. For example, in Zurich Anabaptists were subject to the death penalty. Bainton, The Peasants' War of 1525 showed that Luther's ideas, although purely religious in nature, could be given a social turn. The Peasants' War generally lacked a clear programme and coherent leadership. 'Some groups wanted a peasant dictatorship, some a classless society, some a return to feudalism, some the abolition of all rulers save the pope and the emperor.' There was, in general, no co-ordination between the various bands. Only one person could have led and united the movement: Martin Luther. The peasants felt themselves drawn to him.
However, the manifesto that they drew up, In his eyes, it was a cover for a more revolutionary aim. He also disagreed with armed conflict. M The peasants were no match for the armies of the princes and the war ended in slaughter, with M Up to 100,000 peasants were killed in battles throughout Germany. Luther was seen by the peasants as a traitor to their cause and consequently, 'Lutheranism lost much of the productive power it would have enjoyed from contact with the true native soil.' Luther simply did not believe that the vast majority of the peasants were motivated by religious reasons. Ibid, p. 275 Lortz, Clearly, therefore, Luther lost a significant degree of support when he himself failed to support the peasants' movement, which began to drift towards more radical Protestant alternatives such as Karlstadt and M Towards the end of the first phase, it was clear that Luther's influence was waning, as the Reformation began to split into subdivisions. Dickens, Conversely, the fact that Luther was the only man considered able to unite the peasants' movement merely underlines the fact that he was still unrivalled in
Whilst the challenge of reducing health inequalities is not intractable, and will require considerable political will and not inconsiderable amounts of money, it is surely one worth pursuing for the benefit of society as a whole. As Frank Dobson, the then Health Secretary, said in 1997: "Inequality in health is the worst inequality of all. There is no more serious inequality than knowing you'll die sooner because you're badly off."
However, consumers still need to purchase basic goods such as food, which means that Asda, along with other retailers, should still have customers walking through their doors. Last year Asda won the Grocer 33, a supermarket price comparison run by The Grocer magazine, for the 8 This means that they should be in a strong position during tough retail climates. As a result of this, I would conclude that the economy is an important factor in the marketing mix; consumers are looking to get good value for money, so a marketing mix involving price and promotion would be important. Asda recently introduced buy-one-get-one-free (BOGOF) promotions on selected items, although they are changing this to be two for Supermarkets, especially Tesco's, are attempting to expand rapidly. It has emerged recently that Tesco's has a land bank of 260 sites (Lyons 2006), at least 60 of which have been granted planning permission (Grimston 2006). Asda had been banking on a successful takeover of the Safeway chain of stores, but the Competition Commission determined that Morrison's was the only bidder allowed to take over the Safeway chain, due to concerns over competition. As a result of the difficult planning permission process, supermarkets have turned their attention to the convenience sector in an attempt to aid their aggressive expansion plans. Both Tesco's and Sainsbury's have already moved into the convenience store format by taking over existing players. Tesco aims to increase the number of small format stores to 1200 in the next 10 years (Grimston 2006) from its current total of 706 made up from 160
In a normal individual, the percentage of forced vital capacity that is expired in the first second (FEV However, for a patient suffering from obstructive lung disease, the FEV In serious airway obstruction, as often occurs in acute asthma, FEV The vitrograph is a good guideline for the primary confirmation of obstructive and restrictive lung disease. Characterized change of FEV My FEV However, this may also be linked to errors due to leakage of gas; more repeats could be done to give a more precise measurement. My metabolic rate was There are a number of factors affecting the metabolic rate, such as exercise, occupation and the specific dynamic action of protein. Exercising can cause a dramatic increase in metabolic rate: a well-trained athlete can raise his metabolic rate to 2000% of normal. Also, the type of work you do varies the energy requirement and therefore has an effect on the metabolic rate. Furthermore, after a meal is ingested, the metabolic rate increases. This mainly results from certain amino acids, derived from the proteins of the ingested food, directly stimulating the cellular chemical processes. My high metabolic rate was possibly due to the meal I had approximately two hours before the experiment. Also, people who perform regular exercise or sports tend to have a higher metabolic rate.
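The FEV1/FVC screening logic described above can be sketched as follows. The 70% cutoff and the litre values in the examples are illustrative assumptions (the 70% figure is a common rule of thumb), not measurements from the experiment in the text.

```python
def fev1_fvc_percent(fev1_litres, fvc_litres):
    """Percentage of the forced vital capacity expired in the first second."""
    return 100.0 * fev1_litres / fvc_litres

def classify(fev1_litres, fvc_litres, cutoff=70.0):
    """Crude screening rule: a ratio below the cutoff suggests an
    obstructive pattern (air leaves the lungs slowly); a normal ratio
    with a reduced FVC would instead suggest a restrictive pattern.
    The cutoff is an assumed rule of thumb, for illustration only."""
    ratio = fev1_fvc_percent(fev1_litres, fvc_litres)
    return "obstructive pattern" if ratio < cutoff else "ratio normal"

print(classify(3.5, 4.4))  # ratio ~80% -> "ratio normal"
print(classify(1.8, 3.9))  # ratio ~46% -> "obstructive pattern"
```

Note that this ratio alone cannot detect restriction, where FEV1 and FVC fall together; the vitrograph trace and absolute volumes are needed for that, as the text's table of lung volumes indicates.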
The provisions for different types of employee representation in the new regulations must also be addressed. Hall summarizes that 'the law prioritises information and consultation via the representatives of independent trade unions where they are recognized by employers, and that otherwise information and consultation should take place with representatives elected by employees under regulated balloting procedures' (2005: 113). It seems evident then that the successful implementation of the ICE regulations would largely depend on sufficient expertise among the employee representatives balloted by their fellow workers. But Hall goes on to point out that 'it (...) seems odd that the Regulations do not provide the right to time off for employee representatives to undergo training' (2005: 116), a view not at odds with that of Davies et al (2004: 151), who conclude that 'it is difficult to place much confidence in the quality of the consultation - or therefore in its potential for transforming work-place cultures - where (...) the representatives are inexperienced'. It is then clear why this feature of the regulations' form might not lead to significant implications for employment relations, or at least not highly effective ones. It should be noted, though, that however highly prescriptive the new legislation is, it incorporates the vision that, even with regard to form, there is no 'one size fits all' solution. A survey conducted by Welfare (2004: 11) illustrates that there is no uniform solution to consultation and information, with 'arrangements ranging from purely direct forms of consultation with individual employees at some organisations to a patchwork of different arrangements with different groups of employees in some larger ones.' Various company-specific arrangements do exist, but the above-mentioned points of the regulations' form affect all of them.
The new regulations introducing minimum standards for information and consultation of employees should, by definition, benefit employees first and foremost. They are part of a more general framework aiming, as the DTI states, at 'introducing fair standards for employees and a better framework for industrial relations designed to promote both fairness and flexibility in the workplace' (2002: 8). One of the key elements in the new legislation with regard to employee participation is that it would bring forward mechanisms for the representation of non-union members, alongside union members (already represented by their relevant union), as the new regulations have to cover all employees. This would mean that, as the Chief Executive of the Work Foundation (the independent advisor on workplace issues) notes, 'these [ICE] proposals have the The question arises whether this general potential will be realized to bring the much-promised advantages closer to employees. Several factors may hamper
To generate a model that provides a coherent model for This is done by starting with a large model, including most of the variables appearing in the question sheet provided, and then eliminating the insignificant ones using individual t-tests and discarding variables with small coefficients. To reach the final model, other tests were applied: a test for overall significance, a test for structural change, inspection for outliers and the R Also, a few new variables were created from the data provided to gain more coherent results. Different functional forms were also tried, but there was no evidence of them being significant, so the final model is indeed linear in variables. The test for overall significance provided positive results, so the model can be used. The final regression chosen is as follows: the value of the mark gained in the first-year statistics exam depends on the student's ability, the number of A-grades gained at A-level and the average hours spent per week working on statistics during the year. Also, additional dummy variables were used to describe the additional effects of studying pure economics, The same argument applies to the person having either an A in her A-level mathematics or a 7 in IB mathematics, to whether the person is British or not, and to the effect of the years on the regression, The variable Even though the R The R When the variable ability was included the R The variable The variable Even though in question 2 c) it is found that the effect of All the variables listed above have positive values, and intuitively the model sounds rational. The more able the student is, the more likely he is to do well in his statistics exam.
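The general-to-specific strategy described above (fit a large OLS model, then drop regressors whose individual t-statistics fall below the 5% critical value) can be sketched as follows. The data, variable names and coefficients here are entirely hypothetical, not the assignment's dataset.

```python
import numpy as np

# Hypothetical data: exam mark driven by "ability" and "hours", plus one
# irrelevant regressor that the t-test should flag for elimination.
rng = np.random.default_rng(0)
n = 200
ability = rng.normal(size=n)
hours = rng.normal(size=n)
noise_var = rng.normal(size=n)  # irrelevant regressor
y = 50 + 5 * ability + 3 * hours + rng.normal(scale=4, size=n)

def ols_t_stats(X, y):
    """OLS with an intercept; returns coefficients and their t-statistics."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])          # residual variance
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))  # standard errors
    return beta, beta / se

X = np.column_stack([ability, hours, noise_var])
beta, t = ols_t_stats(X, y)
keep = np.abs(t[1:]) > 1.96  # 5% two-sided critical value (large sample)
print(keep)  # ability and hours pass; an irrelevant regressor typically fails
```

In the backward-elimination procedure the text describes, the regressor with the smallest |t| below the critical value is dropped and the model re-estimated, repeating until all remaining regressors are individually significant.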
Also, having A-grades (or equivalent) at A-level (or equivalent) and spending more hours on average per week on statistics during the year have positive effects on the The value of For every extra unit of the variables will increase the value of When the variable The other dummies are not included, since some of them fail the t-test in the large regression or when tried with other variables. The variable When the variable The dummy variables The values obtained were The additional effect on So the additional effect of doing economics
Even though I feel the care assistant role is integral to nursing and of great worth, I felt that by neglecting my needs as a student nurse preparing for future practice my self-esteem and confidence were reduced. Farrand Placements are perceived poorly by students if they feel their learning is limited, as this often results in lowered levels of confidence in their own ability (Evans, 2001). The quality of placements is of great importance in optimising the development of student nurses, plays a critical role in confidence and preparedness for future practice, and influences the degree to which the learning experience is regarded as positive. Positive experiences and perceptions of self-worth are related to how the student felt that they and their role were valued (Edwards Newly qualified nurses, in a study by Gerrish (1999) assessing preparedness for practice, largely attributed lack of confidence in their ability to poor placements that did not allow opportunity to develop skills needed for future practice. Again I feel this relates to me: by refusing to acknowledge my role and learning needs, the placement limited my development. I feel that this has set me back, as I have not had the opportunity to consolidate and further my learning, and thus I do not feel able to fulfil the expectations of third-year students, which in turn has adversely affected my confidence. The transition associated with a new placement setting is a difficult time, with energy being focused on fitting in, fulfilling expectations and adapting to new responsibility. The resultant fear and anxiety felt at this time can impact negatively on confidence and student learning (Evans, 2001; Levett-Jones Learning environments that make students feel threatened reduce self-belief (Dix & Hughes, 2004).
If students feel safe enough to admit their limitations, express their fears and feelings, assert themselves and not fear disapproval, they are more likely to feel confident (Biley & Smith, 1998; Glen & Parker, 2003; Thompson, 2002). The need for
The conflict between certainty and flexibility continues to exist, undermining the existence of 'one right answer' in every case.
Furthermore, attempts to change prejudiced attitudes through propaganda have met with variable success: the media is utilised as a source of information, and thus propaganda is most successful when dealing with a poorly informed public. However, Cooper & Jahoda (1947, as cited in Tesser, p. 500) argued that prejudiced people develop a highly effective mechanism to evade anti-prejudice media messages. Furthermore, if this information contains mixed messages, for example the reporting on the war in Iraq, which is highly debated in public, the effect of propaganda may be diluted. The problem of intergroup relations and prejudice is multifaceted and beyond the scope of this essay; the challenge of reducing acts of prejudice such as terrorism and political violence is also multifaceted. Returning to Allport's quotation, now that we know more about the origins of hostility, can we employ our intelligence effectively in controlling its destructiveness? According to the research and findings of social psychologists the answer is yes, but not by one single approach. Prejudice and intergroup hostility must be challenged at all levels, in both a direct and an indirect manner. By challenging prejudiced norms and attitudes in a constructive way, using tools such as cooperation, acceptance, the media and education, the acts of violence that arise from these prejudices should in turn also be reduced. In this sense social psychologists have contributed to public knowledge
It determines the impact customers could make on the industry. There are Buyer power is high as Fresh Breath Though toothpaste is Therefore, Fresh Breath do It determines the impact suppliers could make on the industry. This applies to all industries that require supply in any form (labour, materials, etc.). Suppliers could influence industries in many ways, e.g. by raising the price of raw materials. Unfortunately, there is no information available on supplier concentration. And it will be very hard to find substitutes once the Fresh Breath toothpaste is formulated. It is assumed here that Fresh Breath would carry on as a 'virtual' company once it enters the toothpaste market, and thus the cost of changing suppliers would not be very high, as competition is fierce in the market; the suppliers' threat of forward integration would not be likely, as there are great risks they would have to take. Suppliers have to spend a lot of money on assets, they are specialised in making toothpaste, and toothpaste companies would be their only clients. Therefore it is believed that the suppliers' power threat to Fresh Breath would be medium to low. This indicates how great the competition in the market is. The following industry characteristics could change the level of intensity of rivalry. It can also be affected by the competitive advantages companies pursue (more on this is mentioned in the Recommendation). There are already lots of competitors in the market and the industry growth rate is estimated to be around 2%. Fixed costs would be low, as Fresh Breath is assumed not to manufacture their toothpaste themselves, and storage costs could be high in this country. Switching costs and product differentiation are low, as explained above. Exit barriers would not be great, as the patented formula is the only asset of the company. Industry shakeout is another problem Fresh Breath has to consider, as 'no industry keeps on growing forever' and growth will slow down or stop one day, if not decline.
That would lead to fierce competition and a price war. In general, competition in the UK toothpaste market will be moderate to intense over the next 10 years. A Chinese proverb says that by understanding both the strengths and weaknesses of ourselves and our enemies, winning is 'inevitable'. Having looked at the toothpaste industry's likely future and estimated the problems, it is important to identify Fresh Breath's strengths and weaknesses to counter the problems they may have to face. SWOT Analysis can be divided into External and Internal factors. External factors (i.e. Opportunities and Threats) have already been covered by Porter's Five Forces; this part of the analysis will concentrate on internal factors, i.e. Strengths and Weaknesses. After
Conferences: The Institute organises conferences on particular issues such as climate change or Ecological Impact Assessment, or on regional issues. Conferences are open to members and non-members. The presentations are uploaded on the Institute's website and the proceedings are published to disseminate information on the outcomes. Publication: Publication is one of the main services offered by the Institute. 'In Practice' is a quarterly journal produced by the Institute, and provides information on meetings, events, technical papers and public news. Although contributions on any aspect of related issues are accepted, the aim of the journal is not to publish scientific research papers but only to ensure that members are well informed of up-to-date news. Some publications are only available to members, such as The Professional Issues Series (PIS), which is a collection of guidance documents on practical work relating to the professions. In addition to the production of guidelines and conference proceedings to support and enhance professional standards, the Institute works to increase students' interest in ecology and environmental management. The recent outcome of this objective is the career guidance for students at school or university, 'Rooting for a Career in Ecology and Environmental Management?', published in collaboration with the British Ecological Society (BES&IEEM, 2001). The Institute plays an important role in the provision of guidelines that give practical advice to professionals conducting assessments or surveys relating to ecology and environmental management. 'Ecological Impact Assessment Guidelines for the United Kingdom' (Fig 2) is a recommended procedure for the ecological component of Environmental Impact Assessment, which has been made available free of charge as a downloadable PDF on the website since July 2006 (IEEM b, 2006).
The Guidelines have been formulated by members of the Institute and leading EIA professionals in the U.K. The significance of this piece of work is that there has been a very broad consultation with various organisations in the development of the Guidelines, including U.K. Government Departments, U.K. country conservation and environment agencies, the Heritage Service and National Parks and Wildlife Service and the Environment Agency in Ireland, and many non-Government organisations (CIB, 2006). Hence, the Guidelines are being promoted as 'best practice' in the UK Government's Circular - Planning for Biodiversity and Geological Conservation: a Guide to Good Practice (CIB, 2006; IEEM b, 2006). It is expected that the Guidelines will provide practical methodology and procedures for practitioners to carry out the evaluation of ecological features, not only as a component of an assessment prior to development but also as evidence to support appraisals on environmental issues (IEEM b, 2006). The provision of standard protocols will improve the quality of data collected in ecological analysis by
It is worth noting that all these variables are even more important for developing countries, where basic levels of health and education are yet to be attained, and thus even necessary provisions for a minimal level of subsistence, like daily food consumption, are directly influenced by the level of income. Also, the income distribution in these countries shows highly skewed patterns, with the top 20% of the population receiving 5 to 10 times the income of the bottom 40%. Literacy rates remain strikingly low at 45% among the least developed countries, and infant mortality rates run as high as 10 times those in developed nations. Life expectancy in 1998 still averaged only 48 years in the least developed nations, compared to 63 years for other developing countries and 75 years for developed countries. In Asia and Africa over 60% of the population barely met the minimum caloric requirements necessary to maintain adequate health. For the year 2001, certain human deprivation indices show that almost a billion people in poor countries were without access to safe drinking water, 766 million did not have access to health services and 2.4 billion lived without sanitation facilities. In 1995, the number of physicians per 100,000 people averaged only 4.4 in least developed countries compared to 217 in developed countries. 90% of the people afflicted with HIV in the world live in LDCs. By the year 2010, life expectancy in Namibia, for instance, is expected to fall from 70.1 years without AIDS to 38.9 years with AIDS. This is only a brief insight into the myriad of problems which need to be tackled in the world today. It is true that cross-sectional data may over- or underestimate true causal effects. Moreover, prior studies based on past data may show different effects due to different incentives, shocks that may have hit the economy at the time, and new market developments and reforms that have since come about, making them slightly less comparable. 
However, there are better studies that convey to policy makers the grave importance of improving health standards for economic development. The relation between health and economic development can create either a vicious or a virtuous cycle. There is a debate over whether or not the government should subsidise health
For instance, grape juice and wine are both made of grapes, and it is possible to say wine is a part of grape juice. However, flour-bread relations are not the same case, since flour is not quite part of bread but rather one ingredient of it. Therefore the latter relation is less prototypical. In conclusion, meronymy is simply defined as a part-whole relation. However, each unit of meaning is related differently. Therefore, how a meronymic relation is structured can only be found by looking carefully at each relation with the prototypical features. Word Count: 980 words
The effectiveness of mechanical weed control practices such as harrowing or hoeing is largely regulated by the subsequent crop-weed competition (H These procedures must strike a balance between the level of weed control and the negative impact this has on the crop and the soil structure (Kurstjens et al, 2000). Between-row weeding is normally carried out using tractor-pulled hoes, harrows, cage weeders or rotary cultivators (HDRA, 2004). The timing and frequency of any weeding activities are related to the crop being grown. Those with dense, rapidly expanding canopies may require only one early weeding operation before they shade competing weeds, whilst slow-growing crops or those with smaller leaves may require frequent operations (ibid). The cost of such control is by no means a small influence on producers. The labour and fuel costs of running machinery reduce profits, and may rise further in the future as 'green taxes' come into force to control carbon emissions. More complex mechanical finger weeders and brush weeders have value for intra-row weeding, uprooting weeds rather than covering them with soil; however, soil type and condition may influence their effectiveness (B In order to minimise the competitive effects of weeds, the crop needs to establish quickly and gain a competitive advantage for light and nutrients over any weeds. The height of the mature plant (combined with plant spacing) will also determine the amount of shade cast on surrounding weeds. Data available for wheat indicate that short-stemmed cultivars (often selected as they are less prone to wind damage) produce less shading and therefore allow greater weed growth. Aldrich & Kremer (1997) and Bridges & Chandler (1988) suggest that four times as many annual grass weeds occurred in short-stemmed varieties. Low-growing cultivars also produce less crop residue, meaning there may be more weeds in the next crop in a low-till regime through reduced ground cover (ibid). 
Successive monocultures mean the same cultivation practices at the same", "label": 1 }, { "main_document": "knew that the clock was working fine because in our program we have set conditions to deal with these type of scenarios. System Requirement You should be able to input the initial time in hours minutes and seconds HH:MM:SS. Outcome The time can be set at via the keyboard at the start of the program on the terminal window. We found that you can enter invalid time on the terminal. This caused the programme to stop working. This problem can be resolved in future by having a condition in the program that only lets you input the time in a certain format. System Requirement Acceptable spoken time must be heard when a key is pressed on the terminal. Outcome When you press the enter key the time is spoken at the current time. The time was recited at moderate speed by the loud speaker. The time was heard clearly. System Requirement The time should increment like a digital clock and it should be shown on the terminal. Outcome The time was updated on the terminal window every second. Because we used the 'free running' mode Timer 1, it produces a continuous series of evenly spaced interrupts that is ideal for accurate time. Also because the frequency is not affected by the variations in the processor interrupts, it's still able to keep track of the time when you press a key on the keyboard to get the time. Although we had to use an estimate for the Timer 1 it was still accurate to 1 sec. An improvement with the accuracy of the time can be made here if we used an oscilloscope to find the number of clock cycles per second. When you enter a time on the terminal the format should be HH:MM:SS. The problem was that if you had a value that was less than 10, then a 0 is required in front of the number to make it an acceptable format. We had to place a condition so that if the number was less than 10 then place a 0 in front of the number. 
When we strung the allophones together we had to test the output to check that it was heard clearly. This testing took a lot of valuable time because compiling the code and loading it was slow. To speed up the process we disabled Timer 1. This enabled us to carry out tests more quickly, as the loading time was shortened. Time in the lab was limited, so we had to work relatively quickly through the tasks, and this caused mistakes such as missing semi-colons or parentheses. These were fixed eventually when we looked back through the program, but they did stop us from progressing further. We would have liked to write a method that fixed the problem of being able to set incorrect times. Because we did not have enough time in the lab, we were not able to carry this task out. There could be an optional continuous spoken time, which would speak every minute or hour. A stopwatch could
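The missing input check described above could take a form like the following. This is an illustrative Python sketch only (the lab code itself would be written for the microcontroller toolchain), and the name `parse_time` is hypothetical:

```python
import re

def parse_time(text):
    """Validate an HH:MM:SS string and return (hours, minutes, seconds).

    Raises ValueError for malformed or out-of-range input -- the check
    that was missing from the terminal input handling.
    """
    match = re.fullmatch(r"(\d{2}):(\d{2}):(\d{2})", text)
    if match is None:
        raise ValueError("time must be entered as HH:MM:SS")
    h, m, s = (int(g) for g in match.groups())
    if h > 23 or m > 59 or s > 59:
        raise ValueError("time fields out of range")
    return h, m, s
```

The regular expression also enforces the leading-zero rule mentioned earlier: a single-digit field such as "9:5:59" is rejected rather than silently accepted.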
The market is a typical oligopoly, with many players fighting for their share of the market. Thus the customers stand to gain from this situation: the consumers in this market are very powerful, dictating terms to the manufacturers, unlike a few monopolistic markets where the companies dictate terms to the market. On the BCG Matrix, I would place the car market in the Star grid, because people will always need some means of transporting themselves. Hence the car market will remain a market filled with opportunities, especially with no alternative fuels in sight and public transport failing to gain popularity in the huge developing countries, where cars have a big market due to lack of convenience. The car market will remain in the star phase for a long time to come.
BEHAVIOUR OF COMPETITION
As mentioned earlier, there are too many players in the market and there is cut-throat competition, with each manufacturer trying to kill the other. The stiff competition is forcing companies to reduce prices and eventually lose their margins. The consistent drop in margins is even leading to bankruptcy and huge losses for some companies. Daewoo had to close down because it went bankrupt in 2002. Rover MG faced huge losses and is still in trouble. Mitsubishi in Japan is facing problems of bankruptcy. Ford has been making a net loss worldwide for the last few years. Thus a lot of companies are facing problems, and hence the entire car market is also characterised by a lot of takeovers. Daewoo was taken over by General Motors when it went bankrupt. Mitsubishi is to be taken over by GM gradually. Many European brands were bought over by Ford, such as Land Rover and Volvo. 
Thus the default route to success in this market is to bring many brands under the flag of a bigger company; these companies then gain access to all the segments of the voluminous car market by exploiting the different core competencies of the various brands in different segments. Thus the key to profit making and leadership in the automotive market is about playing the volume
Results showed that the dye was able to pass through the outer layer of the skin after microneedle arrays were applied to the skin surface. This approach is also adopted in ref.13, in which the authors used hydrophobic dyes to determine if the skin permeability had been increased. The difference in this study, however, is that the authors used heat-stripped epidermis from human cadavers, and placed the epidermis on tissue paper to provide support. The disadvantage of this method is that tissue paper may not adequately simulate the actual support of the skin. An actual skin puncture test, as shown in ref.8, is far more accurate in terms of needle insertion mechanics. In ref.2, the authors tested the ability of the microneedles to deliver transdermal drugs. This was achieved by microinjecting insulin into diabetic rats, and subsequently monitoring the blood glucose level. This study confirmed the ability of hollow microneedles to deliver insulin to diabetic rats, by inserting a single needle for 30 minutes and applying an insulin flow pressure of 10 psi. The results showed that 32 The study also fails to address the amount of pain which may be experienced from needle insertion. The authors of ref.3 demonstrated needle insertion using a cruder method, which involved using human subjects and piercing the skin with extra-long microneedle samples, designed to penetrate deep enough to rupture blood vessels. This enabled the authors to determine whether penetration had indeed occurred, since the pressure in the vessels of the epidermis forced blood to the surface. This study allowed the authors to perform tests at different locations on the bodies of two subjects, where skin thickness varies.
However, it can be seen that the fringes disappear after six peaks from the centre, and from this we can deduce that the coherence length of white light is 3 (Hecht). The coherence length of mercury light could not be determined, as fringes were visible throughout the whole of the micrometer's range. This implies the coherence length of mercury light is larger than the apparatus can measure. The central fringe in the white light system is black, although one might expect it to be white, since the two beams should be in phase with each other for any wavelength at this point. In the case of the Michelson interferometer, the first beam undergoes internal reflection in the beamsplitting plate, whilst the second beam undergoes external reflection, with a consequent change of phase. The beams at this (zero path difference) point are therefore not in phase, and destructive interference occurs. As a result, the central interference fringe is black. It was not possible to obtain a black field of view for the mercury light due to the imperfect flatness of the mirrors. It was also impossible to obtain the coherence length of mercury light with this apparatus. This is because the coherence length of mercury lamp light (average wavelength 546.1 nm) is 0.03 cm, which is longer than the maximum extent of the mirror movement; this explains why fringes were visible throughout the range. The photodiode will have a sensitivity dependent on the frequency of the incident light. Therefore, it may detect red light more easily than blue light. This has implications for the intensity pattern seen in figure 7. Whilst we worked under the assumption that the intensity shown applies to all frequencies equally, it may in fact show a red bias, in which case the red fringes will be more visible to the detector; this could be the reason for finding the coherence length to be nearly twice as large as the accepted value. 
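The figures discussed here follow from the standard estimate that coherence length is roughly the wavelength squared over the bandwidth. A minimal sketch, with illustrative bandwidths: the ~1 nm mercury linewidth is an assumption chosen only because it reproduces the 0.03 cm quoted above, and is not stated in the report.

```python
def coherence_length(wavelength_m, linewidth_m):
    """Estimate coherence length l_c = lambda**2 / delta_lambda (metres)."""
    return wavelength_m ** 2 / linewidth_m

# Mercury green line at 546.1 nm with an assumed ~1 nm linewidth
l_hg = coherence_length(546.1e-9, 1e-9)       # ~3e-4 m, i.e. ~0.03 cm

# White light: ~550 nm centre, ~300 nm bandwidth -> of order a micron,
# i.e. only a handful of fringes either side of zero path difference
l_white = coherence_length(550e-9, 300e-9)
```

This is consistent with the observations: white light shows only a few fringes around the centre, while the mercury fringes persist across the whole micrometer range.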
If a thickness t of gas with refractive index n is introduced into one beam, the optical path through the gas segment is now nt rather than t. The change in optical path due to the addition of the gas is therefore (n - 1)t. This introduces (n - 1)t / λ extra waves in the path of the beam in question. If m fringes pass as the gas is admitted, then (n - 1)t = mλ. So by varying the amount of gas in the chamber and counting fringes, the refractive index of the gas can be determined. An airtight chamber was placed in the path of one of the split laser beams, of wavelength 632.8 nm. The chamber was attached to a vacuum pump and evacuated. A rubber bladder filled with the appropriate gas was attached to a tube leading into the chamber. This gas was allowed through the joining tubes and into the chamber before the experiment. The chamber was then re-evacuated. This procedure readied the apparatus for the experiment, ensuring that the bladder, tube and chamber would contain no gas other than that under investigation. A photodiode was set to record the intensity of the recombined laser beam. The vacuum pump was turned
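The fringe-counting relation can be inverted to recover the refractive index. A hedged sketch: the single-pass form (n - 1)t = mλ is assumed by default, and `passes=2` covers a Michelson arrangement where the beam traverses the chamber twice; the fringe count and cell length below are made-up illustrative numbers, not data from this experiment.

```python
def refractive_index(fringes, wavelength_m, cell_length_m, passes=1):
    """Refractive index of a gas from a fringe count.

    Each fringe corresponds to one extra wavelength of optical path:
        passes * (n - 1) * t = m * wavelength
    so  n = 1 + m * wavelength / (passes * t).
    """
    return 1.0 + fringes * wavelength_m / (passes * cell_length_m)

# e.g. 50 fringes counted with a 632.8 nm HeNe laser and a 5 cm cell,
# double pass as in a Michelson interferometer:
n_gas = refractive_index(50, 632.8e-9, 0.05, passes=2)  # ~1.0003
```

The double-pass factor matters: forgetting it would overestimate (n - 1) by a factor of two.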
Some examples are the annual budget, the introduction of milk quotas in the Community, or the creation of a common fisheries policy This function of the European Council was also recognised in the London Declaration. But in this case, the European Council stated that " [it] This struck a rather different note from the Stuttgart Declaration. Here the European Council is acting in its own capacity and not as a special formation of the Council. Although the European Council will conform to the decision-making requirements, there is some doubt as to whether the decisions are legally binding. The TEU clearly distinguishes between the European Council as an organ for providing the necessary impetus, and the Council meeting in the special composition of the Heads of State, implying that the European Council as such cannot exercise the powers of the Council Nevertheless, whether it is acting in its own capacity or as a special formation of the Council, the European Council has indeed a direct and formal role in Community decision-making. In the circumstances described above, the decisions of the European Council will indubitably have a direct impact on Community legislation. Hence under this section, we will rather focus on the impact of the unenforceable decisions, in other words those which are of a purely political nature. The decisions of the European Council are not isolated events but are seen as a very essential part of the process of Community decision-making. 
Indeed, the high political stakes and intergovernmental nature of some matters in the EU ensure a key role for the European Council, for example the setting of the annual budgetary framework, the setting of the agenda for integration, European and Monetary Union (EMU), and the negotiating of treaty reforms. Their high political sensitivity ensures that no other EU body apart from the European Council can deal with them Thus, the shape of the legislation on these matters will be heavily dependent on the pronouncements of the European Council. Some examples are the Maastricht Summit (1991) which led to the agreement on the TEU, the
If the price of these imports is cheaper than the price of State A's domestic furniture products because of lower production costs, then the government of State A would be acting unlawfully if it were to impose a new tax on imported furniture. Article 3 'National Treatment on Internal Taxation and Regulation', paragraph 1, states that 'the contracting parties recognise that internal taxes and other internal charges, and laws, regulations and requirements affecting the internal sale...of products...should not be applied to imported or domestic products so as to afford protection to domestic production.' Such a measure would be purely anti-competitive and hence a form of protectionism, which the WTO is trying to prevent. Paragraph 2 states that 'the products of the territory of any contracting party imported into the territory of any other contracting party shall not be subject, directly or indirectly, to internal taxes or other internal charges of any kind in excess of those applied, directly or indirectly, to like domestic products.' Therefore, unless an equivalent tax is applied to its domestically-produced furniture, the government of State A is forbidden by the WTO from imposing a new tax on imported furniture. On the other hand, if these imports were being dumped (Article 6 defines dumping as a situation in which 'products of one country are introduced into the commerce of another country at less than the normal value of the products') within State A's territory, then its government would have a remedy, compatible with its WTO obligations, to prevent this activity from occurring in the future. 
This remedy is described in Article 6 'Anti Dumping and Countervailing Duties', paragraph 1, whereby contracting parties recognise that dumping 'is to be condemned if it causes or threatens material injury to an established industry in the territory of a contracting party or materially retards the establishment", "label": 1 }, { "main_document": "of money and energy that people may be able to save through investing in alternative energy and conservation projects. These types of campaign therefore portray energy conservation as a chance to make a gain and in doing so encourage people to be risk averse. Prospect theory tells us that in order to be more successful, campaigners should instead show people how much they are losing each month by not taking the chance to invest in alternative energy and conservation. This means people will approach the decision from a position of loss and as prospect theory tells us that people are risk seeking for losses, they should therefore choose to invest in order to return to their This effect can also be seen rather importantly in medical decision making, for example a study carried out by Savadori, Lotto and Rumiati in 2002 found that patients after a heart transplant tend to become risk averse in relation to any activity that may be able to further improve their condition after transplantation (Savadori This is because when the patient is very ill with cardiac disease, their situation is framed negatively as there is a large difference in utility between their current state and their reference point of good health. This means they are in a state of loss and are therefore motivated to take risks in order to get closer to achieving their reference point. However once they have had the transplant they will feel better and therefore frame their situation more positively, changing their reference point to one of ill health. 
Therefore, with their new heart, they will feel that they are in a state of gain, and because the value function is more gradual in the positive domain, the difference in utility between the patient's current state and a state of full health will not motivate them to carry out any risky activity which may further improve their condition. They would also find very threatening any activity that risks returning them to a state of ill health, and therefore of loss. From these examples it can be seen that the concept of 'risk aversion' can be applied to a number of real-world situations; it can also be seen that knowledge of such a phenomenon can help to avoid the erroneous decisions made because of it. For example, if doctors discuss the actual percentage of risk involved in activities that may further improve a patient's condition after transplant, this may help to avoid patients' aversion to them (Savadori The concept of 'temporal discounting' can also be seen as useful in various real-world applications. It has been used in many cases to explain why people will often carry out "suboptimally impulsive behavior" and therefore sacrifice long-term gains for short-term pleasures (Kirby, 1997, p.54). As such, the concept can be applied in areas such as career decisions, health decisions and marketing. This is because hyperbolic discounting means that the value of something drops slowly at long delays and steeply at short delays; therefore, although we may be able to resist making a decision that we know is
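The hyperbolic shape described here is commonly written V = A / (1 + kD), where A is the reward, D the delay and k a discount rate. A small illustrative sketch: the rate k and the amounts are assumptions for demonstration, not values from the studies cited.

```python
def hyperbolic_value(amount, delay, k=0.1):
    """Subjective value of a delayed reward under hyperbolic discounting:
    V = A / (1 + k * D).  k = 0.1 is an illustrative discount rate."""
    return amount / (1.0 + k * delay)

# Value drops steeply at short delays...
short_drop = hyperbolic_value(100, 0) - hyperbolic_value(100, 5)
# ...but only slowly over the same interval at long delays:
long_drop = hyperbolic_value(100, 50) - hyperbolic_value(100, 55)
```

Because `short_drop` is much larger than `long_drop`, a reward that is preferred at a distance can be overturned by a smaller, sooner one as the delay shrinks, which is the preference reversal underlying the impulsive behaviour discussed above.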
When a hazard is found, a value based on the scale of danger should be given to the action software module. Based on this number, the correct action will be taken. The software will also have cases programmed into it. For example, the case where the elderly person is lying on the floor would send a trigger to the action module; or if the elderly person is lying in bed for over twenty hours, then action should be taken. This would help to deal with all situations. To best demonstrate the process of the system, an example should be given. An elderly person takes a nap every day from 4pm to 6pm on the couch. The image is sent through to the analysis module, which sends the following data to the decision module: person is lying down on couch, in living room. The decision module will prepare to send a high danger signal because the person is in a lying-down position. This could mean that they have fallen over. But it then checks the location of the person, which is the couch, and the time, which is 4.30pm. It then checks for this in its habits section. It will find a match because the person always takes this nap. So the signal is not sent and action is not taken. In this case it would have been wrong for the emergency services to be called. The following day the elderly person falls over in the kitchen, shouting the keyword "help". The time is 5pm and the cooker is on. The decision module receives the message: person is lying down on floor, in kitchen, keyword spoken. The system will check for a habit match, not find one, and send the following values based on the scale of danger. There would be a low value of two in the heat section due to the cooker being on. There would be a high value of seven in the elderly person's health section due to the elderly person lying on the floor. There would be a high value of nine in the sound section due to the keyword being spoken (these numbers are just examples of figures that could be given). 
These three values would be sent to the action section, where they are matched against the correct action. In this particular case a health worker would be sent for and the speakers would output the message, "Help is on its way." The cost can be broken down into two sections: the fixed development costs and the cost per unit. The fixed development costs are a one-off payment and the cost can be divided across all
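The habit-matching and danger-value logic walked through above could be sketched as follows. Everything here (function names, the habit table, and the values 2, 7 and 9) simply restates the illustrative figures from the example and is an assumed design, not the authors' implementation:

```python
# Hypothetical habit table: (posture, location, start_hour, end_hour)
HABITS = [
    ("lying", "couch", 16, 18),   # the daily 4pm-6pm nap on the couch
]

def matches_habit(posture, location, hour):
    """True if the observed situation matches a stored habit."""
    return any(p == posture and l == location and start <= hour < end
               for p, l, start, end in HABITS)

def danger_values(posture, location, hour, keyword_spoken, cooker_on):
    """Return (heat, health, sound) danger values for the action module,
    or None when the situation matches a habit and no action is needed."""
    if matches_habit(posture, location, hour):
        return None
    heat = 2 if cooker_on else 0
    health = 7 if posture == "lying" and location != "bed" else 0
    sound = 9 if keyword_spoken else 0
    return heat, health, sound
```

With this sketch, the 4.30pm nap on the couch (`danger_values("lying", "couch", 16, False, False)`) returns `None` and no alarm is raised, while the kitchen fall at 5pm with the cooker on and the keyword spoken returns `(2, 7, 9)` for the action module to act on.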
30 (1981) but more specific terms that do not necessarily have moral value, so too should we. Instead of calling actions wrong, we should use terms such as unjust, and should reserve moral decision making until we have her \"adequate philosophy of psychology\". If we then have a theory that deems justice to be a virtue, we would then be able to label actions right or wrong. As, in Anscombe's opinion, we do not have such a theory, it is not possible to call an act that is unjust wrong. I agree with Anscombe at this point, as there is no consensus on ethics at present; the more specific terms do help to clarify acts. This may make future theories easier to formulate and to see what should be deemed virtuous. In attempting to clarify this point with examples, however, I think that Anscombe makes a mistake. Anscombe attempts to show that while some things are always unjust, others are only unjust in certain circumstances. This would pave the way for a theory that, by making justice a virtue, would be able to avoid some of the criticisms that Deontology and Consequentialism face. Anscombe uses the example of someone being \"judiciously punished for something which it can be clearly seen he has not done\" p.38 (1981). She correctly identifies this as \"a paradigm case of injustice\" p.39 (1981). There is no way that someone could dispute that this is unjust. Any example where consequences would be better if the person was punished do not", "label": 1 }, { "main_document": "Supramolecular polymers are polymers composed of monomeric subunits held together by directional and reversible non-covalent interactions such as hydrogen bonding, π-π interactions or metal coordination. These polymers are produced through the use of self-assembling monomers, which have bonding sites incorporated within them. Apart from in biological systems, such self-assembling systems are mainly found in organic and non-polar solvents.
Interesting properties have been demonstrated using specific monomers such as ureido-pyrimidinones that can rearrange themselves to form very long-chained structures. Supramolecular polymers combine conventionally known polymer properties with reversibility and responsiveness, which will be discussed in more detail later. The field of polymer chemistry, which was previously restricted to macromolecular polymer species in which only covalent bonding holds the repeating subunits together, has become more developed due to the discovery of supramolecular polymers. Supramolecular polymers have attracted a vast amount of interest due to their unusual properties resulting from their unique bonding. Such properties include their spontaneous formation, which does not require an initial stimulus such as a catalyst; their dynamic and therefore reversible formation; and finally the limited number of termination processes that occur during their assembly, which means that the degree of polymerisation depends only on the strength of interaction between the subunits. This makes them a very valuable material. Many uses for these favourable reversible interactions have been developed in the past, but only when the structure was immersed in solution; after careful reconstruction of their bonding, these structures have been made to show polymer properties in both solid and aqueous states. Some of these uses are in smart films, hot melts and coatings. Supramolecular polymers are a type of stimuli-responsive polymer as they can display a change in properties in response to external stimuli such as changes in temperature, ionic strength or pH, the addition of biological and chemical analytes, and the presence of an electric, magnetic or mechanical field.
This can make supramolecular polymers more valuable to an analyst, as properties such as the degree of polymerisation, the lifetime of the chain, and its conformation are dependent on the strength of interaction within the polymer, which can be adjusted by changes to external stimuli, whereas traditional macromolecules do not display these changes. These systems provide the possibility of combining different monomers to create supramolecular copolymers, or instead producing branched or reversibly cross-linked polymers by adding monomers of higher functionality. Both techniques allow the resulting polymer properties to be adjusted by simple additions, in contrast to covalent copolymer production, which requires further synthesis. The formation of polymers that possess thixotropic character has been highly sought after: for example, if the structure is exposed to high stress, such as that experienced while processing a film, the polymer would act as a liquid, but when this stress is removed, as under the conditions experienced after processing, it would return to its original state. These materials have potential usage in the form of smart films, which are found in sensors, actuators, and electro-optic devices. A recently produced class of responsive metallo-supramolecular polymers that are held together", "label": 1 }, { "main_document": "moves away from the element and collides with other particles and the vessel wall. On collision energy is transferred from the particle to whatever it hits. At low pressure there are relatively few particles in the vessel, so any excited particle has the ability to bounce back and forth from the element to the vessel wall with little obstruction from other particles. This results in a very efficient heat transfer from the element to the vessel. At high pressure there are relatively many particles in the vessel. This means that any excited particle will collide with many other particles on its journey to the vessel wall.
This slows down the heat transfer from the element to the vessel as the energy is dissipated over many more particles. [Equations omitted: film temperature at ambient pressure; Rayleigh number; heat transfer via radiation; heat transfer via convection; Nusselt number; dimensional analysis.] The element is constructed from a copper tube with the heating element (Nickel-Chrome) mounted in alumina. The emissivity of 0.581 indicates that the surface of the copper has oxidised. The ratio of heat transfer at ambient pressure via convection to radiation is approximately 3:2. This is expected because an effective convection current system was set up in the ambient pressure atmosphere. Page 34 of the Heat Transfer notes states that for horizontal cylinders with Rayleigh numbers between 10 The Rayleigh number calculated here is only in the order of 10 The experimental procedure has some slight inaccuracies. The main one is the measurement of pressure via the manometer. The measurements are taken by eye from a wooden ruler with 1mm intervals. Therefore the readings are only accurate to the nearest millimetre; this inaccuracy is then multiplied by two, as readings are taken on either side of the manometer tube. The wooden ruler will also expand/contract with varying humidity. These inaccuracies were overcome for pressures ranging from 0.1-100 Torr as they could be measured accurately using the McLeod gauge.", "label": 1 }, { "main_document": "discuss how through non-decisions, for example by not promoting discussion on civil rights, those in power are able to maintain the status quo which is beneficial for themselves. This avoids any conflict of interests developing and any situation where an elite party A would have to exert pressure over the majority, parties B and C etc. The power being explained is less overt in its nature; however, it seems able to explain more accurately how power is often enacted within society.
Through agenda management those in power dictate which issues are brought to the foreground and therefore about which interests are primarily formed. This supports Marx's idea of a false consciousness within society, or merely the supposition that certain groups dictate policy and hold more influence than others. Schattschneider, E. E. (1960) For Lukes this theory adds a dimension to the concept of power, making it two-dimensional: observable through decisions as well as present in non-actions, as we have discussed. However, for Lukes the argument is marred somewhat in its perspective of power, in that it is too committed to the overt behaviour of parties and still shackled to the concept of power in relation to conflict of interest only in the political sphere. The theory does not acknowledge that power can exist without a conflict of interests, whereas for Lukes an effective use of power renders conflict unnecessary; such discrepancies of opinion should not arise. Does this mean that no group holds power over the said society, neither influencing nor shaping their decisions? Lukes, S (1974) Power, For Bachrach and Baratz power (personified) has a restraining influence in that it effectively manipulates what people conceptualise as their interest and also wins in any battle of the said interests, so that those with power benefit. However, power (or more accurately those who wield it) is yet to actively create preferences that conform to a desired outcome; it only reacts to pre-formed preferences of other actors. To illustrate, one could examine Hitler, who had power over his people to the extent of entrusting many of them to exterminate over 6 million Jews. Did Hitler achieve this merely by restricting their views and shaping their interests about the rights of Jews, or by forcing them to commit mass genocide?
No, they passionately believed in what they did as the best thing for society in general; Hitler both created and instilled a preference into Nazi Germany which insisted upon the annihilation of an entire race. Bachrach, P and Baratz, S.M. (1962) \"Two Faces of Power\", Perhaps this explains Lukes's own conceptualisation of power, labelled the 'three-dimensional' view. This sees power as thought control, the faculty to mould preferences to a desired outcome. It enables us to conceptualise power on more than one level and to recognise power in the non-political sphere as well as the political. Lukes draws away from a concept of power as resting on actual conflict, recognising how power can be identified in the absence of a conflict of interests where issues have been \"successfully averted\" Lukes also recognises the subjective", "label": 1 }, { "main_document": "turbulent and flame propagation is enhanced. The ignition delay period is approximately constant in time. Ferguson 2001 FERGUSON, C. R., \"Internal Combustion Engines - Applied Thermosciences\", 2nd Edition, John Wiley & Sons, 2001, p256 To ensure homogeneous burning of the air/fuel mixture, and therefore good flame growth, the induced turbulence & swirl/tumble within the cylinder head is critical. Once combustion starts the flame propagates throughout the mixed gases within the cylinder; typical burn rates for gasoline are 1.4~1.5m/s. Combustion takes a finite time, therefore ignition is advanced to BTDC. This means there is a pressure rise BTDC and the compression work increases (negative work). Higher pressure at TDC leads to higher pressures during expansion and hence an increase in positive work. Therefore there is an optimum ignition timing that is a compromise between these two effects. Plotting ignition timing against torque produces a curve similar to that shown in Figure 6.1.
The minimum advance is more critical than the maximum advance for maximum torque and so is generally quoted; this is known as MBT (Minimum advance for Best Torque). By retarding the ignition by a few degrees from maximum torque, the nitric oxide emissions are greatly reduced; this also reduces the engine's octane requirements FERGUSON, C. R., \"Internal Combustion Engines - Applied Thermosciences\", 2nd Edition, John Wiley & Sons, 2001, p336 At part throttle the combustion rate is slower due to lower intake velocities and reduced turbulence; this leads to partial combustion. Therefore the ignition timing is advanced to ensure complete combustion, ensuring clean emissions. This can be seen for Engine knock occurs due to the air/fuel mixture autoigniting; this autoignition produces shock waves which produce the \"knocking\" sound. One of the reasons the autoignition temperature is reached is too high a compression ratio. This limits the thermal efficiency of an IC engine, since thermal efficiency is a function of compression ratio, which in turn is limited by autoignition of the air/fuel mixture. Knock leads to higher peak pressures, and these in turn can cause pre-ignition; Figure 6.2 shows the pressure vs. crank angle graph during knock. Advanced ignition timing is also a cause of engine knock, since the heat transfer from the flame front raises the temperature of the unburnt mixture to the point of autoignition. This is more of a problem at lower engine speeds due to advanced ignition timing and there being more time for autoignition precursors to form. To prevent knock the ignition timing needs to be retarded, which leads to lower peak pressures. Modern engines use knock sensors which measure the pressure or vibration variations within the cylinder; when knock is detected the ignition is retarded.
The COV can be calculated using Equation 4.5, and was calculated for the Peak Pressure and IMEP over the 100 cycles that were recorded by the AVL Indimeter for each engine operating condition listed in Table 3.1. The results can be seen in Table 5.1. Graph 5.2 shows the pressure variation between 6 cycles during testing. It can be seen that between these 6 cycles the pressure variation is approximately 4 bar. The", "label": 1 }, { "main_document": "to have converged with rich countries. To oppose predicted convergence, Robert Barro in his recent papers observed that developed countries appear to have grown at a faster rate than the developing world over the last fifty years. The unexplained technological progress led to the development of endogenous growth theory in the late 1980s, with chief inventors Paul Romer and his adviser Robert E. Lucas, Jr. Endogenous growth theorists emphasize knowledge accumulation. A small improvement to the neoclassical model will change the original production function to Y = AK. This production function does not exhibit the property of diminishing returns to capital. One extra unit of capital produces A extra units of output, regardless of the quantity of K (constant marginal product). In the Solow model, saving leads to growth temporarily, but diminishing returns to capital make the economy reach a steady state. Hence, growth depends only on exogenous technological progress. By contrast, in the endogenous growth model, saving and investment can lead to permanent growth. If capital is seen more broadly, rather than as just equipment and plants, the effect of diminishing returns will disappear. Assume there are two sectors in the economy. In the first one, firms produce goods and services used for consumption and investment. The other sector produces knowledge. The production function of manufacturing firms will look like: The efficiency of labour is meant to reflect society's knowledge about production methods.
Instead of counting the number of physical bodies, we count the effective labour input; the function relating per-person output to per-person capital takes into account improved education and technology. EL stands for the number of workers L multiplied by the efficiency E of each worker. If we increase K and E by some multiple, the output of both sectors in the economy would be increased at the same rate. The growth continues endogenously because the creation of knowledge would never stop. While saving determines the steady-state stock of physical capital, the labour force in universities (u) determines the growth of knowledge. Both of them affect the level of income, but only u affects the steady-state growth rate of income. Human capital is the value of the extra earnings made possible by education. Educated people can produce more output and achieve a higher standard of living. In contrast to the neoclassical growth theory of unexplained exogenous technological change with no potential for policy effect, endogenous growth theory argues that policy measures can have an impact on the long-run growth rate of an economy, even if they do not change the aggregate savings rate. Romer and Lucas have built a model in which the key to growth is the development of ideas and transferring them to new goods. The incentives for the production of ideas rely on monopoly power that is reinforced by patents and copyrights. Efficient international trade makes sure that consumers can enjoy all the benefits of new goods from anywhere in the world. Government policies can positively influence growth rates by taxing consumption, subsidizing investment and research, and shifting resources from government consumption to government investment. For instance, investing more in infrastructure such as airports,", "label": 0 }, { "main_document": "to keep the temperature at a certain level. These schemes are primarily part of control systems.
Such control systems are known as; These are basically systems that uphold the health and safety of the buildings' occupiers. The systems involved in this are; These systems reduce the dependence on manpower. For example, only a single security guard or a pair is needed to protect a building instead of a team. This is because of intrusion alarms and the ability to observe the whole building via CCTV. These can also provide solid evidence for use in investigations. The HVAC/R system controls the Heating, Ventilation, Air Conditioning and Refrigeration. The UPS system provides power protection to entire buildings served by a single source and protects power-sensitive equipment from the detrimental effects of power disturbances such as voltage sags, surges, transients, momentary disruptions, and complete outages. Communication and information are an important part of everybody's life. For businesses, performance and in turn profits will be affected if these are not quick and easily accessible. These systems enable rapid communication and put information at people's fingertips. A Private Branch Exchange (PBX) is a telephone network within a building or buildings. A PBX will contain a number of outside lines for making external calls. PBXs are used because this is cheaper than connecting an external line to every phone in the building(s). Also, it is quick and easy to contact other users within the PBX by simply dialling a three or four digit number. A Public Address (PA) system may also be integral with a PBX. Cablevision is essentially high-speed Internet, cable television and digital voice service. The Internet is an important information and communication tool. Information can be obtained from websites, and communication takes place via email, instant messaging services and video conferencing. A videotext system displays information as text and simple pictures. The London Stock Exchange uses this system to display up-to-date information about stocks and shares.
Ethernet is a network of computers and other devices linked by cables and/or wireless transmitters and receivers. This enables users to send, receive and/or access information from other computers on the network. There are two types of network: Local Area Network (LAN) and Wide Area Network (WAN). A LAN would be within a single building, whereas a WAN would connect several different buildings, maybe miles apart. This mainly consists of software-based intelligent features. Word processing includes the electronic creation, revision, storage, retrieval, and transmission of correspondence documents. These documents can be transferred in a number of different ways, either over the building's Ethernet, the Internet, or in physical form by courier or post. Computer Aided Design (CAD) is an important tool that allows the users to make adjustments and modifications to a design without having to redraft. This is a great time and money saver. The type of building I have chosen to build shall be a domestic property. The location of the building shall be very important. Firstly I shall want it to be a rural location, but this might conflict with the second factor. The second factor is", "label": 1 }, { "main_document": "asked if their goal is to focus on what the customer wants or to provide existing products, the reply was \"How do we know what the customer wants?\". This leads one to believe that, although trying to offer what suits customer demand, the company does not have a technique for achieving it, simply providing what they imagine the customer would like. Another reason that leads one to believe Café There is only a board in the restaurant with specials, which the staff informs the customer of; however, never do they recommend or try to encourage the customer, because the management feels customers might feel persuaded.
They state the focus is on customer satisfaction in order to achieve the company's goals; however, customer satisfaction questionnaires are only used every few months and on a very informal basis; the staff talk to the customer and believe this is an efficient way to evaluate their satisfaction. Concerning feedback towards unsatisfied customers and their suggestions, these are only taken into consideration if they are in a majority; not all questionnaires are read, and final actions will always depend on the owner. When asked about societal issues such as noise levels and smoking in the restaurant, the reply was that no measures are being carried out; however, it is being taken into consideration for future thought. They also concentrate on serving products which everyone can access, and this leads one to believe they consider wide distribution (production orientation) an important factor. It seems there is an attempt to meet demand; nevertheless there is no knowledge of what demand is, and therefore too many approaches are adopted. It would seem sufficient if the company concentrated on target markets rather than trying to satisfy everyone, and adopted a marketing strategy in order to attract those customers. A realistic sales attitude with the goal of letting consumers know about the products available appears appropriate and, if not too aggressive, could be constructive for the business. Actions should be planned and there must be a type of organization when taking decisions; these should be pondered and evaluated in advance and thought of more seriously. A flexible protocol could be the start of a structured strategy.", "label": 0 }, { "main_document": "This essay is concerned with a description and evaluation of the agricultural development strategy of Mauritius in the period between 1970 and 2000. A brief introduction to the island is followed by the paradigms that underlay agricultural policies in Africa at the time.
Finally the different policies and their outcomes are evaluated. The small island state of Mauritius is located in the south-west region of the Indian Ocean. Mauritius' total area is around 2,040 km². Its climate is tropical, with attendant disease burdens and problems with tropical agriculture. Mauritius was visited by cyclones in 1970, late 1975, and early 1980; by drought in 1983; and by a cyclone in 1984, all of which severely damaged crops (Gulhati and Nallari, 1990). In 1968, Mauritius proclaimed independence. At this point in time the country was poor, with a per capita income of about US$260 (Worldbank, 2005). Moreover, the pattern of asset and income distribution was very skewed. The thirty large sugar plantations were owned by Franco-Mauritians and accounted for nearly half of the total cultivable area (Gulhati and Nallari, 1990). Since the 1960s Mauritius has witnessed rapid growth in population, as depicted in Table 1. Being a small island, Mauritius is subject to a range of problems that are associated with small economies. Economic activity is less diversified and a small domestic resource base limits the capacity for transformation. The small economy suffers from diseconomies of scale and its small domestic market limits the opportunities for economic development (Chernoff and Warner, 2002). Sugar is the dominant commodity in Mauritius both for the agricultural sector and the economy overall (Beintema et al., 2003). Mauritius's share in the EEC quota is 38 percent, under the Lomé Convention. The price guaranteed by the EEC to ACP countries is equal to the producer price for European beet sugar producers, which means that it is more stable and usually higher than world prices (Gulhati and Nallari, 1990). Mauritius' high dependence on one export commodity leads to vulnerability to external shocks and natural hazards (Chernoff and Warner, 2002). In this section the respective development paradigms prevailing in Africa at the time are described.
Since the 1970s the basic human needs paradigm prevailed. It gave priority to the allocation of programmatic and public investment resources. In particular, increased self-sufficiency was to be achieved by promoting national food production. The paradigm was part of \"the growth-with-equity era since 1970\" that was linked with the commodity boom and heightened rural inequality. It argued that improving the welfare, education, technical knowledge, and active participation of all people would increase both productive capacity and actual production more than growth strategies that rely on trickle-down for transmission of benefits. Subsidies for basic foods and redistributive policies were the chosen instruments (Gulhati and Nallari, 1990). From 1979 the Mauritian economy was marked by recession, inflation, and sharply increased unemployment. A structural adjustment (SA) program was formulated in conjunction with and supported by the World Bank and IMF (Bowman, 1991). The paradigm underlying SA programs saw emerging agricultural and overall development problems as the result of artificially distorted", "label": 0 }, { "main_document": "by measuring the reaction as a function of absorbance over time, we can calculate the concentration of CV at that time, and thus by various plots work out the rate constant. By putting OH- in vast excess of CV, it can be treated as constant: If the reaction is zeroth order with respect to CV then a plot of Absorbance vs time should be a straight line. If a plot of -ln(Absorbance) vs time is straight then the reaction is first order with respect to CV, and if a plot of 1/Absorbance vs time is straight then it is second order with respect to CV.
This is due to the different integrated rate laws; first order rate law: ln[CV] = ln[CV]0 - kt; second order rate law: 1/[CV] = 1/[CV]0 + kt. To determine the order of reaction with respect to OH-, the logarithm of (5) can be taken to give: Thus by plotting log kobs against log[OH-], the order with respect to OH- can be found. From the rate constants at different temperatures, by use of the Arrhenius equation, the Activation Energy (Ea) can be found by plotting ln k against 1/T. The kinetic salt effect is based on the principle that adding inert salt to a system will change the rate of reaction but play no part in the reaction itself. If the two reactive species (A and B) coming together to form the reactive complex have like charges, adding inert salt increases the rate of the reaction, whereas if the two reactive species have different charges, adding inert salt decreases the rate of reaction. The formation of a single, highly charged ionic complex from two less highly charged ions is favoured by high ionic strength, because the new ion has a denser ionic atmosphere and interacts with it more strongly; as such, the reactive intermediate complex is more stable than it would be in solution without inert salt, and this increases the rate of reaction. For ions of opposite charge coming together to form the reactive complex intermediate, the charges on the complex cancel; as such, the higher the ionic strength of the solution, the less favourable the interactions with the solution, which decreases the rate of reaction. [4] This is the theory behind the kinetic salt equation: log k = log k0 + 2A zA zB √I, where zA and zB are the charges on the two ions and I is the ionic strength. Within the Debye-Hückel limiting-law region, the gradient gives 2A zA zB. The interaction between solvent and solute is key and critical to determining the kinetics of a reaction, in organic chemistry especially, such as the stabilisation of carbocation intermediates by an electron-rich donor solvent, or the competition of two nucleophiles. The solute-solute interactions also have a part to play in this effect.
Linear plots of -ln(Absorbance) vs time were obtained, establishing from the integrated rate laws that the order with respect to CV is first; (6) shows that a plot of log kobs vs log[OH-] gives the order with respect to OH-. Microcal Origin software calculated the gradient as 1.026; rounding to the nearest integer means the order with respect to OH- is one also. Thus equation (2) can be updated to: Rate of reaction = k[CV][OH-]. The observed rate constants were found at different temperatures and tabulated in Table 2. As expected, the rate constant increased with temperature due to the greater internal energy", "label": 1 }, { "main_document": "the material. The lorries can take any one of three available points to get the goods checked. If no point is free, then the lorry has to wait in a queue for the check-in area. After receiving the satisfactory check letter, the final stage is the off-loading stage. There are three off-loading bays designed to transfer all the material in the quickest possible way. The lorry proceeds to one of the three off-loading bays. If no bay is vacant the lorry has to wait in the queue. After off-loading all the material, the lorries either leave again for the next trip (they would leave through the exit gates and, if it is busy, they would have to wait in a queue) or, if it is their last trip, they move to the final point, which is the washing zone. If the washing zone is busy they would have to wait in a queue until a point becomes free. After washing, the lorries are parked in the parking zone. The layout of the plant allows up to a maximum of 10 lorries in a queue, except the queue for the entrance, which could usually have any number of lorries because it is outside the plant area. The biggest congestion is seen at around 10:30 am at the weighbridge zone and check-in points. The management has plans to add some extra check-in points and some off-loading points in order to enable all lorries to finish their work by 4:30 pm.
Under current operation the lorries usually finish by about 5:30 pm. The motivation to do this project came from having personally worked in the firm as a Software Development trainee and knowing the problem they were facing at their north Delhi plant. 17 Ranjeet Nagar, New Delhi-110008, India. Website: The objective is to determine the number of check-in points and off-loading points required so that all lorries finish their tasks by 5pm. The maximum number of check-in and off-loading points allowed is 5. (All aspects of the model were continuously refined, including objectives. The report presents the final version of the model.) The main report is 14 pages with 10 pages of appendices. The lorries arrive full. Lorries are never refused at Entry or Exit points. The check-in of the material is always satisfactory. Every operation point has only one dedicated worker. The capacity of the queue is more than the actual number of lorries the plant has. There are no breakdowns of the equipment or lorries. Data obtained on the time it takes to travel to the respective vendor and back to the plant are sufficiently accurate for the problem. Data obtained on the time to perform the operations at the plant are sufficiently accurate for the problem. Travel times from the Parking Zone to the Exit Zone and from the Off-loading Zone to the Exit Zone are equal. The operation of only one day is modelled in this project because the amount of work, vendors, and pattern of work do not change on a day-by-day basis. Breaks taken by staff members are not modelled in this project. In real", "label": 0 }, { "main_document": "further indication that Blair is not only addressing fellow politicians, an expert audience, but the whole nation. There are two recurrent features in the recount. One is the thematisation of adverbial phrases of time at the beginning of paragraphs and sentences (lines 42, 62, 66, 67, 70, 72, 74, 76, 89, 92, 95, 97, 102, 106, 112, 113, 122).
This choice structures a potentially confusing history in a very overt way and assigns 'relative prominence' (Clements 1979 in Brown and Yule 1983: 134) to the time dimension of the conflict, creating the impression of a long history. The other recurring feature is a claim-counterclaim pattern (McCarthy 1991: 79-80) made up of the constituents \"declaration\" or \"full and final declaration\" (lines 55, 56, 65, 71, 75, 92) and \"false\" (lines 57, 73, 94, 98). Together with other lexical choices strongly suggesting falsehood, like \"blanket denial\" (line 58), \"game\" (line 59), \"undeclared\" (line 63), \"defected\" (line 77), \"denied\" (line 81, 101), \"revealed\" (line 84), \"prohibited\" (line 90) and \"lies, deception and obstruction\" (line 115), the recurring theme of \"declaration\" followed by \"false\" is designed to leave the addressees with a deep mistrust of Saddam Hussein's regime and without any hope for a diplomatic solution. The repetitive structure and vocabulary iconically mirrors the repetitive history, aimed at making the addressees feel the same impatience for a solution to text 1 that the PM and his supporters feel about the armament conflict. The overall structure of text 1 resembles a 'recycled Problem-Solution pattern' as discussed by Hoey (2001: 130). He describes the basic Problem-Solution pattern as: In text 1 however, the Response (i.e. the diplomatic effort to make Hussein disarm) is not followed by a Positive Result or Evaluation (i.e. Hussein's disarmament) that would bring the pattern to a close. Instead, the Response is followed by a Negative Evaluation (i.e. that Hussein's disarmament is disproved), which causes the pattern to be 'recycled', i.e. resumed from the Problem as the new departure point. Repeated Negative Evaluation then causes the text to become circular. 
To sum up, text 1 includes various devices designed to make it memorable, like structural and lexical parallelism, short and simple sentences, chronological order, and clear signposting at the beginning of paragraphs through thematisation of time adverbials and direct questions (lines 15-17, 28, 124). Text 1 also includes a number of features with an interpersonal function, aimed at making the addressees feel sympathy and trust for the addressor, like collective-identity pronouns, concessive rhetorical moves, and modal expressions suggesting certainty. Text 1 is also designed to make addressees aware of the importance of its content, through emotive vocabulary and modal expressions suggesting urgency. All these features, jointly with the propositional content, are supposed to work together in order to persuade the addressees to adopt the addressor's point of view. Text 2 is much longer than text 1; therefore, given the time and space constraints on this paper, the analysis can only include the most relevant aspects. The article starts, without an overt introduction, with a paragraph summarising the present and potential future situation regarding North", "label": 0 }, { "main_document": "Santos and C Ratna Kapur, 'Revisioning the Role of Law in Women's Human Rights Struggles' in Saladin Meckled-Garcia and Basack Cali (eds), Of particular relevance, the Hardt and Negri \"multitude\" conception imagines an alternative global order, to be manifested in part through a project of re-thinking the mainstream political form of democracy in a global world. 
Caleb Smith and Enrico Minardi, Interview with Michael Hardt (North Carolina, 5 March 2004) question 2 < Chamsy el-Ojeili and Patrick Hayden, A peoples' law perspective facilitates the re-appropriation of 'law' primarily through the recognition of the 'crime of silence' and a re-claiming (by subaltern, less powerful singularities) of judgment, authorship, control and action in relation to specific violations as well as in relation to the wider complex global interactions, processes and institutions of the present hegemonic world order. Nayar, above no 25, 321-322. The primary issue for judgment is the accountability of the UN for health violations of the Roma community in the IDP camps administered by UNMIK. The Tribunal process commences with a repudiation of the normalcy of 'expected' and 'inevitable' delays, inconveniences and lack of funds in a post-conflict situation administered by the UN, and provides a platform from which the Roma can reclaim their histories, future and 'truth' by re-framing their treatment by UNMIK from a discourse of 'mere misfortune' to one of judgment of violations. Thus, the Tribunal aims challenge the 'crime of silence' in relation to the health violations of the Roma in the IDP camps and, as a wider concern, seeks to situate the violations within a broader global movement that examines the systematic, structural violation against minority communities resulting from a privileging of other interests (for example, 'development', 'nation-building', 'democracy', 'security' and the interests of transnational corporations) over human rights relating to health and life. The Tribunal is a forum for the Roma to resist the dominant ideology by reclaiming the power or right to author their version of the 'law' and to define the structures and nature of social relationships conducive to a life of security and welfare. 
The dominant legality promises, through myriad human rights treaties, international and national legislation, a world order of equality and compassion. In relation to the specific situation, multiple human rights standards exist that are applicable to the Roma in this context For example, the International Covenant on Civil and Political Rights, the International Covenant on the Rights of the Child, the International Covenant on Economic, Social and Cultural Rights, guidelines in relation to IDPs, Declaration on the Rights of Persons Belonging to National or Ethnic, Religious and Linguistic Minorities, among others. In light of this and as steps taken within the current legal framework were unable to assist the Roma effectively, the Tribunal is an attempt to re-think the 'law' and to generate imaginations for new political action. The adopted discourse does not derive its validity from any formal source of law, nor from any validation by institutions of power but instead derives from the assertion of the power of peoples' voice and judgment. Nayar, above no 25, 325-326. The", "label": 1 }, { "main_document": "women in his novel 'Heart of darkness'. At first glance, Conrad appears to consider the role of women as unimportant, as they are barely mentioned in the story. However, a closer, perhaps feminist reading of the text may consider that it shows women to symbolise and indeed, maintain civilisation. As they are not present in the jungle to regulate behaviour, the rampant masculinity leads to inevitable chaos. The obvious difference between the two key women in the play represents the duality of womanhood at the time. Kurz's black mistress could represent the heart of the jungle, a primal, passionate figure that is an intrinsic part of nature. She is described as \"savage and superb, wild-eyed and magnificent; there was something ominous and stately in her deliberate progress\". 
This woman is referred to as 'she', which could be a reference to 'she - the one who must be obeyed', creating an idea that women are all-important. Her opposite, Kurtz's wife, is white and spiritualised, intent on maintaining the 'ideal' of her husband even after death, \"who was not his friend who had heard him speak once?\". She portrays the old Victorian image of femininity and a deep spirituality, \"the sound of her low voice seemed to have the accompaniment of all other sounds, full of mystery, desolation and sorrow\", whilst the mistress shows a new aspect. The female characters are not given names, which gives the impression that they are strictly in the background and unimportant to the men, but it is interesting to note that it is a woman who puts the whole story in motion, as Marlow's aunt is the person who recommends him for the trip. This story could be interpreted as representing the conflicting thoughts about women at the time, as their importance in society began to be considered and acknowledged. D. H. Lawrence felt that it was important for the novel always to be tackling new propositions and evoking fresh feelings. He was intrigued by the human subconscious and the primal instincts of possession and killing. He also had specific views on sexuality and women, on how they should dress and conduct themselves. His novella 'The Fox' explores the thinking of two females, as he wishes to understand the workings of their minds. He presents the idea of two women living alone together as unusual. They intend to run a poultry farm together, but it is altogether unsuccessful, and he implies that this is because a man is not there to help them. This could possibly be a criticism of lesbianism, as he insinuates that this type of relationship has no possibility of working out. His female characters do not appear to have a great deal of strength, despite their determination. 
Banford is physically weak and her jealousy and manipulative tendencies only work against her and make her seem like a stereotypically petty female. Although March is described as being \"the man about the place\" and \"more robust\", she still cannot kill the fox and easily falls prey to Henry's charms. The way in which she cannot control herself", "label": 1 }, { "main_document": "This article attempts to analyze the relevance of concept of 'Developmental States' and 'Competition States' in the East Asian States. It firstly defines the meanings of these two concepts. Then it investigates the role of state in East Asia State in the early stage of development and shows that they are the 'Developmental States'. And it notes that 'Developmental State' are being transformed in the globalizing world by investigating the international and domestic effect. It therefore concludes that the transformation from the developmental state to competition state is the best description and the change is diverse in different countries due to their different economic structures. In addition, it gives some limitation of the competition state concept. Between 1960s and 1980s, East Asia was the most fast-growing region in the worlds. To explain the growth in this region, the concept of developmental states which emphasizes on the importance of role of states is popularly used. In 1993, World Bank used this explanation to appreciate East Asian Miracle. In 1997, the World Bank changed to criticize the outdated state intervention of East Asia. In fact, the international organization changed their attitude, because the state is being changed by globalization. See World Bank, See World Bank, There are hot debates in contemporary IPE concerning the role of states in the globalizing world, and the states in East Asia are the popular case study. In this debate, there are two schools of thought: Hyperglobalizers and the Skeptics. 
The former (such as Strange) finds that the growths of global production and markets cause the retreat of state. And the latter school (such as Cameron, Garret, Swank, Rodrik & Weiss) argues that the state increases its intervention (such as social insurance) to offset the increase of the risk in the globalizing world. But the former and latter overemphasizes the international force and domestic force respectively. The former ignores the historical background of state including the state-society relations; while the latter neglects the nature of structure and the contribution of the globalization. Therefore, to study the effect of globalization, we should deploy both International Political Economy (IPE) and Comparative Political Economy (CPE) into analysis. This paper uses the transformationalist school of thought (concept of competition state invented by Cerny) to find the way of state transformation and prevent the prejudice on either decrease or increase of state intervention. But before describing the transformation, the paper defines the concept of developmental state and competition state. Then it shows the way and reason of the transformation of role of state by using IPE and CPE framework and applies into three countries (Singapore, Korea and Japan). Finally, it concludes that the competition state is the best concept to describe current situation in East Asia but it still has some limitations. Apart from United States and Europe, the East Asian States show the successful story of the economic development in the world in the post-war period. Between 1965 and 90, the average growth of GNP per Capita in these states was about 5 percent which was higher than that amount in the OECD", "label": 0 }, { "main_document": "schemes, UK national voluntary labelling schemes might still apply. [3] These schemes may be highly appreciated by consumers, and thus, become unavoidable for marketing purposes particularly for food products. 
The labelling guidelines are very comprehensive and must be adhered to very strictly in the UK. In particular, food packaging needs to carry information such as nutritional value, salt content and portion sizes. For example, Pataks, a leading importer of Indian foods and spices in the UK, has planned changes to its product labelling in order to serve UK customers better. All Pataks-branded products will carry 'Big 8' nutrition information and will include a highlighted panel communicating the fat, calories and salt per serving by 2006. [4] An exporter has to comply with these requirements and often does not have adequate resources to test products in order to include the nutritional information, and so may lose the order. This difficulty can be resolved either by taking resource and technology help from the UK importer or by investing in resources. It also involves using different packaging for products made for the UK market, which again adds to production costs. As in most developed nations, the consumer's expectation of product or service quality in the UK is fairly high and stringent. Consumers even put a lot of emphasis on the quality of the production process, so exporters to the UK have to adopt and follow high quality standards. Tata Auto Plastics Systems Ltd., an injection moulding company in Pune, India, used to supply air vents for M G Rover cars through Intier Automotive (UK). A very high quality standard was expected from the Indian supplier. When the final samples for product approval were submitted, there was a quality issue of noise during fin movement. The noise was such that it could only be heard if the ear was placed very close to the vent. The project was delayed for four months due to this problem. This would never have been an issue for an Indian customer. Even for parts segregation, a parts-identifying machine was used for what could otherwise have been done manually. 
Since, quality requirement is customer given, customer's support in implementing product and process quality standards will be helpful. Also individual agencies offering technical support can also be used. [A] An exporting company that sells products with its own brand names runs an additional risk of managing its promotional activities in the foreign market. Different countries have different advertising rules. EU regulations on advertising are stringent compared to others like US. Exporter needs to be wary of using the same promotional strategy in the UK as in their home country. \"The Advertising Standards Authority oversees the practices of the advertising industry and enforces the provisions of the British Code of Advertising Practice (CAP).\" [3] \"Britain's strict rules banning product placement would not allow Heinz to use its name during the show \"Dinner Doctors\" on Channel Five even though Heinz was the sponsor of the programme and its products were used in the recipes. Heinz has to be content with", "label": 1 }, { "main_document": "points that policy capacity recognizes that 'the state is not an 'entity', but a complex and constituted set of relationships between frameworks of political authority and the international political economy, domestic social forces, and the broader ideational notions of authority or stateness.\" Therefore, Thailand and Hong Kong's exceptional example can be explained by its ignorance of other factor such as international factor. Kanishka Jayasuriya, 'Beyond Institutional Fetishism: From the Developmental to the Regulatory State', So far, the developmental state concept can explain most of the East Asia States in its early development (between 1950s and 1980s). But it ignores the international dimension. It has the problem of CPE concept which overemphasizes the domestic politics. Starting from 1980s, the international factor is becoming more important. 
Political globalization has transformed the developmental state into the competition state. The competition state concept captures both the international and the domestic effects on the state, and is therefore more applicable now. The next section accordingly describes the transformation from developmental state to competition state empirically. Since East Asia is part of the world, the concept of the competition state should be applicable there too. When Cerny discussed the competition state in the case of East Asia in 1990, he found a certain amount of liberalization and revaluation of currencies in the region, but he did not think that international forces or globalization could yet push the East Asian states into state-shrinking projects such as privatization and deregulation. He therefore concluded that the competition state was not applicable in this region and that the Japanese model (the developmental state) was still alive and well. In the 1980s, the Singaporean government did not think that heavy state intervention could make it competitive; the only way was to reduce state intervention by carrying out privatization, liberalization and deregulation in 1986. Cerny's misjudgement may have been caused by the transformation being unobvious at that time. But after the Asian Crisis, several states (such as Korea and Malaysia) recognised the problem of excessive intervention and carried out several kinds of restructuring (such as deregulation), which made for a significant transformation. Many scholars (such as Hall, Jayasuriya, Pang, and Shaw) claim the end of state-led development after these events. Cerny, 1990, pp.229 See Rodney Bruce Hall, 'The Discursive Demolition of the Asian Development Model', Before the 1980s, the Singapore government used Keynesian economic (or fiscal) policy and macro-economic intervention to maintain economic stability. It interfered in the wage levels of the private sector. 
In order to deal with the inflation caused by oil crisis in 1979 and promote the labor into higher skill level, the Singapore National Wage Council doubled the Singapore's real wages between 1978 and 1980.But the large-scale increase of wage eroded the Singapore's international competitiveness. As the wages increased, the cost of Singaporean goods became less attractive than the other competitive East Asian States' goods especially for the Chinese goods. Therefore, Singapore faced the problem of recession in the first half of 1980s. Apart from the high wages caused by over-regulation, other heavy state interventions such as high company tax rate, rising", "label": 0 }, { "main_document": "He had a 3 year history of a mole on the right shoulder blade which had been progressively increasing in size with intermittent, spontaneous bleeding. Mole on right shoulder - increase in size, spontaneous bleeding. He was unaware of having had a mole in the same position prior to this time but could remember being severely sunburnt over \"the back and shoulders as a child\". Over the 3 years prior to presentation on the In addition, the mole was spontaneously bleeding with frequency of 4/5 times a week. This again had worsened in the year prior to presentation. Bleeding didn't tend to be of significant magnitude but did occur spontaneously and not necessarily in response to contact from clothes for example. The mole had darkened significantly in colour over the same period. His partner has told him, \"this mole is far far darker than the others\". The shape had also altered from being \"perfectly round to more oval\". He had experienced no other forms of discharge or any associated pain or inflammation. There was no associated itch or altered sensation. His mole really caused him little concern, it was his wife who urged him to consult his GP for advice. 
Of dermatological relevance, he has no history of eczema, psoriasis, hayfever, urticaria, asthma, dry skin, varicose veins or leg ulcers. He did appear to have excessive distribution of moles and had never had a skin cancer before. No history of diabetes, epilepsy, hypertension, jaundice, strokes, heart attacks, rheumatoid or osteoarthritis, cervical arthropathy, obstructive sleep apnoea, acromegaly or thyroid disease. No significant findings to note. He takes no prescribed or alternative medication. Uses paracetamol for occasional mild headaches. no drug or other allergies known. No known significant family history. No history of eczema, psoriasis, asthma or skin cancers. He is fully independent and describes his quality of life as \"good\". He has no children and works full time in an administration position. He does not spend excessive amounts of time outdoors or sunbathing in his spare time and does not have any pets currently. He is smoker with a habit of \"up to 10\" cigarettes a day. He drinks alcohol socially with maximum weekly consumption of approximately 16 units. Malignant melanoma of either of the following two types. Nodular malignant melanoma - most aggressive type. Presents as a rapidly growing pigmented nodule which bleeds or ulcerates. Rarely are non-pigmented/amelanotic and can mimic pyogenic granuloma. Superficial spreading malignant melanoma - a flat, irregularly pigmented lesion, large in size. Grows laterally before vertical invasion develops. Benign naevus The following clinical criteria are an aid in distinguishing between the above differential diagnoses. The Glasgow 7-point checklist To support the most likely presenting diagnosis of malignant melanoma Of the minor criteria, his mole measures more than 6mm and it bleeds spontaneously on regular occasions. History of sun exposure in childhood and intermittent exposure in adulthood are particularly important aetiological factors in the development of malignant melanoma. 
Other predisposing factors include atypical mole syndrome, giant congenital melanocytic naevi, lentigo maligna and positive family history of malignant melanoma. Malignant", "label": 1 }, { "main_document": "International trade involves various complex regulations, lack of which could lead to chaos and most probably unfair practices. The reality is quite abundant with situations where aspiration to achieve commercial success could easily result in other people (i.e. consumers) being worse-off. It could be illustrated for example by introduction of a new, potentially unhealthy (but also potentially profitable) product on the market, which is undoubtedly a circumstance that should be avoided in the name of public interest. It is considered that, especially with regard to safety in all its aspects, different measures should be taken to assure its appropriate standards, however it is arguable whether the same level of safety is satisfactory in countries across the globe, and indeed some of them seem to be more demanding in this respect than others. Sometimes measures they apply seem to be sound and necessary precautions against health hazards, but there are cases where their underlying motives appear to be rather disguised protectionist measures. The aim of this essay is to demonstrate the idea and operation of the precautionary principle in the framework of the SPS Agreement. It is necessary to underline that the principle may be found in other legal documents some of which will only be mentioned shortly in this paper. Food and agricultural products may constitute an important source of biological hazards to environment and humans. It is more significant in the light of expanding world economy, liberalisation of food trade and quick progress in food science and technology, as products are transported internationally and thus there is increased risk of spreading unwanted substances. 
It is generally accepted that importing country may impose regulatory requirements, which must be met by the exporting partner aiming at accessing a given market ( Precautionary measures used as an element of trade policy, may be imposed in order to ban or reduce inflow of goods that raise concerns in terms of protecting the public health and environment. It is hardly surprising that precautionary principle as such gives rise to controversy, since it involves discussion between groups representing conflicting interests - on one side profit-oriented companies and on the other - public good (Mbengue, Thomas, 2005). The precautionary principle is quite a problematic idea for many reasons, first being lack of generally accepted and authoritative definition, however some references in the WTO agreements as well as reports of the WTO Dispute Settlement Body (consisting of Dispute Settlement Panel and Appellate Body) make it possible to analyse its nature (Majone, 2002) It should be noted though that the very concept was developed outside the framework of the World Trade Organisation, in the German socio-legal tradition. Translated from German Decision making process should preferably be made on basis of verified, objective information but it seems to be quite rare as science often fails to provide unambiguous proofs. Put in a simple way, the precautionary principle offers some guidelines in situations where solvable problems have been already tackled. It is worth pointing out that at first it has been applied mainly in the field of environment which Hathcock (2000) expresses", "label": 0 }, { "main_document": "one billion. However, what should be noted is, as An Chen describes, the state itself lacks adequate financial means to establish a nationwide system of social welfare benefits for the poorer sections\" Lardy, N. R. (2002) Lardy, N. R. (2002) Chen, A. (2003) \"Rising Class Politics and its Impact on China's Path to Democracy\", 2, p. 
151 Secondly, China's Gini coefficient has jumped rapidly during the last twenty years and reached 43.4 in 2003 Inequality is stark in China with the poorest 20% taking up just 6% of all income. Inequality is an important source for instability, most significantly when people are aware of it. There is evidence to suggest that this consciousness of comparative inequality can only have increased in recent years. Growth and wealth is centred along the east coast of China, particularly among the four 'Special Economic Zones'. However, there have been rapid increases in internal migration from rural areas to cities due to the perceived prosperity of workers there. This migration has ensured a constant supply of willing workers, particularly for foreign-invested enterprises. However, it has also surely meant a spread of knowledge and consciousness of inequality within China. Furthermore, the beneficiaries of economic growth, unlike during the Maoist years, are now free to indulge their wealth in material goods, previously unavailable to them. This disparity in wealth is evident to any individual in China where \"animosity towards wide socio-economic disparities is still deeply rooted in the psyche of most Chinese citizens, particularly the poor\" Chen, A. (2003) \"Rising Class Politics and its Impact on China's Path to Democracy\", 2, p. 148 Lastly, following the decline of ideology as a legitimating force for the CCP, the importance of providing the people with what they want should not be underestimated. Increasingly, this has meant the opportunity to purchase 'western' goods such as cars, fridges and air conditioners. However, as the production force for such products, this as Marx would point out, is almost impossible. 
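The Gini coefficient cited above (43.4 on a 0-100 scale) is a single formula over an income distribution. As a minimal sketch of how it is computed, using toy income vectors rather than any actual Chinese data:

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes, via the
    sorted-data identity G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n,
    where x is sorted ascending and i runs from 1 to n."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))    # one person has everything -> 0.75
```

On the 0-100 convention used in the text, a computed value of 0.434 would be reported as 43.4.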
Clearly, many Chinese people have benefited from consumerism; however, it will be many years before the country will have a sizable wealthy middle class The importance of workers to the Chinese government and indeed their own perceived importance should not be underestimated. Arguably, the decision to take military action during the Tiananmen Square demonstrations of May 1989, was triggered by the addition of workers to what began as a student protest. Gilpin, R. (2000) To conclude, Segal's conclusion that China is not a terribly important country has been contended not through a discussion of Segal's own terms of analysis, but through an analysis which stems from an alternative overall approach. It has been argued that by placing China within a capitalist world order, one can appreciate its significance to both the lifestyle of millions of people around the world and also to the continuance of global capitalism in its current form. Key to this argument, is an appreciation of the ever-changing and non-static nature of global politics, as they respond to the constant need to reconcile the inherent contradictions of capitalism. In the final section", "label": 1 }, { "main_document": "help us to understand , how a formal organisation evolves 'naturally' as a response to forces unrelated, or even opposed, to the formal rules and priorities of an administrative structure' (Hobbs 1989 46). Thus a good grounding for the establishment of the three relationships is used by Hobbs, however the detail given and space within the book given to this, is arguably detrimental to the exploration of the three relationships in further depth. The relationship between the working class in the East End and their development of entrepreneurial skills is substantially argued and sustained by Hobbs. The author closely analyses this relationship and how it comes to characterise the East End and an area functioning in a unique way. 
Hobbs notes the entrepreneurial skill that emerged as an inherent part of East End life and that was 'created by the area's relationship with the City and by unique economic and employment structures, the origins of which are located around the banks of the Thames' (Hobbs 1989 115). The book provides a detailed and thorough analysis of the lifestyle and culture of the East End, and Hobbs establishes that the potency of East End culture not only led to the uniqueness of this London area but also to entrepreneurial skills becoming inherent in the working class in its urban milieu. Hobbs's consideration of the youth of the area aims to substantiate this relationship through the idea of the entrepreneurial culture being transmitted from 'generation to generation' (Hobbs 1989 117). The dedication of a chapter of this book to the detailing of the culture and subcultures of the youth arguably comes at the cost of further exploration of the relationship between entrepreneurial skill and the East End. Arguably this is due to his East End background. Thus there is seemingly an attempt by the author to firmly establish the zeal of the East End and the uniqueness of the area's culture. Hobbs's concentration on subculture and youth illustrates this, as he duly notes in his first chapter his scholarly background in this area. Conversely, Hobbs's systematic breakdown of the types of jobs in the East End does add an additional dimension to the relationship between the working class and entrepreneurial skill, as he comments on the varying success and practice of this in the everyday lives of East Enders and on how its application has emerged because 'East Enders exist on the brink of the City of London's legitimate commercial enterprise and legality' (Hobbs 1989 140). Thus the author successfully uses the recognisable characteristics of the East End to establish the emergence of widespread entrepreneurialism. 
The relationship between entrepreneurial skill and the work of detectives in the book is established in the final two chapters. Firstly, Hobbs is adamant in asserting that the entrepreneurialism of the CID division is what characterises and separates it from the uniform branch of the Metropolitan Police. Hobbs firmly establishes this during his chronology of the police force at the beginning of the book and reasserts this later towards the end of the", "label": 1 }, { "main_document": "The question of what determines long-run economic growth has divided macroeconomists over the last twenty years. According to Solow's neoclassical view, growth can only occur with the change of exogenous factors. Endogenous growth theorists suggest, however, that economic growth stems from a change from within the structures of the economy. At base, both models share a set of common assumptions. Both production functions are of the form Y=F(A,K,L), i.e. output is determined by the amount of physical capital K (e.g. machinery or software), labour L (the effective labour force) and some parameter A (total factor productivity), which comprises all exogenous factors influencing production, such as technological capabilities, market sector compositions or political systems. An augmented production function would include human capital (H) as well. By firstly describing the Solow model and secondly analysing the most prominent of the endogenous growth models, the AK model, I will finally establish which model is the closest to reality by focusing my attention on their differences. a. The Solow model assumes decreasing marginal productivity of inputs. An individually added unit of K or L will thus have a smaller effect on output than a previous one.
Illustrating this effect are the Inada conditions, showing that as K and L tend to 0 the slope of the production function will be steep (f'(k) tends to infinity as k tends to 0, and to 0 as k tends to infinity). In line with these assumptions, the Cobb-Douglas-type production function will be the most appropriate and easy to use, yielding Y = AK^a L^(1-a) (Barro and Sala-i-Martin, pp. 26-29). This allows for the following diagrammatic representation: The Solow model suggests that every economy is in equilibrium once it reaches its steady state, characterised by a capital growth level of zero. In order to compare capital and income levels across countries, one has to take per capita values, k = K/L and y = Y/L. Factors determining k growth over time are: investment levels, determined by the savings rate s; a given level of material depreciation d; and new entrants into the workforce, or population growth n. As population grows, the given level of capital must be spread over more workers, which eventually leads to a decline in k. These three elements give equation (1): dk/dt = sf(k) - (n+d)k. To determine a condition for the steady-state k-level, we must rewrite equation (1) in the following way: sf(k*) = (n+d)k*. It shows that there exists a capital per capita level at which the savings, and thus investment, levels will exactly compensate for the fall in k due to depreciation and population growth. Graphically, the steady state is thus represented by the cross point between the depreciation line (n+d)k and the savings curve sf(k). Given that y = Ak^a, as Y/L = A(K/L)^a, this suggests that countries with higher saving rates will be richer, due to more accumulated capital, thus higher output and income. By shifting the savings curve upwards (to s'f(k)) in diagram 2, one can demonstrate that k* and y* will increase (to k'*) as investment levels rise, given all other factors stay the same. Population growth, on the other side, is negatively related to y*. The upward rotation around the origin of the depreciation line shows that the new k* and y* values are clearly lower than the initial steady-state values of y and k.
A higher fraction of savings must go", "label": 0 }, { "main_document": "employer as well as the de jure employer' (IER, 2004: 51). It is also a condition of immunity that before taking a strike or other industrial action a trade union must first obtain the support of its members through a properly conducted ballot, and must provide at least seven days' notice to an employer of official industrial action to be taken against him (DTI, 2005). The provisions on balloting in the 1980s were simple and left unions with a degree of autonomy. But the law soon changed without much evidence of rationality, prescribing a series of requirements which must be satisfied in order to qualify for immunity. The regulations include areas as follows: Independent scrutiny must be involved when more than 50 members are given entitlement to vote. A report on the conduct of the ballot must be written by the scrutineer, and be provided to any union member involved in the ballot as soon as reasonably practicable. A written notice must be given to any employer concerned no later than the seventh day before the intended opening day of the ballot, providing detailed information on the total number and the categories of employees affected and the workplaces influenced, together with an explanation of how these figures were arrived at. From this, the employer is able to 'warn his customers of the possibility of disruption so that they can make alternative arrangements or to take steps to ensure the health and safety of his employees or the public to safeguard equipment which might otherwise suffer damage from being shutdown or left without supervision' (TULRCA, cited in Simpson, 2001: 197). The explanation above makes sense to some extent, in that it provides some protection for the public. Nevertheless, from my point of view, there is still inconsistency in the requirement.
As Colling (2005) quoted the view of Collins, Ewing and McColgan (2002: 923): The ensuing action has to commence within four weeks of the ballot being held. A union cannot be immune from liability if it holds a properly conducted secret ballot after previously calling for industrial action without one. There are many other statutory provisions regarding the entitlement to vote, the voting procedures, the voting results, and the announcement of results, not all of which can be discussed in this paper. These provisions are thought to involve unions in extensive litigation, as most of them are 'initiated by employers and essentially concerned with failure to follow procedural detail, rather than the substantive requirements of democratic decision-making' (McIlroy, 1999: 525). Any failure (even a small accidental failure which is unlikely to affect the result) to satisfy the statutory requirements relating to a ballot or to giving employers notice of industrial action will give grounds for proceedings against a union by an employer. As McIlroy (1999: 525) expressed it, 'In practice, the devil lay in the detail'. Therefore, in the case of Gate Gourmet, it is no wonder that the industrial action was defined as 'unballoted' and 'illegal'. Gate Gourmet informed TGWU and all the staff of bringing in seasonal workers on 9 How could it be possible for TGWU", "label": 0 }, { "main_document": "Borrell, 2001). In connection with the structure of taxation, the labour legislation generated a powerful incentive to break up sugar estates into relatively inefficient small holdings (Gulhati and Nallari, 1990). As a consequence, the World Trade Organisation recommended in 2001 that the small planters be grouped so as to maintain output and profitability and improve productivity (WTO, 2001). The labour laws also constituted a financial burden on large planters, whose profitability was considerably reduced as wages and salaries constituted over 50 percent of operating costs.
Moreover, as workers were compensated for any increase in prices occurring in the economy, wages and salaries were automatically indexed to the consumer price index, which led to substantial inflation (YeungLamKo, 1998). Finally, although legislation aimed at protecting existing sugar workers on the estates, it created an incentive for management to mechanize rather than hire more labour (Gulhati and Nallari, 1990). The economics of the output price policies associated with the diversification program are difficult to assess, as sugar estate owners were obliged by law to retain workers through the slack season, which they utilised to grow the food crops during this period (Gulhati and Nallari, 1990). The estates have indeed made some efforts to expand interline cropping and to rent land to small farmers, but there is only limited incentive to diversify as long as sugar production is profitable to both large and small producers (Bowman, 1991). Some agricultural diversification has taken place with the production of potatoes, peanuts, and peas in the "inter-lines" of the sugar cane fields for domestic consumption. However, the fact that these crops do not earn export income (YeungLamKo, 1998) means that the diversification strategy away from sugar and towards import-substituting crops has contributed to a decline in the importance of agriculture in the economy. Despite that, sugar remains a leading industry (WTO, 2001). The government has been successful in improving social indicators and income distribution by means of the sugar tax (Mistry, 1999). However, its rate structure discriminated against the relatively efficient large estates and, in conjunction with labour legislation, provided an incentive to split them up (Gulhati and Nallari, 1990). The tax could be justified in the light of Mauritius's limited quota in the EEC preferential market and the low demand for sugar in the world free market.
Providing strong incentives to expand sugar production or to allow owners of sugar estates to retain the rent created by the EEC preferential price would not have been sensible (Gulhati and Nallari, 1990). Moreover, the sugar tax helped redistribute incomes and supported smallholder farmers. It led to an improvement of the lives of workers without increasing wages so much that potential investors would turn away (Meisenhelder, 1997). As regards the consumer subsidies on food crops and water and electricity, they were not confined to low-income groups, and there were considerable leakages to middle- and upper-income groups. Apparently, subsidized foods were also used as animal feed. In spite of these inefficiencies, alimentary status and welfare among low-income groups have increased substantially (Gulhati and Nallari, 1990). As input subsidies", "label": 0 }, { "main_document": "membrane has a different electrical potential when compared to the outer part, or in other words there is a potential difference between the two sides of the cell membrane. In the picture below, one end of the voltmeter is connected to the inner part of the cell while the other end is placed outside, and it shows a negative potential of -70 mV. Hence the inner part of the neuron is slightly negative with respect to the outside, and this potential difference is called the resting potential. The resting potential of a neuron is normally of the order of -70 mV to -65 mV. This potential difference is produced by the positive ions of sodium and potassium. There are carrier mechanisms that constantly transport the sodium ions from the interior to the exterior part of the cell membrane, while the potassium ions are transported from the exterior to the interior part of the cell membrane. This transport mechanism is called the sodium-potassium pump. There are sodium and potassium gates on the surface of the cell membrane that allow the diffusion of the sodium and potassium ions across them respectively.
During resting potential, all the sodium gates are closed while only a very few potassium gates remain open. Hence there are a lot of potassium ions inside the cell membrane, while there are a lot of sodium ions along with a few potassium ions outside the cell membrane. During an action potential caused by a stimulus, all the sodium gates open up and the sodium ions surrounding the region diffuse into the membrane, depolarizing the cell membrane: the interior becomes positively charged, or acquires a positive potential. This action potential causes the potential difference of the cell membrane to rise to about +30 mV. Then the potassium channels on the cell membrane open up, causing the potassium ions to flow out and restore the positive charge outside the membrane. This is called repolarisation, and this process restores the potential difference of the neuron back to its resting potential. The picture above shows the action potential. The potential rises up to a positive value as the sodium gates open up and is gradually restored to its resting potential as the potassium gates open up. There is a minimum stimulus required to open the sodium channels completely, and this is called the threshold stimulus. Neurons are also governed by the all-or-none law, by which neurons either achieve a complete action potential or do not achieve any potential at all. There can be nothing like a weak action potential. The spinal column is the main structural support of the entire human body and also houses the spinal cord, the pathway for nerve signals. It comprises 33 bones called vertebrae, 31 pairs of nerves, 40 muscles and numerous other connecting fibres called tendons and ligaments. There are numerous fibrous cartilages between the vertebrae called discs.
These provide cushioning for the bones when the", "label": 0 }, { "main_document": "parties' conduct in general was proof of this. The case, in effect, is similar to the cases decided previously on the issue of constructive trusts, and the deputy judge for the most part relied on the Court of Appeal decision in The 'fairness principle' was applied very aptly in deciding the case. The deputy judge complemented the legal doctrines of constructive trusts and tenancies in common with the equitable principle of 'Fairness', which in my view is very important. On a consideration of the facts, this case is different, since almost all the cases that have come before the court till now have been cases of the matrimonial home where the cohabitees claim an interest on the breakdown of the relationship. In that sense the case may set a precedent if the House of Lords decides a similar issue in the future. The deputy judge applied the same law as applied in the other cases to determining the interest although the legal title is owned jointly, saying there is no real difference in the underlying legal analysis due to this. On the whole the case has brought about a desired result and has been fair to the claimant, who would otherwise have been the weaker party.", "label": 1 }, { "main_document": "Strategic management is the key to success and to standing out from the crowd in a competitive business environment. It is therefore necessary for a business to implement appropriate long-term strategic plans whilst having the flexibility to tackle developing changes. The discussion which follows will address the execution of long- and short-term strategic planning. The importance of the two will be critically analyzed. Long-term strategic planning generally means that an idea is developed in a structured, formalized process and that the organization will use it in the coming 5 years.
In the process of executing the planning, an organization should first familiarize itself with the internal situation of the company, such as its structure, systems and resources (e.g. people and money). Managers may find that information with the help of a SWOT analysis. This analysis studies the strengths and weaknesses of the company as well as the opportunities and threats it may have. Besides the internal audit, there is also an external audit. A PEST analysis could be used to understand the general environment in the areas of politics, economy, socio-culture and technology. Predictions help the organization to have a better view of what is likely to happen in the future or what it should pay attention to. Therefore, preparing the long-term planning involves many predictions. If a company is unable to predict, it is unable to plan. "Predict and prepare" (Ackoff, 1983:59) became a motto for a business. With reference to the data analyzed, an organization will set the long-term strategic planning which aligns with business objectives and is in favor of the benefits of the company as a whole. As suggested by Steiner, "all strategies must be broken down into sub strategies for successful implementation" (Steiner, 1979:177). Evaluating the strategic planning is crucial because managers can make adjustments before it is implemented. If everything is fine, the strategic planning can be launched. While setting the long-term strategic planning, the firm will keep updating its information to be aware of sudden changes in the market. To trace fashion and the latest tastes of customers, they may hold target group discussions, do surveys on the street, and conduct telephone interviews. Participating in different kinds of social events or exhibitions lets the organization keep up with the movement of the industry. There are also other changes, for example government policy, the economy, politics or other factors which affect them.
They may depend on the media globally, for example the Financial Times (FT), the British Broadcasting Corporation (BBC) or others, and on critics or scholars who are familiar with the particular area. Once changes emerge, managers will reconsider and set their short-term direction immediately. Wit used Fig 1 to explain that strategic renewal, which constantly enacts strategic changes to remain in harmony with external conditions, can transform the firm so that it stays up to date and competitive (Wit, 2005:74). To make the firm competitive, it is essential for a business to implement business strategies which demand both discipline in the execution of long-term strategic plans and flexibility to address emergent changes. The first reason behind this statement is the long-term", "label": 0 }, { "main_document": "chemists, with more article citations than any other journal in 2004 [11]. JACS communications are designed to be concise reports of work that the authors feel should be delivered with some immediacy to a general audience. This paper is easy and enjoyable to read and conveys the necessary information clearly and concisely in the style expected in JACS. It deals with a fundamental area of chemistry. We are presented with a report that gives a logical argument for the conclusions made based on the evidence shown. However, the evidence provided alone is not conclusive. The supporting information gives details of synthesis and the procedures employed to test the responses of the hydrogels. Unfortunately, no information or data is given regarding why the authors decided on the proposed mechanism of disruption of the hydrogels by Van. Figure 1 in the paper gives a CD and emission spectrum of the hydrogels. Rather than conveying much scientific meaning, these simply illustrate the text. It would be appropriate for data such as these to be available as supporting information for closer inspection.
We are told that further study is underway on the relationship between monomer structure and gelation properties. The paper suggests that the hydrogels are formed by a combination of hydrophobic and hydrogen-bonding interactions, but no convincing evidence is given for either of these interactions. NMR nOe data on monomer interaction would support the claims made more solidly. In summary, this work represents a significant step forward in this field toward the creation of a novel drug delivery system. This paper's publication in JACS is well justified, although more evidence to support the conclusions made would be welcomed.", "label": 1 }, { "main_document": "in his assessment of French initiatives with regard to European consultative bodies. Rehfeldt highlights that 'it is the declared aim of managements to use European (...) dialogue arrangements to promote a common awareness of the problems and a European 'corporate identity' amongst employees' (1998:219). It can therefore be concluded that another benefit that EWCs could and actually do present to managers, as judged by survey information, is the creation of a common European corporate identity among employees and the promotion of a shared understanding of common problems, which, in turn, facilitates corporate governance in solving these problems. Although trade unions might have justifiably expected a greater role in shaping managerial decisions through EWCs, this has not actually materialized in practice. As Wills notes, 'information sharing has been afforded a higher managerial priority than any real consultation and the EWC agenda appears to consist of managers giving reports to the employee representatives that assemble' (1999:29, see also Wills 2000). Lecher and R Although the authors go on to acknowledge the value of the requirement of information sharing for employees' ability to influence company policy (see also Lecher et al.
1999), the consultative element of EWCs that could actually affect management's decisions with regard to contents and, to a lesser extent, implementation seems to be limited (see the case study research done by Marginson et al (2004)). This idea is supported by the EIRO (2004) report, which concludes that 'consultation, where it takes place, seems largely to occur only once decisions have been taken by management, focusing on the implementation of those decisions rather than their framing or main parameters' (2004:14). It is clear that EWCs still predominantly fulfill the information part of their functions, and the very restricted consultation element allows managers to retain their power in decision-making. This would allow employers to benefit from information exchange and overlook the consultative part to preserve their decision-making prerogative - an inherent flexibility in EWCs' nature which serves employers' interests in yet another way. Scholars appear to be divided in their views on whether EWCs actually support management in restructuring processes. Researchers like Weber et al. (2000) and Nakano (1999) provide evidence of managers seeing little or no impact of EWCs in aiding organizational change. However, Lamers suggests that consultative bodies 'can contribute to the transparency and speed of the decision-making process, and its implementation' (1998:180). Even more directly, Hanck In addition, a survey executed by the American consultancy Organization Resources Counselors (ORC 2003) found that four-fifths of the 24 multinationals it had researched consulted their EWCs over restructuring situations, as they found them useful in improving internal management co-ordination.
In summary, there are indications that managers regard EWCs as beneficial in supporting restructuring processes, and the scepticism in some cases seems to be the result of companies not having experienced such proceedings yet (Nakano 1999) or of employers presuming more remote possible effects (Weber et al 2000). Last but not least, this text will address the advantages of earlier agreements concluded under the Article 13 clause. Managers see these as more advantageous with regard to the establishment and operation of EWCs for", "label": 0 }, { "main_document": "I would advise Albert that it may be difficult to recover all of his losses. This case is an example of economic loss arising from 'negligent mis-statement' and, as such, it must be proven that a duty of care exists between the two parties. The duty is established by the relationship between the claimant and the defendant, via a three-stage duty test. The 'proximity test' requires the parties to have been brought adequately close to one another, i.e. 'the statement must be made directly to the claimant by the defendant' (1). They were friends from university, and were having lunch, thus establishing proximity. There are grounds to disagree with this, as their meeting was coincidental and they hadn't seen each other for a long time. The next is 'reasonable foreseeability'. "Defendants will not be liable unless they should reasonably have foreseen that the claimant would reasonably rely on the statement and voluntarily assumed responsibility for it."(2) In the case of ' Amongst these were:- 1) The defendant's ability to give reliable advice - Barry was a Director of the firm and we can therefore assume that he knows about the company. Even so, given ' Given his position in the company, this is likely to be the case. 2) The circumstances in which the advice was given - ' It said that it had to be 'considered advice which someone would act upon' and not 'off the cuff'.
We know that it was an off-the-cuff remark made on a social occasion. In spite of this, in ' In ' Hence "the defendant must reasonably have anticipated that the advice would be acted upon without the claimant seeking further clarification or independent advice." I feel this may prove difficult to argue, given the size of the investment and the possibility that Barry may have ulterior motives. Even so, as a company director we would expect his advice to be relied upon, especially given the certainty of his words. There are possibilities that the defendant may not be liable in this situation. However, the law suggests that it is likely Barry will be liable in tort. I would therefore tell Albert that he would be well advised to attempt to recover damages from Barry, but to be prepared that he may not recover the full amount. Even if a duty of care could be established, it may also be possible for the defendant to prove that said duty wasn't breached. We are told only that the advice was given and that 'some months later' the firm went into liquidation. We would have to find out whether or not the advice given was accurate at the time and whether Albert's losses were unforeseeable. If this was the case, then there was no breach of duty. The second case is another example of economic loss arising from 'negligent mis-statement'. This case is slightly different to the previous one in that Mrs Smith sought the advice and Jim gave it to her in a professional capacity. This fulfils the second of the criteria set out above ( We can also", "label": 1 }, { "main_document": "urban areas. Deliver affordable housing alongside market houses. This is mainly used in urban areas, but could be used more in rural development. Affordable rural housing needs to be provided to maintain and enhance sustainable rural communities. Without affordable housing, many rural areas would see the trend for aging populations to increase, and this would be contradictory to developing sustainable rural communities.
Transport links are an important aspect of sustainable rural communities. In rural settlements, only 51% of people live within walking distance of a bus stop that has regular services (Bradford et al 2006). This can affect the quality of life of individuals, as it will limit access to certain services and also jobs. However, the reverse of this situation is that people are using cars much more and therefore are using larger services in other areas, such as out-of-town shopping centres, instead of supporting smaller local shops. This will need to be addressed in developing sustainable rural communities, as quality of life is an important aspect of sustainable development for the future of rural areas. The main trend in the provision of services at a local level has been a 1 or 2% decline in services such as post offices, pubs and village stores. The main reason for this has been the greater mobility of customers (as mentioned above). This is leading to social exclusion in some areas, due to a number of people not having the mobility to access other services and also because it is leading to a loss of community meeting places. This factor can also be linked to the issue of climate change, because with more people opting to use private transport to access other services, it is leading to an increase in carbon emissions. Conserving the countryside is a significant issue when planning for sustainable development within rural areas. It is important to remember that the environment is a major aspect in attracting visitors to rural areas, and therefore in supporting the rural economy (Bradford et al 2006). Planning Policy Statement 1 (PPS1) has a section dedicated to 'Protection and Enhancement of the Environment'. This includes factors such as valuing landscapes and protecting wildlife habitats and natural resources. This indicates that the government understands the significance of the countryside in developing sustainable rural communities.
PPS1 highlights that developments should be sustainable, durable and adaptable to the specific area and should also make use of local resources. This would be supportive of a sustainable rural community, as it focuses on the local economy by trying to use local resources and it considers the environment by looking at sustainability and adaptability to the local environment. Climate change could have serious implications for conserving the countryside. Unpredictable weather patterns are likely to lead to changes in the landscape, which in turn is likely to affect the biodiversity targets that the government has set (Bradford et al 2006). This therefore indicates that climate change is likely to be a major factor affecting sustainable rural communities. The different aspects of sustainability that have been", "label": 1 }, { "main_document": "is evaluated in each generation, and based on fitness the individuals who have higher relative fitness are more likely to be selected as the parents of the current generation to produce "kids". Using those selected parents, perform crossover, which is used to exchange information between the parents to form offspring. By recombining parts of good individuals, this process is likely to create even better individuals. With some low mutation probability, the offspring which have just been produced are mutated through specific approaches to replace chromosomes in the new generation. The purpose of mutation is to maintain diversity within the population and inhibit premature convergence. Update the current population with the new generation. Repeat from step 2, with the new generation becoming the current one, to start the next iteration of the algorithm.
GA offers significant benefits over more typical search and optimization techniques, and variations on the main structure of GA have been widely applied in diverse scientific and engineering topics such as optimization, automatic programming, machine learning, economics, immune systems, ecology, population genetics, social systems and so on. Because of the success of GA in these areas, interest in GA has risen sharply in recent years among researchers from many areas. Applying the same Darwinian concept of survival of the fittest as GA, GP comes from the original work on genetic algorithms. "GP is an automated methodology inspired by biological evolution to find computer programs that best perform a user-defined task. It is therefore a particular machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task." [6] Although much of the theory associated with genetic operators is relied on in GP as well, the hierarchical expression manipulated in GP is far different from the coded strings of GA. The hierarchical structure is a tree rather than a flat, one-dimensional string. In contrast to GP, EP keeps the structure of the program fixed (the structure is the only fixed thing) and evolves its numerical parameters. Traditionally, representation and operators were specialized for the application area, which was evolving finite state automata for machine learning tasks. Nowadays, EP is often used as an optimizer with any representation: for example, real-valued vectors are used in the population to solve real-valued optimization problems, ordered lists are used for traveling salesman problems, and graphs are oriented to applications with finite state machines.
As set out in the week 6 lecture notes, basic EP consists of three steps: (1) choose an initial population of trial solutions at random; (2) mutate each solution to produce offspring; (3) select a number of solutions for the next generation based on fitness. ES, by comparison, is typically applied to real-valued parameter optimization problems. Its main characteristic feature is self-adaptive mutation: each individual carries its own standard deviation for a Gaussian perturbation, so the mutation step adapts as the search proceeds. Nevertheless, during recent years, interaction and communication among the various evolutionary computation methods
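The self-adaptive Gaussian mutation that characterises ES can be sketched as follows. This is a minimal illustration, not the lecture's reference implementation; the log-normal update of the step size and the learning rate tau = 1/sqrt(n) are common textbook heuristics assumed here.

```python
import math
import random

def self_adaptive_mutate(x, sigma, tau=None):
    """ES-style self-adaptive mutation: each individual carries its own
    step size sigma, which is mutated before the solution vector is."""
    n = len(x)
    if tau is None:
        tau = 1.0 / math.sqrt(n)  # common heuristic learning rate (assumption)
    # Mutate the strategy parameter first (log-normal update), then use
    # the new sigma as the standard deviation of a Gaussian perturbation
    # applied to every component of x.
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    new_x = [xi + new_sigma * random.gauss(0.0, 1.0) for xi in x]
    return new_x, new_sigma

random.seed(0)
child, step = self_adaptive_mutate([0.0, 0.0, 0.0], sigma=0.5)
print(len(child) == 3 and step > 0)  # prints True
```

Because the step size itself is inherited and mutated, individuals in promising regions tend to evolve smaller steps while poor regions keep exploring, which is the self-adaptation the text describes.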
Williams set out to show how our ideas about identity are confused. But as has been shown, with a minor adjustment to the second scenario, his thought experiment ends up supporting Locke's purely mental conception of identity. The answer to our original question depends on a definition of personhood. If we accept Locke's definition of personhood as a mental affair without reference to a physical body, even if we don't accept the vital role of memory, then swapping bodies is possible. When the subject of Williams' second thought experiment is tortured he will not be the same person as he was when first informed of the impending torture because the brain now contains a new personality, thoughts and memories. In my view it is our memories and personality that form our thoughts, and it is our thoughts that make up the 'self'. If individual identity exists at all, it resides in our thought and our reflections on these thoughts. While undoubtedly strongly influential, the body is not a necessary component of a person, and so transferring or swapping into a new body is possible.", "label": 1 }, { "main_document": "principle is pivotal to Oticon. For in Oticon, people are given freedom to control the project process, they are mostly judged by the output, and what else really matters is the project deadline. Act as a reward for effort or performance It's absolutely correct that people are rewarded mainly based on their outputs, for which contributed to organization's profit directly. Nevertheless, sometimes staffs efforts or performances should also be taken into account, especially to High-tech corporation like Oticon. Because generating a wonderful idea and making it come true might take them a rather long period and no one could guarantee every effort will lead to success. 
Keep the amount received and the amount the individual expects to receive in balance. From the organization's perspective, work (the amount the individual expects to receive) = benefit to the individual / cost to the individual. A meaningful reward system would ensure that both the organization and the individual receive more than they put in (WMG, OPP module), and if the distance between the actual situation and expectation is too large, employees will be de-motivated. An economical reward system should also offer the individual high-value rewards at a comparatively low cost to the organization. Recognize the relationship between satisfaction and performance: job satisfaction is based on the perception of extrinsic and intrinsic rewards, combined with equity theory; it also depends on the perceived value of those rewards (see Figure 1). This model illustrates that performance influences satisfaction, and not the other way round. Satisfaction relies on need fulfillment, and need fulfillment comes from the intrinsic and extrinsic rewards generated by job performance. Job satisfaction is just one feeling, or a set of attitudes, potentially linked with performance. Generally speaking, with appropriate rewards, good performance is most likely to lead to job satisfaction, though not inevitably (Buchanan, 2004). It has long been generally accepted that the psychological contract has two forms, relational and transactional. The former refers to an open-ended relationship based on trust and mutual respect: the employee gives loyalty and commitment to meet the employer's requirements and goals, and believes the employer will treat them fairly and consistently. In return, the organization potentially offers job security, promotion opportunities, training and development programs, supportive leadership and other flexibility depending on the employee's demands (Beardwell, 2004). It is now believed that the psychological contract can be utilized to underpin and replenish the reward system.
Oticon gives its high-level knowledge workers maximum freedom as a reward for finishing projects before the promised deadline. In response, employees are willing to try their best to contribute to corporate profit regardless of extra pay. Also, because there is nowhere to be promoted to in Oticon's flat organization, professional development cannot be rewarded with promotion, which is one of the most useful traditional reward methods. The corporation still operates with little turnover, however, because its staff trust that they will be cultivated into multi-functional professionals after a few years' work there. In the real world, almost every employee receives two types of rewards, intrinsic and extrinsic, so Oticon's new reward system should also include these two parts. The key question is, as one of the most useful motivation methods, how
In the streets of London, 'Female shoppers, like the glittering objects on display, became a central part of the urban spectacle' (Rappaport 1996:80), allowing them to become walking statements, sometimes of middle class democratising success at large, at other times drawing from an individualistic elitist identity (McCracken 1990). Advertising played no small role in this newly constructed female middle class identity. From an early point, advertisers came to accept that 'the hand that rocks the cradle is the hand that buys the boots' (Loeb 1994:8). Throughout the second half of the nineteenth century, techniques of persuasion became increasingly refined (McCracken 1990:29), shifting the portrayal of the household mistress into the shape of the 'perfect lady' through images in an often historical, near-mythological style (Loeb 1994). Such portrayals of rather powerful, controlling women attempted to equate the act of shopping with having 'the opportunity to', as a critic wrote disdainfully, '"luxuriate" in a deep and intense "sense of power"' (Rappaport 1996:76). Growing consumerism, besides reforming the gender ideal, was also championed as a new definition of class difference. Goods on sale in the department store increasingly became 'material symbols' that set prescriptions for how the middle class was to dress and what material conveniences they could possess, which, in turn, said something about 'how they should spend their leisure time' (McCracken 1990:27). Businesses and advertisers were eager to support this civilising, 'rational' view of middle class consumption. Their profits came mostly from the middle class, as a journal of the trade wrote: 'The buyers of the world are the great MIDDLE CLASS PEOPLE [sic]' (Loeb 1994:8). Thus, the Army and Navy Co-operative Society, a private department store, instructed its doorkeepers to keep servants and messengers out to avoid class mingling and provide safety for the female audience (Rappaport 1996).
On the other hand, products that served to define middle class status were sold as necessities through the creation of great fear. Parents read in advertisements that without Mellin's Food, their child '[would] not last long' (Loeb 1994:14), while "Raw Milk" was advertised as so dangerous that 'those who allow it to be served to their families take a great responsibility'
At the same time, salary and employment costs in the West Midlands are the lowest of any UK region. The key success factor of the location is convenience to major customers such as BMW, Land Rover, Toyota and Peugeot. Because of the JIT production process, the layout will be changed from process-based to cellular organization. The cellular process has more advantages than the old one, such as less work-in-process inventory, better human relations, improved operator expertise and faster production setup. In conclusion, the facility strategy should be an active part of the whole manufacturing strategy rather than a passive one. For example, the company should not wait until growth in demand forces it to add capacity, or postpone a big change of facilities until unprofitability is increasing. The company needs to consider facilities as a basic strategy for its long-term mission and objectives. Technology Technology in the manufacturing context always refers to process technology. Slack (1991) mentioned three dimensions of process technology in manufacturing: size of capacity, degree of automation and degree of integration. Size of capacity has been discussed in the capacity section. The degree of automation concerns the ratio of machines to people, and it affects the number of employees. The company intended to halve its shop-floor workforce, which means the degree of automation will increase through the introduction of more machines and robots. The two main benefits of automation are the reduction of direct labor cost and a more standardised manufacturing system, whose standard process flows make operation and output forecasting easier and more accurate. The other aspect is the degree of technical integration. High degree
In this paper the two current methods of disposing of high-level radioactive waste are discussed and their associated problems critically evaluated. ______________________________________________________________ With the current energy crisis and the imminent demise of fossil fuels, world governments are looking for reliable energy sources. One of the more favourable solutions they have chosen is nuclear power. The main disadvantage is that the unwanted by-product of the nuclear fission process is a large amount of hazardous radioactive waste. For each year that a reactor is in operation, a third of its core material - around 30 tons - has to be removed and replaced. During their time within the reactor these uranium fuel rods increase in radioactivity to the extent that when they are removed they contain hundreds of radioactive chemicals that generate large amounts of heat and radiation. This is regarded as high-level waste. The equipment and tools involved in manipulating the materials are categorised as low-level waste, given their relatively short contact time with the reactor. At the dawn of the nuclear age nobody foresaw any problems regarding the disposal of radioactive waste. Whilst low-level waste is easily dealt with, there are currently two ways of dealing with high-level waste, each of which has obvious disadvantages. 'One method preferred by Canada and the US is to bury the waste deep underground in a remote location'. This is proving increasingly difficult due to the lack of geographically suitable areas.
There are also political consequences of storing dangerous nuclear waste in populated areas that will remain a threat for as long as twenty-four thousand years. The alternative approach, practiced by European countries, is reprocessing. Of the three commercial reprocessing centres planned for construction in the United States during the 1960s, only one saw completion, and it was closed down in 1972 under a cloud of controversy concerning accidental contamination at the site. Although reprocessing was and still is a viable option, it merely separates the much sought-after plutonium from the remaining fission products, which still remain radioactive for over a millennium. Unfortunately the political situation throughout the latter twentieth century has forced laws to be passed to prevent governments from reprocessing radioactive waste. While this does remove the threat of countries obtaining nuclear missile capabilities, it also results in an increase in high-level waste being deposited at the reactor sites. The untreated waste has been allowed to accumulate to the extent that today 'approximately 7000 tons of spent fuel from commercial reactors is stored on racks submerged in cooling ponds or "swimming pools" at each reactor site [in America].' Before understanding the severity of the radiation emitted by radioactive waste, one must understand the units and scales used. Radiation dosage is measured in standard dose units; Table 1 shows how differing values affect the human body. The reason that nuclear power plants are so low on the scale is the heavy shielding around the reactor core, but once the waste leaves the core the radioactive
In reality, however, Roosevelt's policies were "fundamentally conservative." Through the "chaos of experimentation" Roosevelt was able to bring optimism back to the American people, but his policies really only "acted as a painkiller rather than a cure for the nation's economic ills" (Parker, Henry, January 2001; Hofstadter, Richard, "From Progressivism to the New Deal", in Rauch, Basil; McElvaine, Robert S). The "New Deal" is a term "taken from Roosevelt's speech whilst accepting the 1932 presidential nomination." Though Roosevelt has taken the credit for the policy, Tugwell, one of the architects of Roosevelt's policies of the 1930s, admitted that the President "extrapolated [the New Deal] from programs that Hoover started." Enacted during the first three months of 1933 (Roosevelt's "Hundred Days"), it established such agencies as the Civilian Conservation Corps and the National Recovery Administration. Later, in 1935, the Second New Deal established the National Labour Relations Board, the Works Progress Administration, and the social security system. Though these policies ensured that "big government," like big business, was to have an impact on the American future, they did not solve the inequalities in society or end the Depression (Encyclopaedia Britannica; Reed, Lawrence W). One of Roosevelt's primary objectives in the New Deal was to rebuild the economy and restore people's faith in it. His initial policy was to declare a national ten-day bank holiday, purportedly "in order to give inspectors time to review their solvency." When the banks reopened, the American public entrusted them with their money once more, rendering them solvent at no expense to the bankers or the government.
From 1933-1939, "GDP increased by 60%, the amount of consumer products bought increased by 40%, whilst private investment in industry increased by five times." Roosevelt's policy, therefore, did encourage an economic upturn. It did not restore the American economy to its pre-depression strength, however. Though this has been attributed to the failure of Keynesian-style economics, it is more likely that it was the half-hearted nature of and lack of commitment to his policies that produced the poor result. Keynes argued that intentionally unbalancing the budget would boost demand to the point where recovery would take place. Roosevelt, however, was reluctant to accept any increased deficit spending beyond what was needed to prevent mass suffering. Only when the Second World War increased production and forced an unbalanced budget on the scale that Keynes advocated did the depression end. Therefore, though Roosevelt did encourage an economic upturn, his policies were too conservative to enable a full-scale economic recovery (Schultz, Henry K, University of Wisconsin-Madison; Trueman, History Learning Site: us.history.wisc.edu/hist102/lectures/lecture19.html). Failure to ensure full economic recovery, along with badly planned New Deal policies, also meant that American businesses fared poorly. The Federal Securities Act, for example, required full disclosure of information on stocks being sold, which was "not pleasing for businesses." The National Industrial Recovery Act was also particularly unsuccessful and was later declared unconstitutional.
Lamping is usually carried out from a vehicle and therefore requires an area with vehicle access and open terrain so shots can be fired safely (MacDonald). Using dogs or beaters to flush out foxes is predominantly done in dense woodland, and the usual weapon is a shotgun; the recommendation with this method is that large dogs be used to avoid the chance of them entering the den (DEFRA, 2005b). The use of bows, crossbows and explosives is banned under the Wildlife and Countryside Act 1981. Shooting is not suitable for the control of urban foxes. Snares are also a popular method of rural fox control, favoured by gamekeepers (MacDonald). The Wildlife and Countryside Act 1981 dictates that neck snares must be free-running rather than self-locking, and they should be inspected once a day. Snares are placed on known fox runs at a specific height that should only entrap foxes; if other animals are caught by accident they can be released unharmed, as the snare is not lethal, while foxes that are caught are usually shot at close range (DEFRA, 2005b). Due to the fox's wary nature, snares are a good method of control, as they are not seen by the fox until it has been caught. DEFRA recently released a code of practice on snaring, supported by the Game Conservancy Trust (GCT website, 2005; DEFRA website, 2005). In some regions, particularly upland areas, snares are used in conjunction with 'middens' - a buried attractant, such as a rabbit carcass, surrounded by a fence that allows the fox access but prevents livestock entering; snares or traps are placed in the fenced area (MacDonald). Snares are not really suitable for catching urban foxes.
Trapping is considered a fairly ineffective method of control in rural areas; live-capture cage traps are used by some gamekeepers but do not account for many of the foxes caught (MacDonald). Leghold traps are banned under the Spring Traps Approval Order 1995, and the use of live baits and decoys is banned by the Wildlife and Countryside Act 1981. Again, foxes caught are usually dispatched. Traps can be used to catch urban foxes too, and some people prefer to see these foxes relocated rather than dispatched. Translocation of foxes is not a very good idea for several reasons. It simply moves the problem from the area rather than getting rid of it completely. Foxes are extremely territorial, and the removal of one fox will just make room for another (DEFRA, 2005a), though this is a problem with any form of fox control. Finally, an urban fox translocated to a rural setting would probably not survive very long: foxes are adaptable, but it would lack the immediate skills to survive, and it may have been placed in another fox's territory (DEFRA, 2005a), in which case it would probably be attacked. Before the Hunting Act was passed in 2004, banning the hunting of wild mammals with dogs from February 2005, there was great debate
The term servant was used to indicate an in-servant, whereas wage-earners living out and heading their own households were listed by occupation, such as labourer or thatcher. MacPherson argues that 'in each class of cases there is a simple reason, of logic or convenience, for the narrow usage' and that such cases can thus be treated as 'subordinate or exceptional'. Yet MacPherson does not consider that the Levellers and their opponents at Putney might both have understood that servants in this narrow sense - domestic in-servants, and not the whole body of wage-earners - were being discussed. Disregarding this, MacPherson concludes that as a 'general rule' the term servant meant all wage-earners. This, he asserts, is strengthened by the historical continuity of the terms master and servant. Where social and legal relations were being described, he argues, the most natural term for a wage-earner was servant, as it described one part of the master-servant relation, which had been in usage long before relations of a contractual nature were common. But if labourer was not appropriate due to its other possible meanings, why was servant, for it too, as MacPherson concedes, had other implications? In any case, why was 'some word needed' for all wage-earners? This period was characterised by the rapid increase of wage-earners, and it is very likely for there to have been a 'time lag between development of new social relationships and the invention of a vocabulary adequate to describe them' (MacPherson; Thomas, 'The Levellers and the Franchise', p. 72). The term servant therefore had a great number of connotations in the seventeenth century, but the Levellers, when they excluded servants, were more likely to have been describing the traditional meaning of the term, the domestic in-servant, than the emerging wage-earning class as a whole.
The Leveller programme contained proposals that increased 'the opportunities for apprentices and servants to become masters'. If they believed that servants were only under the temporary control of their lord, and could under Leveller reforms break free of these shackles, then it is 'easier to explain why their spokesmen apparently attached so little importance to the disenfranchisement of servants and took so little pains to justify what would otherwise have been in glaring contradiction to their very sweeping and emphatic assertions of the rights of every man to have a voice in choosing the government he lived under'. Merson, 'Problems of the English
The rules in the FSA's Conduct of Business sourcebook apply to persons authorised by the FSA to carry out designated investment business. The Code of Market Conduct/Market Abuse regime reflects the FSA's analysis. The Market Abuse Directive establishes common ground within the EU on rules on disclosure and on the modalities of market abuse - namely insider dealing and market manipulation - encompassing all traded instruments. It provides for the maintenance of insider lists and requires market players to report suspicious transactions to the authorities. This Directive provides general principles of authorisation and supervision by regulators so as to favour the supply of financial services within the EU as a whole. Amongst others, it sets up new standards for asset management. The introduction of the Capital Requirements Directive will mark a huge step forward in developing a modern capital framework that will improve the risk sensitivity of capital standards for firms across the EU. Considerable uncertainties remain, however, not least in the timing of its introduction. It should be noted that, despite apparent (arguably, intended) loopholes, the 2000 Act provides for an own-initiative power, which gives the FSA considerable discretion in its activity. Under Section 45(1) of the FSMA, the FSA may exercise its power over an authorised person not only where it judges that he is failing, or is likely to fail, to satisfy certain threshold conditions, but also, in a very broadly written clause, where "(c) it is desirable to exercise that power in order to protect the interests of consumers or potential consumers." It may go as far as varying permissions already granted to UK authorised firms following acquisition of their control by foreign firms, as well as acting under Section 47. Most significantly, it may assist overseas regulators in respect of an authorised person.
In that event, it must, while deciding whether or not to exercise", "label": 0 }, { "main_document": "for international traders), yet this is counteracted by rising domestic demand and opportunities for hotels to expand domestically (FXCM, 2006). The combination of controlled inflation, money supply and interest rates align consumer expenditure power and price stability (Bank of Canada, 2007), hence attracting lasting business in a steady economy. Adding to corporate tax reduction, the ease of starting a business and lack of wage and price controls (Appendix 1) encourage hotel development. Unemployment has remained relatively unchanged and the increase in jobs in the accommodation/food services industries (Statistics Canada, 2007c) has proven one of the main contributors to Canada's record high in employment. Considerable increases are seen in the same sector in the US (Appendix 1). Population growth rates in both Canada and the US have declined since the 1950s with relatively low annual averages of about or less than 1% between 2001 to 2006 (Country Profile, 2006). However, against this backdrop, birth rates in ethnic minority communities are high, thus showing the region's cultural diversity. The significantly aging population (Appendix 1) is caused by rising numbers of those aged over 65, and longer life expectancy (Country Profile, 2006). Younger generations (Gen-X and Gen-Y) (Appendix 1) are also exerting their influence on patterns of change, shaping social trends, consumer demands and levels of affluence (ISHC, 2007) as their fast paced lifestyles (Orbitz, 2007) may reveal the shortening of business and leisure travels but the need for more luxurious hotel experiences. Both trends affect the availability of human resources, concept development, design, the influence and application of technology and the characteristics of the guest experience. 
Most of the population in this region is concentrated in urban areas in the eastern US and adjacent parts of Ontario and Qu. However, the recent economic boost in western Canada and the south-western US indicates a regional shift in population to the sparsely or moderately populated west and south, affecting the distribution and location of hotel development. Both countries have high living standards as well as highly developed education and health systems, which prompt the population to have high career expectations (UN, 2006). Rapid technological development and the expansion of e-commerce have significantly changed lifestyles in North America, with online shopping and booking becoming one of the most popular means of purchasing (Country Commerce, 2006). North America has consistently maintained the leading role in the field of technological progress (Eule, 2006), as both government and industry put great effort into various types of research and development (Appendix 1). Compared with the US, Canada allocates less of its budget to technological development but places more emphasis on environmental concerns and energy efficiency. The US also actively takes part in technological development and its energy-saving program, encouraging increasing partnerships with the hotel sector (Appendix 1) (Energy Star, 2005). Given the close relationship between the two countries, the sharing of intelligence and technology between them is positive and prompt, greatly benefiting research and development (Wilsoncenter, 2007), establishing the region as a hub for technology innovation and contributing to the hotel sector's advancement. The Internet has changed the hotel sector's
This gap is not supported in the literature but, as a self-assessment by managers, it is interesting to investigate and can show how far managers are aware of gap 4, i.e. of the quality they are delivering. Clearly, managers are aware of the reliability dimension (perhaps for reasons other than the tenants') but do not spot the gap in the tangibles, where they believe they are delivering slightly more than expected (see table 4). Given that the SFAS managers conduct a survey every academic year, they could have been directed to the reliability gap by the tenants' answers. In summary, the gap analysis shows the following areas for improvement: - The telecommunications factor seems to be the foremost point to focus on. It is unclear, however, which item of the factor is causing the gap: internet, intercom, TV, telephone or another unlisted variable. - The security perception of the houses is second in priority. Yet, looking at customers' comments (see Appendix 6), none identified any telecommunications item as a bad experience. Security, however, was mentioned, pointing at "aggressive swans, geese and ducks in the spring for the children". Regarding security, only one comment addressed the "alarm which is spread in all blocks of the area even though it only concerns one building". This shows that gap analysis alone is not enough to uncover the cause of customer dissatisfaction and needs to be combined with, for example, Service Transaction Analysis. As with any methodology, the one used to assess customer satisfaction with the SFAS is subject to some biases: - We chose not to use the reliability coefficients (alpha) to assess the correctness of the test structure as suggested by Cronbach (1951). Instead, we assessed the questions qualitatively to ensure they were easy to understand and covered the five dimensions of SERVQUAL. - The dimensions were not weighted to understand their relative importance. 
This was deliberate, to decrease the burden of filling in the forms and to obtain an acceptable number of respondents (17 out of 40 questionnaires). The measurement of service quality would probably be more accurate with weighted importance of dimensions, with areas of improvement being factors of high importance and low performance. - The sample is not large enough to create more segments than we have used in this study. It would have been interesting to consider segments such as staff, visiting academics or students. Another segmentation could have been single/couple/family, or 1/2/3/4-bedroom property. - All gaps have been aggregated by using means of answers or groups. However, the answers are rather homogeneous within any item: 90% or more of the answers are within one point of the mean, or the highest and lowest values differ by at most 2 points. - We noted that there were two very similar questions in the perceptions list, which were (differences shown in bold): Systematically, respondents rated the second question 1 point lower than the", "label": 0 }, { "main_document": "used to assign value to the three records: In other words, my array would in fact contain the information like this: At some points in the program, the user will be asked questions (e.g. 'Would you like to add a product to your order?') and the system will expect him to answer using 'y' for yes and 'n' for no. In order to know what the user wants to do, the program will use a char variable (which can hold only one character) to hold the answer. It will then read the variable to see what the user's answer is. If its value is 'y', it means the user's answer to the question is yes. On the other hand, if it is 'n', it means that his answer is no. 
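The yes/no handling described above can be sketched as follows. This is a minimal illustration only; the original essay does not show its actual code, so the language (Python here), the function name and the return convention are all assumptions:

```python
def interpret_answer(answer: str):
    """Map a single-character reply to a boolean.

    'y' means yes (True), 'n' means no (False); anything else
    returns None so the caller knows to ask the question again.
    """
    answer = answer.strip().lower()
    if answer == 'y':
        return True
    if answer == 'n':
        return False
    return None
```

A calling loop would keep prompting until interpret_answer returns a value other than None, mirroring the step in which the program reads the variable to see what the user's answer is.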
Once the user has entered the three products, and assuming that he wants to place an order and add a product to it, he will have to enter the name of the product he wants to add, so that its details can be shown, and then input the quantity he wants to order. The first thing that will need to be checked is the name of the product entered. If the product does indeed exist, the program should continue and show the details of the product. Otherwise, an error message should let the user know that this product doesn't exist, and the program should loop until a correct name is entered. To do so, the value entered has to be compared with all the record names. Here is an example of how it could be done: There are two steps involved in displaying the product details. First, the program has to find which product has to be displayed (this will be done in the 'if statement' shown above). Secondly, the program will display the details on the screen using a simple 'write' statement. Note: As you can see in the box, the code to display the product details, to update the stock value and to calculate the cost is there in only 3 lines. This is because we will call a procedure at that point to make the program more structured. During the order process, the user will have to enter the quantity of products that he wants to order. This will have several consequences: The program has to check that the quantity entered is valid. It must be neither higher than the quantity available in stock nor below 0. The total cost of the order will have to be updated with the cost of the quantity of products added to the order. To do so, the cost of the product multiplied by the quantity ordered has to be added to the total cost. The available stock of the product concerned also has to be updated, as the user has ordered a certain quantity of it. To update it, the quantity of products ordered will need to be subtracted from the current stock quantity. 
To describe a program, as