| title | keywords | messages |
|---|---|---|
Organisational performance improvement through quality award process participation
|
[
"Quality awards",
"Organizational performance",
"Self assessment",
"Total quality management",
"Sweden"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Values, methodologies and tools of TQM\nMethodology: A multiple case study (see Yin, 1994) was chosen as the most appropriate research strategy. Input regarding case selection came from Eriksson (2004), who identified 46 organisations that had applied for the Swedish Quality Award between 1998 and 2002. The organisations in the present study were selected from these 46 applicants. The following criteria were used during the case selection:\n* The organisations should uphold systematic improvement work.\n* The organisations should have used the SIQ Model for Performance Excellence in the application to the Swedish Quality Award.\n* The organisations should not have gone through any major organisational changes subsequent to their latest award process participation.\nOne of the aims of this study is to transfer \"best practice\" regarding the improvement work that should follow participation in a quality award process to other organisations. Therefore, the examined organisations must be successful and systematic in their work with improvements. The second criterion was chosen because the study aims at analysing core values. Since the core values differ slightly between the SIQ Model, the EFQM Model and the MBNQA Model, it would have been hard to make a comparison between organisations using different models. Most organisations participating in the Swedish Quality Award process have used the SIQ Model (see Eriksson, 2004). Therefore, the criterion that the included organisations should have used the SIQ Model was established. 
The third criterion was chosen because effects on organisational performance due to participation in a quality award process would be difficult to isolate if major organisational changes had also been made. The authors found that three organisations conformed to the criteria above, namely: Sydkraft Vattenkraft, Agria Djurforsakring, and Kronans Droghandel. Data were collected through interviews, document studies and direct observations during site visits. The main reason for using many sources of data was to increase the validity of the study. To obtain a broad picture of the quality award process, and thus increase the sensitivity of the study, two employees with different levels of responsibility were interviewed in each organisation. One of the informants selected in each organisation had overall responsibility for the organisation's participation in the quality award process. The other informant selected in each organisation had not had any overall responsibility for the application, but had instead been operationally involved in the quality award process. Interviewing a third person who had not been involved in the quality award process was also considered as a way to increase the validity of the study. However, this idea was rejected because such a person would have had difficulty understanding the terminology used in the quality award process (see Eriksson, 2003). The interviews were focused (see Yin, 1994), and followed a certain set of questions and procedures. The actual interviews were performed during March 2003, and all the interviews were recorded. All questions and answers have been documented by Eriksson and Palmberg (2003). To further strengthen the validity of the present study, an internal test was executed in which colleagues of the authors and a project group commented on the structure and the questions that were going to be asked during the interviews. 
The quality award process was explained to the informants before the interviews started, in order to reduce possible misunderstandings. Furthermore, after a fair copy of the interviews had been made, the informants were allowed to comment on their answers in order to eliminate misinterpretation before the answers were documented. Direct observations were made during one day in each of the three examined organisations. The observations could be classified as fairly informal (see Yin, 1994), and provided additional information about the selected cases. Document studies were also made in the three examined organisations. Documents requested from each organisation included:\n* the latest application to the Swedish Quality Award;\n* the latest examiner feedback report;\n* annual reports of the organisation; and\n* documents that describe improvement work performed in the organisation.\nDocuments that described the improvement work that had followed the quality award process participation were difficult to collect, because the organisations stated that they did not have such explicit documents. However, other documents that described activities intended to improve organisational performance were examined. The informants were asked to describe the activities initiated on the basis of the quality award process participation in order to improve performance. Some of these activities could be labelled methodologies, according to the definition described above and in Hellsten and Klefsjo (2000). Also, some tools that supported the methodologies were identified. Furthermore, the activities were categorised according to the 13 core values of the SIQ Model for Performance Excellence. These data are presented in the comparative analysis. 
One may argue that these 13 core values do not constitute the base of TQM (see, for example, Table I and Sila and Ebrahimpour, 2002). However, because the organisations had worked with and were familiar with the 13 core values of the SIQ Model, these were used during the interviews and the following analysis. In addition, direct observations, the organisations' applications to the Swedish Quality Award, and the examiners' feedback reports were used in the comparative analysis to describe how the different organisations had worked.\nCase descriptions: Sydkraft Vattenkraft\nComparative analysis: Customer orientation\nDiscussions: On the whole, the organisations have initiated many activities, both systematic and integrated, based on their participation in the quality award process, which are intended to improve performance. In general, the remaining improvement potential for the organisations was found in the results area. Results were presented in the applications, but according to the examiners the organisations needed to improve their measurements and the analysis of trends. Furthermore, comparisons with leading organisations and competitors were lacking. None of the studied organisations claimed that they had been able to estimate, in monetary terms, how much they had earned or saved due to the activities initiated. Table II shows which core values were considered by the informants to have been most improved due to the participation in the quality award process and the activities performed. Furthermore, the results of the comparative analysis, with regard to which core values have been most improved due to the participation in the quality award process, are presented in Table II. The core values of customer orientation, committed leadership, participation by everyone, process orientation and continuous improvement were considered to be most strengthened due to the activities performed at the organisations. 
The organisations have developed different methodologies and tools in order to strengthen these five core values. Some of the other studied core values have also been strengthened, but not to the same extent as those mentioned. The results of this study indicate that use of the SIQ Model strengthens certain core values more than others. This is in accordance with the findings of Eriksson (2004), who also found that the five previously mentioned core values had been most improved as a result of organisations having participated in quality award processes. In contrast to the findings of the present study, management by facts was also considered to have been strengthened by a large number of the informants studied in Eriksson (2004). This difference between the studies might be due to the fact that the methodology used in this study was not able to show a great impact on management by facts, or that the organisations in this study are already relatively good at management by facts. The six mentioned core values are considered by some authors to be the essence of TQM today (see, for example, Bergman and Klefsjo, 2003). Hence, one could question whether the SIQ Model for Performance Excellence really supports the other seven core values in the SIQ Model in a sufficient way. An interesting discussion would be whether one should reduce the number of core values in the SIQ Model for Performance Excellence, so that it better fits the theory and the real experience of organisations participating in the quality award process. A reduction of the number of core values in the SIQ Model for Performance Excellence might also make the model easier to comprehend for the organisations using it. 
As a result, the organisations might not experience the SIQ Model for Performance Excellence as elaborate and difficult (see Eriksson, 2004). The two informants at Sydkraft seemed to have similar views on which core values had primarily been strengthened due to the participation in the quality award process and the activities performed (see Table II). The informants at the other two organisations, however, had internally different views on which core values had been improved. The informants with overall responsibility in the organisation for the quality award process all indicated that committed leadership was one of the core values that had been most strengthened. This was not the case for the informants with no overall responsibility. The answers of the informants seemed to differ systematically depending on the role of the interviewee. As reflected in Table II, there were some differences between the studied cases regarding which values had been affected by the quality award process participation. All organisations had experienced improvements in the areas of customer orientation, committed leadership, process orientation and continuous improvement. At Sydkraft and Agria, participation by everyone had also been significantly strengthened. These two organisations have involved almost all their employees in the quality award process, and also, to a large extent, in their business planning. This was not the case at KD. However, KD is a much larger organisation in terms of the number of employees, and hence has a more difficult task in involving all its employees. This could be an explanation of the observed differences. In general, the informants were positive about their organisation's participation in the Swedish Quality Award process, and they also recommended that other organisations participate in an award process. 
As one informant stated, \"It is much better to perform a systematic improvement program with a structured model that covers all aspects of the business, than to have ad hoc and unsystematic improvement work\". The informants also claimed that it is important to have a long-range perspective in order to take full advantage of the award process. They argued that the use of the SIQ Model had been important for their success, but some of the informants questioned the frequency of quality award participation. All organisations also complained that participation in a quality award process had been very resource demanding. Furthermore, Ghobadian and Woo (1994) showed that the major disadvantage of self-assessment is the resource-demanding and time-consuming work involved. The authors' conclusion is that participating in a quality award process every year could be too intense for organisations. For that reason, an application every second year could be more beneficial. By extending the time between applications, the organisations get more time to complete improvement activities initiated as a result of the evaluations. It is also clear that one needs to participate once in the process in order to become familiar with the model and the method of working. This would suggest that a second application could give a more beneficial outcome than the first. Simpson et al. (1998) also argue that it is widely accepted that subsequent self-assessments are more successful than the first attempt. If the studied organisations were to participate in a quality award process once again, they would do it somewhat differently. The informants at Sydkraft argue, for example, that they would train more employees in the SIQ Model. KD would also involve more employees in the award process. 
In the planning phase, Sydkraft argues that one could learn more from others, instead of participating in a quality award process directly. The value of TQM, realised through participating in a quality award process, seems to be beneficial according to this study. This is in line with, for example, the findings of Finn and Porter (1994) and Eriksson et al. (2003). As in earlier studies of self-assessment, the major benefits of participating in a quality award process were found to be a greater focus on improvement work (see Finn and Porter, 1994; van der Wiele et al., 1996), processes (see Gadd, 1995), and customers (see Brown and van der Wiele, 1996; Eriksson, 2004). In addition, participation in a quality award process is also perceived to have an impact on committed leadership and participation by everyone. In contrast to Loomba and Johannessen (1997) and Conti (2001), who argue that participating in a quality award process can divert organisations from the mission of continuous improvement, the organisations in this study show how continuous improvement can be supported by participation in a quality award process. Furthermore, one of the main benefits of participation in a quality award process for the organisations in this study is a strengthening of the core value of continuous improvement. In summary, critical success factors identified in this study of participating in a quality award process, which might be helpful for practitioners, include the involvement and empowerment of employees in the process. Furthermore, the leaders of the organisations must show strong long-term commitment to supporting the values of customer orientation, process orientation and continuous improvement. 
These recommendations are in alignment with the recommendations on how to successfully implement TQM and perform a successful organisational change program (see Grieves, 2000; Hansson, 2003). An interesting area of further research would be to compare different methodologies for performance improvement with each other. It is clear that participation in a quality award process generates improvement suggestions. However, the organisations do not always have the resources to perform the necessary activities. Organisations participating in a quality award process also have difficulties in translating the improvements into monetary units. However, organisations working with six sigma (see, for example, Magnusson et al., 2003) claim that the improvements performed have a tremendous economic impact. An interesting issue to study is whether introducing a six sigma programme generates more improvements and has a greater impact on performance than participation in a quality award process.\nConclusions: The three studied organisations were selected partly due to their systematic improvement work. Examples of the application of methodologies and tools to improve organisational performance are provided in a number of areas. Specifically, the areas of customer orientation, process orientation, improvement work, committed leadership and participation by everyone have been strengthened through activities initiated on the basis of the participation in the quality award process. The studied organisations may serve as good examples of how to perform an organisational change. In particular, the studied organisations have been successful in developing and communicating their vision, thereby empowering their employees. Like the studied organisations, other organisations considering participating in a quality award process need to have strong long-term commitment. Participating in a quality award process only once seems to be an ineffective use of resources. 
The first time one participates in an award process, one mainly learns the craft. Often, benefits cannot be measured until the second participation. It is also important to allow enough time between the applications in order to complete as many as possible of the improvement projects. It is the authors' intention that these guidelines might be helpful for organisations considering participation in a quality award process.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of the study is to describe the activities initiated on the basis of participation in a quality award process with the intention of improving performance.\nDesign/methodology/approach: - This study presents a case study of three organisations that have participated in the Swedish Quality Award process. The cases were selected in order to clarify how this award process could be used to improve organisational performance.\nFindings: - Several examples of activities to improve organisational performance are provided. Specifically, the areas of customer orientation, process orientation, continuous improvement, committed leadership and participation by everyone have been improved due to the initiated activities. Furthermore, the studied organisations have been successful in their development and communication of visions, and in their empowerment of employees.\nResearch limitations/implications: - An interesting area of further research would be to compare different methodologies for performance improvement with one another.\nPractical implications: - Findings from the case studies, of importance for organisations applying for quality awards, indicate that, if the goal is to achieve lasting results, it is not sufficient to participate in a quality award process only once. Instead, one should participate in the process several times, with enough time between the applications to complete as many as possible of the improvement projects resulting from the evaluations.\nOriginality/value: - It is the authors' intention that the guidelines presented in the paper might be helpful for organisations considering participation in a quality award process.\n",
"role": "assistant"
}
] |
The measurement and dimensionality of e-learning blog satisfaction: Two-stage development and validation
|
[
"E-learning",
"Measurement",
"User satisfaction",
"Blog-based learning"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Together with the rapid development of Web 2.0 technology over the last decade, blogs have been recognized as a potential transformational technology for teaching and learning (Williams and Jacobs, 2004). A blog (a contraction of the term weblog) is a type of web site that is usually maintained by an individual; users typically post updates consisting of commentary, descriptions of events, or other material such as graphics or video. The activity of updating a blog is known as \"blogging\" and users who keep blogs are \"bloggers.\" Prior studies suggest that blogging is a useful practice for the development of higher order learning skills; active, learner-centered pedagogies; authentic learning; associative thinking; and interactive learning communities (O'Donnell, 2006; Farmer, 2006; Farmer et al., 2008). Using blogs, a community of student bloggers can create an interactive social learning environment where they can learn from others' ideas, share knowledge resources, and compare/compete with each other's entries (Du and Wagner, 2007).\n2. Theoretical background: 2.1 Traditional web-based learning vs blog-based learning\n3. Research methodology: 3.1 Generation of scale items\n4. Scale development: 4.1 Item analysis and reliability estimates\n5. Scale validation: The advantages of applying confirmatory factor analysis (CFA), as compared to classical approaches such as common factor analysis or multi-trait multi-method analysis, to determine construct validity are widely recognized (Anderson and Gerbing, 1988; MacCallum, 1986; Marsh and Hocevar, 1985; Segars and Grover, 1993). Using the validation sample (n=226), this study attempted to establish strong construct validity of the ELBS measure, including its convergent validity and discriminant validity, which can be directly tested by a higher order CFA (Marsh and Hocevar, 1988).\n6. 
Implications for research: This study suggests that blog-based learning systems differ significantly from traditional asynchronous e-learning systems, thereby necessitating the development of a new scale for measuring learner satisfaction in the context of blog-based learning. In turn, this study conceptualizes an ELBS construct and develops a generic scale with desirable psychometric properties to measure the construct. The validated 20-item ELBS instrument consists of five factors: content quality, context and linkage quality, learner interface, learning interactivity, and personal learning log. The ELBS construct in the blog-based learning environment was also found to be somewhat different from the e-learner satisfaction construct in the context of asynchronous e-learning systems (Wang, 2003), as well as the end-user computing satisfaction construct in the context of end-user computing (Doll and Torkzadeh, 1988). In fact, the proposed ELBS can be characterized as containing three distinct components:\n7. Implications for practice: Through the above analysis, a five-factor, 20-item instrument with good psychometric properties for measuring ELBS was developed, and the instrument's factor structure was also validated through exploratory and confirmatory factor analyses. The results provide several implications for the development and management of blog-based learning systems. Based on Wang and Tang's (2012) e-learning blog success model (see Figure 1), student satisfaction is an important antecedent of e-learning blog use and learning effectiveness. More specifically, given that the usage of blog-based learning systems is quasi-voluntary, student satisfaction is also a critical determinant of the behavioral intention to use the systems, which, in turn, affects students' e-learning effectiveness. 
Thus, the concise ELBS instrument with good reliability and validity provides an effective tool for educators and blog-based learning systems developers to understand users' ELBS periodically and take corrective actions for improvement of system satisfaction, usage, and effectiveness.\n8. Limitations: While this study developed and validated the generic ELBS instrument using a rigorous procedure, this research still has some limitations that could be addressed in the future. First, the language used in the stages of item generation, personal interviews and data collection was traditional Chinese. However, the final form of the ELBS instrument was expressed in English. Cultural and language differences are important concerns in the study of scale development. Future research could validate the proposed instrument in different languages.\n9. Conclusions: A primary contribution of this work was to have started a stream of work to develop a generic instrument for measuring the success of blog-based learning systems given that user satisfaction is a critical success surrogate of information systems/e-learning systems (DeLone and McLean, 2003; Wang, 2003). The development and validation of an ELBS instrument also represent an important step in the development of the theories concerning blog-based learning and its usage. Based on previous literature, this study conceptually defined the domain of an ELBS construct, operationally designed the initial ELBS item list and empirically validated the generic ELBS instrument. The results reveal that the proposed ELBS instrument has desirable psychometric properties, which is of value not only to educators and systems developers responsible for the implementation and utilization of blog-based learning systems, but also to researchers interested in developing theories concerning blog-based learning. The final instrument indicates adequate reliability and validity across a variety of blog-based learning systems. 
The generality of this proposed instrument provides a common framework for the comparative analysis of results from various studies.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to develop and validate a multi-dimensional instrument to measure e-learning blog satisfaction (ELBS).\nDesign/methodology/approach: This study introduces and defines the construct of ELBS, provides an empirical validation of the construct and its underlying dimensionality, develops a generic ELBS instrument with desirable psychometric properties, and investigates the instrument's theoretical and practical applications.\nFindings: After analyzing data from a calibration sample (n=238) and a validation sample (n=226), this study proposes a five-factor, 20-item ELBS instrument.\nPractical implications: This empirically validated instrument will be useful to researchers in terms of developing and testing blog-based learning theories, as well as to educators in terms of understanding students' ELBS and promoting the use of blog-based learning systems.\nOriginality/value: This study is a pioneering effort to develop and validate a multi-dimensional instrument to measure ELBS.\n",
"role": "assistant"
}
] |
Designations and consumer perceptions: An experimental study and implications for agricultural policy
|
[
"Consumer behaviour",
"Food policy",
"Generic names",
"Olive oil"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In recent years, the consumption of healthy and quality food has increased. Several previous studies demonstrate growing consumer concern regarding these two characteristics in making decisions about food consumption (Fotopoulos and Krystallis, 2003; Mili, 2005; Grunert, 2005; Rohr et al., 2005; Trienekens and Zuurbier, 2007). Consumers choose food based on its perceived value, which includes health and quality features (Shih-Tse Wang, 2010). These features even outrank price in the decision-making process of whether to purchase a food product (Feng et al., 2012). Furthermore, previous research has shown that consumers consider the factors to be closely related (Rijswijk and Frewer, 2008). The increase in demand for higher quality products has also meant an increase in the variety of food that can be found on sale; in turn, marketing strategies are developed toward a greater diffusion of the distinctive attributes of these new categories (Caswell and Mojduszka, 1996).\n2. Objectives and hypothesis: Our review of the previous research shows that the level of consumer ignorance about the categories of olive oil is high. This ignorance may create confusion among consumers who rely on their association of olive oil with quality and do not differentiate among different kinds of olive oil. Terms used to refer to different varieties of olive oil have different linguistic connotations (e.g. higher quality vs lower quality). However, if consumers do not associate the terms with the proper variety of olive oil, they may consider an olive oil to be a lower-quality one, or vice versa, creating confusion in their perceptions and affecting their consumption decisions. Thus, our study goes one step further by analyzing the degree of association that some terms have in relation to olive oils. 
Our goal is to create proposals based on the perceptions of consumers, which could be used to support agricultural policy, to clarify the sector and to improve the marketing of olive oil. We conduct an experiment in which we test how consumers perceive olive oil terms, other terms that have been proposed in the literature, and additional terms that we include.\n3. Experiment: In this section, we detail the sample, the instruments and the experimental procedure.\n4. Results: For the sake of clarity and brevity, we present our results in the following two sections: those related to the initial perceptions of the participants and those occurring after the intervention.\n5. Conclusions: Based on our results, we reach the following conclusions.\n6. Proposals and recommendations: In this paper we express our disagreement with the oleic policies regarding the designations of olive oils, because we believe these policies do not facilitate consumer decision making. For this reason, we present a series of proposals and recommendations endorsed by our results. These are merely indicative and for the sole purpose of calling attention to the need for change. Parras (2000) introduced several proposals intended to change existing names to new ones, e.g., the name \"olive oil\" to \"regular olive oil.\" Based on a recent change in legislation, \"olive oil\" has been renamed \"olive oil-contains only refined olive oils and virgin olive oils.\" Although we understand that this name change was made with positive intentions, the result was not as expected. Confusingly, the term \"refined\" occurs within this designation. As we analyze in our study, this term generates consumer connotations that change when its real meaning is explained in the context of olive oil. We therefore believe that the proper term should be \"rectified,\" which has connotations closer to the term's actual meaning.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to analyze the labeling and denominations of olive oils and to examine to what extent these factors confuse consumers. Specifically, the authors analyze the extent to which the different denominations of olive oil prevent consumers from distinguishing among the types of olive oil. Furthermore, the authors analyze whether the current generic names of olive oil affect consumer perceptions regarding the product's various qualities and characteristics.\nDesign/methodology/approach: The authors performed an experiment with 128 participants divided into two groups (experimental n=64 and control n=64). In the pretest, participants completed a survey with 12 terms related to olive oil. The experimental group was then trained in the meaning of each term, after which the group returned to complete the same survey.\nFindings: The authors can confirm H1 and H2. Results show that there is no clear knowledge regarding some of these terms.\nSocial implications: This study provides positive implications for both consumers, by providing them with clear information, and producers and marketers, by helping to distinguish higher-quality olive oils in the market.\nOriginality/value: This paper is a pioneer in the literature. The authors provide a number of proposals and amendments regarding olive oil names to improve the knowledge and clarity of olive oil, with direct implications for agricultural policy.\n",
"role": "assistant"
}
] |
Undeclared work: a new source of employment?
|
[
"Working practices",
"Employment",
"Public policy",
"Belgium",
"Germany"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Undeclared work, in all the ambiguity of its connotations, has come to constitute a structural feature of European societies. In recent years, a wide array of colourful names has been used to describe this phenomenon, such as the informal, subterranean, underground, twilight, hidden, unobserved, or illicit economy. When it comes to defining undeclared work, however, there exists a strong consensus. Here, and following a study for the European Commission (Renooy et al., 2004), undeclared work is defined as productive activities that are lawful as regards their nature, but are not declared to the public authorities, taking into account the differences in the regulatory system between Member States. For some years now, there has been a growth in the attention being paid to this part of the economy in western European countries and beyond (Baculo, 2006; Harney, 2006; Jones et al., 2006; Marlow, 2006; Trinci, 2006; Williams, 2003; Williams and Windebank, 2004). One of the principal reasons for this growth in interest has been that governments have recognised the possibility that new jobs might be created if undeclared work can be transformed into regular employment. It is a fairly undisputed view in western European countries that a major reason for engaging in undeclared labour is the tax and social security burden that forces a whole employment segment into this part of the economy. Employment, for example, in personal services, household services and unskilled activities is considered to be too costly to be executed legally on the formal labour market. Social security contributions in particular are seen as driving labour costs beyond productivity, thus rendering legal employment in these areas uneconomical. A high unemployment rate for unskilled workers and growing demand for the provision of the above-mentioned services in the informal economy are the result. 
Fighting unemployment by reducing labour costs therefore creates legal employment in areas that were hitherto dominated by undeclared labour. To tackle this problem, several countries have introduced policy reforms. The aim of this paper is to focus upon some of the more innovative projects that are being pursued in western Europe that seek to provide positive incentives and new institutional arrangements to enable undeclared work to be transformed into legitimate employment. What is so important about these initiatives is that they are pioneering a shift away from the conventional deterrence approach towards undeclared work that sought to eradicate such work by punishing those caught (see Williams, 2004, 2006; Williams and Windebank, 1998). Instead, they are seeking to provide \"carrots\" rather than \"sticks\" to encourage a transformation of undeclared work into formal employment. To review these more positive initiatives that have so far received little attention in the English-speaking literature, attention here focuses upon two particular initiatives taking place in Belgium and Germany that have developed new institutional arrangements to undermine the reasons for engaging in undeclared work. Similar initiatives had been taken in France and Denmark as well in the early 1990s (see Williams and Windebank, 2006; Windebank, 2004, 2006).\nBelgium service vouchers: Service vouchers, according to the Belgian minister Vandenbroucke, should be seen as a huge social experiment, in which the latent demand for a whole range of goods and services can be transformed into paid employment, with thousands of people who are only weakly attached to the formal labour market provided with employment opportunities and a whole range of forms of undeclared work turned into normal work. Service vouchers are a means of paying for everyday personal services. The scheme itself is quite straightforward. 
A household can buy vouchers for a price of EUR6.70 which pays for an hour of work. With these vouchers, the household can then pay for services rendered by certified companies. These companies hire unemployed people. At first, the contracts offered to unemployed people can be rather small and flexible. After six months, however, the company has to offer a permanent employment contract for at least half-time employment if the person was formerly registered as officially unemployed. For people other than those formerly registered as officially unemployed, the rules are slightly different. An employee of a certified company can do the following activities:*housecleaning*washing and ironing*sewing*running errands*preparing mealsThe household pays for these services with the vouchers. The cost price of these vouchers is EUR21.00 in 2005 prices. The difference is paid to the company by the federal government. The households can recover 30 per cent of the price of the voucher in their tax return. So, for them the price of a voucher for 1h of work amounts to EUR4.69. The Belgian government intended to create 25,000 jobs by the end of 2005.According to an evaluation of this scheme (Gevers et al., 2005), by the end of 2005 some 28,933 people had been employed through this service voucher scheme. In that sense, the government's goal of 25,000 new jobs was realised. These employees were employed by 792 companies, of which 41 per cent were temporary employment agencies, one-quarter were private not-for-profit enterprises and 18 per cent were public companies. Most (90 per cent) of the employed were employed in part-time jobs, although most of them work more than half-time. The 28,933 people in jobs represent 17,360 full-time equivalent jobs, a 70 per cent growth on the number employed using this service vouchers scheme in 2004. However, 49 per cent of the employed already held a job before joining the service voucher scheme. 
The net effect on job creation, therefore, was some 9,000 jobs. By the end of 2005, two-thirds of the employees were employed on permanent contracts. During the almost two and a half years the scheme has been in operation, almost 200,000 households used the vouchers to pay for various different services. In 75 per cent of the cases, these were household tasks, in particular housecleaning. Reflecting the gender divisions of domestic labour in the broader society, it is perhaps unsurprising to find that 99 per cent of the resultant employees were women. Service vouchers: transforming undeclared into declared work?\nThe Hartz committee and mini jobs in Germany: In order to improve the functioning of the German labour market, the German government requested Peter Hartz, Volkswagen's personnel manager, to present a report with recommendations. The Hartz Committee reported in August 2002. In general terms, the German labour market was reported to be too inflexible to be able to react to current economic problems and during the course of 2003 many of the recommendations of the Hartz Committee were implemented (Hartz I-III). The latest Hartz reforms (Hartz IV) were implemented in 2005. Indeed, because of the direct relationship between the deficiencies and inflexibility of the formal labour market and the informal labour market, many of the Hartz proposals are directly relevant to the topic of formalising undeclared work. Here, therefore, the most relevant Hartz Committee recommendations in relation to undeclared work are briefly reviewed. These are three-fold:(1) To give an impulse to temporary employment, throughout Germany so-called Personal Service Agencies were created. The official Employment Offices have to associate themselves with private (for-profit) temporary agencies for that reason. 
When there is no private temporary agency in the region, the local Employment Office should start one itself;(2) Unemployed people who want to start a one-man business (Ich-AG, or in English, Me-Inc) or a family business (Familien AG), are entitled to a monthly subsidy for three years: year 1 - EUR600 (50 per cent of the average unemployment benefit in 2002); year 2 - EUR360 (30 per cent of the average unemployment benefit in 2002); and year 3 - EUR240 (20 per cent of the average unemployment benefit in 2002)(3) The (re)creation of mini jobs. Below, this is considered in a little more detail.Mini jobs\nConclusions: This paper has reviewed some of the more innovative initiatives taking place in Europe in order to facilitate the formalisation of undeclared work. Rather than pursue the eradication of undeclared work by increasing punishments and deterrents, these initiatives are seeking to introduce new institutional arrangements in order to facilitate the transformation of undeclared work into the formal economy.Evaluating the outcomes of these initiatives, this paper has revealed that these schemes in Belgium and Germany in which subsidies lower formal wage costs for domestic work are not a panacea so far as tackling undeclared work is concerned. Indeed, perhaps this is starting to be realised. In the fall of 2006, the Dutch government decided to bring to an end its scheme that previously enabled households to hire subsidised cleaners, namely Regeling Schoonmaakdiensten Particulieren (RSP). By 2007, it will be replaced by new tax rules on domestic services. These new rules enable households to hire domestic workers for cleaning, child care or gardening without the obligation to pay tax on wages or a premium for social security. With regard to the demand side this takes away the incentive to resort to undeclared work. The worker is expected to declare his or her income to the tax authority. 
But as the Ministry of Finance states on its website, it does not mean tax inspectors will check on every household, and there is no illusion that undeclared work will be abolished. The new rules will, however, make a serious contribution to creating a market for personal services. There remains little doubt, therefore, that the European Union has become a social laboratory for a wide range of innovative experiments in tackling undeclared work and that there is much that governments can learn from each other's experiments. Not all of these initiatives will be effective, but what is so important about what is happening in the European Union is that there is no longer a purely punitive deterrence approach being adopted towards undeclared work.\n",
"role": "user"
},
{
"content": "Purpose: - The aim of this paper is to focus upon some of the more innovative projects that are being pursued in Europe that move beyond the conventional punitive approach and seek to provide positive incentives and new institutional arrangements to enable undeclared work to be transformed into legitimate employment.\nDesign/methodology/approach: - This paper evaluates the service vouchers policy initiative in Belgium and the Harz Committee policy recommendations, especially mini jobs, in Germany.\nFindings: - The finding is that although significant numbers of declared jobs have been created by these schemes, sometimes at a relatively low cost compared with other options for creating new formal jobs, such initiatives have had a relatively minor impact on the overall level of undeclared work in these societies.\nOriginality/value: - This paper is one of the first to bring these innovative policy initiatives to the attention of an English-speaking audience.\n",
"role": "assistant"
}
] |
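As a quick sanity check on the voucher arithmetic quoted in the abstract above (a household pays EUR6.70 per voucher against a EUR21.00 cost price and recovers 30 per cent via the tax return), the figures can be reproduced in a few lines of Python. The constant and function names are illustrative, not taken from the paper:

```python
# Sketch of the Belgian service-voucher arithmetic (2005 prices quoted in the text).
VOUCHER_PRICE = 6.70      # what the household pays per hour of work (EUR)
COST_PRICE = 21.00        # full cost price of one voucher (EUR)
TAX_REFUND_RATE = 0.30    # share of the voucher price recoverable via the tax return

def household_net_price(price=VOUCHER_PRICE, refund=TAX_REFUND_RATE):
    """Effective price per hour for the household after the tax refund."""
    return round(price * (1 - refund), 2)

def government_subsidy(cost=COST_PRICE, price=VOUCHER_PRICE):
    """Amount the federal government pays the certified company per voucher."""
    return round(cost - price, 2)

print(household_net_price())   # 4.69 -- matching the EUR4.69 in the text
print(government_subsidy())    # 14.3 -- i.e. EUR14.30 topped up per voucher
```

The EUR4.69 net price in the abstract is thus simply 70 per cent of the EUR6.70 voucher price; the remaining EUR14.30 gap up to the cost price is the federal subsidy.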
Technology transfer between universities and companies: Two cases of Brazilian universities
|
[
"Innovation",
"Technology transfer",
"Case study",
"University–industry interaction",
"Brazilian universities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The interaction between universities and companies arises from the need of the productive sector to develop a new technology, product or process, or even when there is an adequately mature invention to be transferred from the university to the company, which is one of the ways interaction may occur (Sankat et al., 2007).\nTheoretical framework: University-industry interaction\nMethod: This research studies two cases. According to Yin (2013), studying more than one case validates the research and makes it more reliable; therefore, this is essential to have good results. For investigations of contemporary phenomena, case study is the most appropriate methodology, as opposed to how it was seen, as a methodology that was not strict and scientific enough (Yin, 2013).\nData presentation and analysis: Entrepreneurial university\nFinal remarks: Through this research, we were able to identify that there are weaknesses and strengths both in the university-industry interaction processes and in the TT processes of the studied cases. As shortcomings, we point out the bureaucracy, the lack of innovation and entrepreneurship culture and the university's lack of experience on working in collaborative research as well as the company's lack of experience on working with the university. As strengths of this relationship, there is the importance of combining theory with practice, achieved through collaborative research, the possibility of generating new technologies and the regional impact that these technologies may achieve. These findings are corroborated by other studies conducted in Brazil, which allows a view that is not restricted to the cases pointed out in this study (Cysne, 2005; Closs et al., 2012).\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to research how technology transfer occurs, based on the Schumpeterian approach to innovation trilogy focusing on the interaction between the university and the company.\nDesign/methodology/approach: The methodology used for this study was the analysis of two cases with an exploratory and qualitative approach. The case study subjects were two Brazilian universities: University of Campinas (UNICAMP) and University of Vale do Rio dos Sinos (UNISINOS). Semi-structured interviews were used as the data collection technique, whereas content analysis was used as the analysis technique.\nFindings: The main results showed the need of companies and universities to understand that working in collaborative technology research contributes to the transformation of applied research into technological innovations that can transform society.\nResearch limitations/implications: The research's limitations were the unfeasibility of studying the government helix, the lack of clear and established processes within universities so that a comparison between the cases would be possible and the lack of access to technology contracts, as they are considered confidential. In addition, the use of two cases is considered a limitation, as it is not possible to generalize the conclusions pointed out by the study.\nOriginality/value: With this research, the authors were able to conclude that the university-industry interaction process has been improving, but it still needs to advance in organizational aspects. Some of the aspects to be considered are the adjustments for the institutions' internal policies, the existing negotiations, the researchers' behavior regarding the dissemination of the innovation culture and the performance of the technological innovation centers, which gradually are being trained to work in the market as well as in the university. 
It is necessary that primarily companies and universities understand that they must join efforts in collaborative technological research, so that the financial resources invested are not only accepted as published articles in qualified journals but also turn into technological innovations accepted by the market. All this investment must return as new products, services and technologies that generate local, regional, national and even international impact, implementing new types of businesses and new markets and yielding an economic impact in the country, thus generating innovation and social well-being.\n",
"role": "assistant"
}
] |
Analysis of a customer satisfaction survey using Rough Sets theory: A manufacturing case in Taiwan
|
[
"Customer satisfaction",
"Manufacturing industries",
"Taiwan"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Analysis of customer survey data is a complex task. However, it is one of the most promising directions in customer-driven research. It has been recognized that customers are the most important assets for a business. Increasing customer satisfaction is essential to retain customers and maintain profitability for a firm. Therefore, most companies conduct satisfaction surveys annually to collect direct responses from their customers in order to capture this valuable information. The key to the success of any customer satisfaction survey is what happens after the data has been collected. Many companies spend a tremendous amount of time preparing the appropriate questionnaire, but less time on the analysis and application of these results. Moreover, companies that have been certified ISO 9001:2000 must now demonstrate how to measure customer satisfaction and improve their results.The purposes of analyzing customer satisfaction survey are not only to understand the performance about each attribute regarding the product, service, overall satisfaction and loyalty but also to capture the causal relationships between the impact of performance on the attribute of overall satisfaction and loyalty of customers. The question about cause-effect dependencies between attributes performance and overall satisfaction or loyalty is the most important in the analysis of customer satisfaction, since this causal relationships can provide valuable information for marketing managers to understand which product/service features have a greater proportional influence on overall satisfaction.Traditionally, customer satisfaction survey data were analyzed by using statistical methods such as multivariate regression analysis (Anton, 1996; Vavra, 1997). 
However, these techniques are based mainly on quite strong assumptions such as prior knowledge of independencies, numerical scale of attributes and uniform probability distributions among the independent attributes. In general, we are unable to derive such conclusions from the collected survey data because the scale of questions in questionnaires is usually measured on an ordinal or nominal scale. Several statistical and intelligence methods have been applied to customer satisfaction analysis, like conjoint analysis (Danaher, 1997), partial least squares (Ryan et al., 1999), targeted bootstrapping (Willson and Wragg, 2001) and the artificial neural network approach (Hackl and Weslund, 2000). In recent years, we have seen increased attention being given to satisfaction analysis in the intelligence methods literature, where a simple description of how to model the customer retention problem using the Rough Sets (RS) model was given (Kowalczyk, 1997). Van den Poel and Piasta (1998) proposed a method using rough classifiers for purchase prediction and discovery of knowledge on customer behavior patterns. The RS theory, introduced by Pawlak (1982), deals with the classificatory analysis of imprecise, uncertain or incomplete data by incorporating Classical Set theory. It has been successfully applied for data analysis in pattern recognition and information processing, business and finance, industry and environmental engineering, medical diagnosis and medical data analysis, system fault diagnosis and monitoring and intelligent control systems (Wu et al., 2004). This theory is particularly useful when these data cannot be analyzed easily using traditional statistical methods. We have found that RS is a novel approach to classification and diagnostic rule induction problems. The derived \"if ... then\" rules make it possible to define the causal relationship between decisional attributes and conditional ones (d'Amato, 2002). 
Therefore, diagnostic rules can provide the causal relationships between input conditions and related abnormalities (Lee et al., 2002). In considering the requirements of identifying latently dissatisfied customers and deriving the diagnostic rules from satisfaction surveys, the RS concept is a powerful tool for customer satisfaction analysis. Latently dissatisfied customers are defined as customers reporting overall satisfaction, but who hold typical characteristics of the unsatisfied customers. These customers have a high tendency to switch to other competitors. With the early warning signal, the percentage of future business loss can be predicted, and appropriate customer satisfaction actions can be taken for those latently dissatisfied customers to avoid possible business loss. The study presented in this paper is a novel approach that applies the RS theory to discover hidden information from satisfaction survey raw data. To demonstrate the usefulness of RS, we analyzed a dataset provided by a manufacturer of mechanical presses in Taiwan. Based on these results, marketing managers were able to have knowledge of the drivers of customer dissatisfaction as well as satisfaction. In the first part of this paper, we use the case problem as an example to explain the RS concept. In the second part, we analyze a real case dataset and discuss the empirical results derived from the empirical study. Finally, we discuss the results, limitations and managerial implications of our findings.\nRS concept in customer satisfaction analysis: This section gives an overview of the RS concepts, including the information table, indiscernibility of objects, approximations of sets, attribute reduction and decision rule induction. These concepts are explained by using the terminology in the studied topic. Those who are interested may refer to (Pawlak, 1991; Pawlak et al., 1995) for a more complete and theoretical understanding of the RS subjects and theories. 
Customer satisfaction surveys are used to elicit a product's quality, service and performance data by asking respondents a series of questions. In this study, the survey data concern a set {U} of customers described by a set {P} of questions that usually consist of product/service features or attributes, socio-demographic aspects, overall satisfaction judgment, loyalty in terms of repurchasing intention and willingness to recommend the product to others. The implications of a satisfaction survey can be considered as a multivariate evaluation problem, given that customers' implicit degree of global satisfaction and repurchasing intention depends on their perceptions of a set of variables that represent both dimensions of product function and service characteristic, i.e. the final perceptions of overall satisfaction and loyalty judgment presented by customers are determined by how customers feel about the performance of product or service features. Here, we refer to those features as condition attributes. Concept 1: information table\nAn empirical study: Data collection and coding\nConclusion: One goal of knowledge discovery is to extract meaningful information from raw data. Past and current customer satisfaction, together with loyalty behavior, are the best predictors of future customer retention. With a better understanding of customers' perceptions, a company can determine proper actions to meet the customers' needs. However, there can be ambiguity in the voice of customers, which entails a need to analyze large amounts of subjective satisfaction data effectively. In this study, the RS theory has been applied to find the relationships between perception of product and service attributes and overall satisfaction as well as loyalty behavior choices in a manufacturing case in Taiwan. We collected 422 records, and selected 20 conditional attributes and two decision attributes to represent the classes of overall satisfaction and loyalty. 
The RS approach produced the results of decision rules, CORE and reducts, which provided us with valuable information to classify attributes. Using only the core attributes, the quality of the classification reaches up to 0.7701. This implies that core attributes are well chosen to properly approximate the classification. Based on these decision rules, two attractive attributes (product's packing and price) and one must-be attribute (part supplement) of the Kano model can be identified effectively. From a practical point of view, we tackled the problem of estimating the percentage of latently dissatisfied customers and disloyal customers for customer forfeit estimation, i.e. estimating the proportion of customers who report overall satisfaction but who possess characteristics of dissatisfaction based on the boundary region of RS. The 14 per cent of latently dissatisfied customers and 10 per cent latently disloyal customers can be considered as an early warning signal, providing the company a chance to correct the problems before real customer forfeit occurs, i.e. necessary precautions should be taken if those latently dissatisfied and disloyal customers are considered valuable customers for the company. In this study, the important set of attributes that ensure high quality of classification and rules for each class of overall satisfaction and loyalty were presented to the company's sales and marketing managers, who believed that these rules provided them with potentially valuable information for making the service and pricing policy. The results generated in this paper open new avenues for customer satisfaction analysis. We should not restrict analysis of customer satisfaction survey data by only using conventional statistical methods. RS theory is an innovative tool to discover knowledge on customer behavior patterns. The limitation of these findings is that they are restricted to the case study of machinery products. 
However, this approach can be applied to the service sector by a well-designed service satisfaction questionnaire. Figure 1.\n",
"role": "user"
},
{
"content": "Purpose: - This paper seeks to present the use of Rough Sets (RS) theory as a processing method to improve the results in customer satisfaction survey applications.\nDesign/methodology/approach: - The research methodology is to apply an innovative tool to discover knowledge on customer behavior patterns instead of using conventional statistical methods. The RS theory was applied to discover the voice of customers in market research. The collected data contained 422 records. Each record included 20 condition attributes as well as two decision attributes. The important attributes that ensured high quality of classification were generated first. Then decision rules for classifying high and low overall satisfaction and loyalty categories were derived.\nFindings: - Three important facts were found: the important product and service attributes that lead to overall satisfaction and loyalty; the percentage of latently dissatisfied customers; and customer decision rules.\nResearch limitations/implications: - The study is limited by the case company and its experience. These rules were presented to the company's sales and marketing managers who believed that they provided them with valuable information for creating strategies to increase customer satisfaction and retention.\nOriginality/value: - RS theory provides a mathematical tool to discover patterns hidden in survey data. The paper describes a new attempt of applying a RS-based method to analyze overall customer satisfaction and loyalty behavior through regular satisfaction questionnaire surveys.\n",
"role": "assistant"
}
] |
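The rough-set concepts the abstract above relies on — indiscernibility classes over condition attributes, lower and upper approximations, the boundary region that yields "latently dissatisfied" customers, and the quality-of-classification measure — can be sketched on a toy survey table. The four-customer dataset and attribute names below are invented for illustration; this is a minimal sketch of Pawlak-style rough sets, not the paper's 422-record survey:

```python
from collections import defaultdict

def partition(universe, table, attrs):
    """Indiscernibility classes: group objects sharing values on attrs."""
    classes = defaultdict(set)
    for obj in universe:
        key = tuple(table[obj][a] for a in attrs)
        classes[key].add(obj)
    return list(classes.values())

def approximations(universe, table, attrs, target):
    """Lower/upper approximation of the target set; the boundary holds
    objects that the condition attributes cannot classify with certainty."""
    lower, upper = set(), set()
    for block in partition(universe, table, attrs):
        if block <= target:        # block entirely inside target
            lower |= block
        if block & target:         # block overlaps target
            upper |= block
    return lower, upper, upper - lower

def classification_quality(universe, table, attrs, target):
    """Share of objects classified with certainty (into target or its complement)."""
    lo_pos, _, _ = approximations(universe, table, attrs, target)
    lo_neg, _, _ = approximations(universe, table, attrs, universe - target)
    return (len(lo_pos) + len(lo_neg)) / len(universe)

# Toy survey: two condition attributes and one decision attribute (satisfaction).
table = {
    1: {"service": "good", "price": "fair", "satisfied": "yes"},
    2: {"service": "good", "price": "fair", "satisfied": "no"},   # indiscernible from 1
    3: {"service": "poor", "price": "high", "satisfied": "no"},
    4: {"service": "good", "price": "high", "satisfied": "yes"},
}
U = set(table)
satisfied = {o for o in U if table[o]["satisfied"] == "yes"}
lower, upper, boundary = approximations(U, table, ["service", "price"], satisfied)
print(lower, upper, boundary)
print(classification_quality(U, table, ["service", "price"], satisfied))  # 0.5 here
```

Here customer 1 reports satisfaction but is indiscernible from dissatisfied customer 2, so both land in the boundary region — the same mechanism the paper uses to flag its 14 per cent of latently dissatisfied customers, and the quality measure is the toy analogue of the paper's 0.7701.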
In their shoes: co-creating value from deaf/hearing perspectives
|
[
"Co-creation",
"Service encounter",
"Service model",
"Vulnerability",
"Deaf",
"Transformative value"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The fight for deaf individuals' equality has drastically improved over the last several decades; however, deaf discrimination is still an up-hill battle that deaf individuals face on an almost daily basis (Callis, 2015). Deafness is defined as an individual's inability to understand conversational speech through hearing alone (Bowe et al., 2005). As a consumer group, the deaf community is often stigmatized by the general population (Berbrier, 2002), as they often communicate in a language (e.g. American sign language or ASL) that has no readily understood written form. Additionally, deafness is an \"invisible disability\" as it is not always immediately apparent that an individual is deaf (Kaufman-Scarborough, 1998).\nValue co-creation: The concept of value co-creation has been explored from numerous perspectives within the marketing literature as the introduction of the service-dominant logic (Vargo and Lusch, 2004, 2008). Proponents of the service logic suggest that co-creation involves collaborative interaction among interdependent members within a given service system (Vargo et al., 2008), and define value in terms of system improvement. (Vargo and Lusch, 2008; Vargo et al., 2008). Co-creation has been explored in a variety of service contexts, such as service employee resource integration (Ple, 2016), customer effort (Sweeney et al., 2015) and even social media (Kao et al., 2016). In general, findings suggest that successful co-creation leads to greater outcomes due to the improved value generation and performance of the experience (Chan et al., 2010).\nMethods: Given the unexplored nature of this phenomenon, grounded theory was employed (Glaser and Strauss, 1967; Strauss and Corbin, 1998), allowing the researchers to develop a theoretical account of the service experience, grounded in qualitative observations and findings (Martin and Turner, 1986). 
Grounded theory is a useful technique to gain insight into a phenomenon in which theoretical development is currently lacking (Goulding, 2002) and has been used in the services marketing literature to understand a variety of phenomena (Keaveney, 1995; Beverland et al., 2010; Ringberg et al., 2007). Consistent with grounded theory, qualitative depth interviews were conducted, which allowed the researchers to probe deeper into the participants' responses, and gain a more detailed understanding of these service models (Weiss, 1994).\nStudy 1: Qualitative depth interviews were first conducted on-site with 12 service employees to understand their experience within this co-created service model. An overview of the interview participants can be found in Table I. A semi-structured interview format (see Appendix 1) was used, which allowed for greater rapport to be built with the respondent, and the ability to uncover richer data through follow-up questions (Seidman, 1998). Because deaf service employees relied on ASL for communication, two trained interpreters were hired for translation purposes. Each interview was audio-recorded, lasted between 30 and 60 min, and was then transcribed.\nStudy 2: As the foundation of this service model relies upon successful co-creation amongst the majority and minority groups, a second series of depth interviews was conducted with hearing customers of the restaurants, to more holistically understand this service model. Twelve hearing customers were interviewed following their experience at the deaf dining restaurant (see Appendix 2). Respondents were first-time patrons of the restaurant and were compensated in gift cards for their participation in the study. A complete listing of the interview participants can be found in Table III. Interviews lasted between 30 and 85 min, were digitally recorded and then transcribed. Data were again analyzed through Miles and Huberman's (1994) three-step coding process, as described in Study 1. 
Inter-rater agreement reached acceptable levels for the percentage of agreement across judges (0.94).\nDiscussion: In the co-creation experience, value is generated through personalized interactions that improve the service system. However, the basis of value may be unique for each party involved (Prahalad and Ramaswamy, 2004); therefore, firms using this type of innovative service model must understand what value is derived, not from an overall business perspective, but individually for both the deaf and hearing populations. The findings above highlight the importance of recognizing differences in the respective value derived for each stakeholder in the interaction. Consequently, this service model must balance both habitual and transformative value ideals to satisfy the wants and needs of both population segments. Furthermore, to overcome barriers and fully achieve transformative value for both hearing and deaf consumers, enhancing the well-being of one group must be accomplished while not fully disempowering the other.\n",
"role": "user"
},
{
"content": "Purpose: This research investigates new innovative service models that provide opportunities for hearing and deaf individuals to switch roles within a co-created service encounter to allow for an enhanced perspective-taking experience. The purpose of this paper is to gain an in-depth understanding of deaf individuals' experience working within such models using their preferred language, American sign language, to interact with a primarily hearing-majority customer base.\nDesign/methodology/approach: Data were collected for two studies through qualitative depth-interviews with both the deaf service employees and the hearing-majority customers from a North American restaurant founded on this innovative service model.\nFindings: Results of these studies yield new insights into understanding the value generated for both the minority and majority populations within this co-creation platform. Notably, the deaf employees primarily recognize the transformative value derived from this service experience, whereas the hearing customers note the missing habitual value elements to which they are accustomed in traditional service encounters that inhibit repatronage intentions.\nOriginality/value: This is the first study to investigate the interpersonal service experience of deaf and hearing individuals within these emerging service models. Further, this research represents an initial attempt to explore a co-creative service experience between two different cultures, the deaf-minority and hearing-majority populations, with differing levels of ability.\n",
"role": "assistant"
}
] |
Regional effects on customer satisfaction with restaurants
|
[
"China",
"Urban regions",
"Rural regions",
"Customer satisfaction",
"Restaurants",
"Consumer behaviour",
"Regional factors",
"Moderating effects",
"Online customer reviews",
"Hierarchical linear models"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: It is widely recognized that customer satisfaction is an essential objective of corporate marketing strategies. Such a recognition is due to the high levels of customer satisfaction can lead to customer loyalty (Ryu et al., 2012; Suh and Yi, 2006), trust (Jani and Han, 2011), positive word-of-mouth (Pantelidis, 2010), repeating sales (Alegre and Cladera, 2009; Tsai and Huang, 2007), increases in future revenue (Bolton, 1998) and stock prices (Fornell et al., 2006), and a reduction in customer complaints (Choi et al., 2008) and perceived risk (Fornell et al., 2006; Tuu and Olsen, 2009).\nTheoretical background and hypotheses: Factors influencing satisfaction with restaurant service\nRegional characteristics and customer satisfaction: Based on previous research (such as Billon et al., 2009; Morgeson et al., 2011), we consider two categories of traditional socioeconomic variables that we argue should help explain cross-regional variance in customer satisfaction; economic (per capita gross domestic product (GDP), and per capita retail sales of consumer goods) and population (density and educational level).\nMethodology: Research design\nResults: First, a simplified version of the HLM (that is, a one-way analysis of variance (ANOVA) with random effects model) was estimated, in which the coefficients on all customer and regional characteristics were set to zero. This model can be used to check for any variation in customer satisfaction with K Restaurants across regions. The results of the estimation of this simplified model are displayed in both parts of Table III. 
The first part presents the coefficient estimate and t ratio for the intercept g00 in the model, and the second part gives the outcome of the random effect.\nDiscussion and implications: In the context of the K Restaurants found in 52 cities in China, the results demonstrate different levels of customer satisfaction across these regions. First, a region's per capita GDP negatively moderates the relationship between the physical environment and customer satisfaction. Since per capita GDP is a traditional indicator of economic prosperity (Morgeson et al., 2011), this suggests that the restaurant environment has less influence on customer satisfaction in more prosperous regions. Perhaps because such areas often have better facilities and more attractive buildings, customers in such cities are likely to have heightened expectations of a service-delivery environment. Consequently, a change of servicescape may not bring as much satisfaction to such customers as those in less-developed regions.\nConclusions: Customer satisfaction is an essential indicator of a company's performance and has therefore been a long-standing critical focus for marketing practitioners and scholars (Oliver, 1999). However, few studies have ever explored the cross-regional difference in customer satisfaction. This study contributes to the hospitality literature by looking at whether, and if so how, regional factors influence customer satisfaction with restaurants. Specifically, the uniqueness of the current study is focusing on how regional characteristics (i.e. economic condition, consumption level, population density, and educational level) affect satisfaction through their moderating effects on key attributes of restaurants (i.e. food, environment, and service).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine how regional factors affect customer satisfaction in the food service sector. This is achieved through investigating the moderating effects on the most important attributes of restaurants.\nDesign/methodology/approach: Hierarchical linear models, comprising customer observations nested within regions, were developed to explore whether there is any variance in customer satisfaction with the restaurants of an international chain (hereafter known as K Restaurants) across 52 regions in China and which regional factors contribute to understanding such variance.\nFindings: The results indicate that there is an apparent difference in customer satisfaction across regions. Regional consumption level can positively (negatively) moderate the relationship between food taste (physical environment) and customer satisfaction. Economic condition and population density have a negative moderating effect on the relationship between the physical environment and customer satisfaction. Education level, however, does not have any regional effect on satisfaction.\nPractical implications: Multinational companies or chain corporations must account for regional influences when evaluating their performance based on customer satisfaction surveys. Standardization across a nation is not necessarily the best approach because the various regions of a nation may differ in terms of socioeconomic condition.\nOriginality/value: This pioneering study examines the moderating effects of regional factors on the relationships between the attribute qualities of a restaurant and customer satisfaction.\n",
"role": "assistant"
}
] |
Understanding and measuring scalability in disaster risk reduction
|
[
"Replication",
"Scaling"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Recent decades have seen increased attention to disaster risk reduction (DRR). Now a significant component of international humanitarian and development action, DRR is integral to increasing resilience to natural hazards, the impacts of which continue to rise. Yet, despite the active \"scaling up\" of DRR in various contexts, there is a lack of common understanding of what effective scaling up is and how to measure and plan for it. This paper aims to address such a gap by addressing three key questions: What is scaling up and scalability with relevance to DRR? What are key features of effective scaling up? And how can scalability be measured and planned to improve the effectiveness of DRR interventions?\n2. Consolidating knowledge on scaling up: The literature review on which the key concepts are based is informed not only by studies of DRR, but also comparative literature from health, education, nutrition, sanitation, hygiene and international development more broadly. Literature was collected through a purposeful sample (n=77) as well as an objective rules-based sample (n=41) of documents published in English between January 2010 and June 2017 with the keywords \"scal\" and \"replicat\"[1] in title and/or abstract on a range of search engines/resource platforms[2]; owing to the number of hits, scholastic literature was also cross-referenced against keywords \"disaster\", \"humanitarian\" or \"development\" (e.g. search for \"scal\" AND \"disaster\"). False positives were removed and documents were then prioritised by the authors based on extent of the discussion of scaling up and relevance to DRR. The objective of the literature review was to identify/develop a definition of and key features associated with scaling up and factors that can influence scaling up with relevance to DRR, including distinguishing between \"scaling up\" and \"effective scaling up\".\n3. 
Scalability assessment and planning framework: The key features of effective scaling up (\"deal breakers\") and the internal and external enabling factors discussed above directly informed the development of a scalability assessment and planning framework to measure and promote the scalability of DRR initiatives. This framework was also informed by existing international development scalability \"checklists\" (e.g. MSI, 2012; ExpandNet and WHO, 2010; UNDP)[7]. The measurement framework asks the analyst to score the extent to which each deal breaker criterion and enabling factor is present. It uses a Likert scale (1-5) and space to provide a mandatory qualitative explanation for each score (see Table I). Each of the main questions is further broken down into sub-questions. For example, the sub-questions for M&E ask the assessor to consider: What is the M&E system/structures of the scaling entity? Is there capacity/planning to put M&E structures in place? What mechanisms are in place to review progress and incorporate new learning into the implementation process? and so on.\n4. Application methods: In order to test and to identify specific characteristics or blockages in DRR initiatives, the scalability assessment and planning framework was applied to a series of DRR and non-DRR initiatives. Initiatives were identified by the authors through a review of practice-oriented literature based on purposeful and objective samples; initiatives identified in the literature were then catalogued in Microsoft Excel, generating a matrix. An initiative was defined as \"a discrete intervention with a specific objective and corresponding core components (the precise part of the initiative deemed essential to meet that objective, e.g. one that would be the same regardless of where it was implemented)\" (Moriniere et al., 2018, p. 49). In total, 60 initiatives were catalogued: 51 DRR and 9 non-DRR.\n5. 
Results and discussion: Across all the initiatives (n=25) and when examining the DRR initiatives (n=20) alone, the overall finding is that there is no specific \"most scalable\" initiative. Rather, adherence to the deal breaker criteria and the presence of a sizeable proportion of internal and external enabling factors are critical. As anticipated, those that scored well across \"deal breaker\" criteria had the strongest chance of success. Enabling factors exert varying degrees of influence on scalability. However, when analysing \"deal breaker\" scores only, two stronger patterns emerged: one regarding the role of national authorities and the second regarding M&E.\n6. Conclusion: This paper argues that scaling up in DRR not only comprises expanding both tangible and intangible outcomes, but also that effective scaling up requires successful, adaptable, sustainable initiatives supported by an M&E system. Moreover, even with these features present, the presence or absence of an enabling environment strongly influences scalability. Together these features form the foundation of a scalability assessment and planning framework for DRR initiatives.\n",
"role": "user"
},
{
"content": "Purpose: Despite increased attention to, and investment in, scaling up of disaster risk reduction (DRR), there has been little detailed discussion of scalability. The purpose of this paper is to respond to this critical gap by proposing a definition of scaling up for DRR, what effective scaling up entails, and how to measure and plan for scalability.\nDesign/methodology/approach: A literature review of debates, case studies and good practices in DRR and parallel sectors (i.e. education, health and the wider development field) unveiled and enabled the weighting of key concepts that inform scalability. The mixed methods research then developed, validated and employed a scalability assessment framework to examine 20 DRR and five non-DRR initiatives for which a minimum set of evidence was accessible.\nFindings: Support from national, regional and/or local authorities strongly influenced the scalability of all initiatives assessed. Currently, insufficient to support effective scaling up, monitoring and evaluation were also found to be critical to both identify potential for and measure scalability.\nOriginality/value: The paper ends with a scalability assessment and planning tool to measure and monitor the scalability potential of DRR initiatives, highlighting areas for corrective action that can improve the quality and effectiveness of DRR interventions.\n",
"role": "assistant"
}
] |
A multi-disciplinary approach to explaining workless-ness in Britain
|
[
"Employment",
"Gender",
"Demography",
"Human capabilities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Over the centuries economists and social scientists have formalized different explanations to understand and predict human behaviours. The undisputed theoretical framework of mainstream economics assumes that the individuals are rational, atomised, and most importantly totally selfish, so that their behaviour is driven by pure self \"interest\". Our approach departs from the vision of the traditional economic actor because we take into account two important dimensions of the human race, namely the social and the personal sphere. First of all, in our model, the individual is not referred to as homo-economicus but as homo \"econ-amicus\", meaning that in making economic decisions one is affected by social relations and by the degree of structural \"embeddedness\" in their surroundings - all factors that shape and constraint opportunities and independence. Second, in our model an individual is also referred to as homo \"humanus\" - that is one enters into social relations with his/her endowment of capabilities, idiosyncrasies, personality traits, human cognitive biases.\n2. Extending the homo-economicus: horizontal and vertical dimensions: Working and non-working decisions are the resultant of many interdependent forces at work. Sociality is one of those forces which interfaces with our sense of identity, affecting the emotional costs and benefits of non-employment and employment beyond the factual monetary gains and losses.\n3. Methodology data: To investigate the UK labour market, we use the BHPS data for 2009, a richer multidimensional approach and a sequential two-step procedure. In the first step we use a logistic model and we differentiate between gender and we focus on modelling a binary choice between employment and non-employment. We use the insights gained in this first step to build a second model, a Multinomial Probit, which focusses on non-employment only. 
This sequential model is used in order to take into account the selection process into attachment into the labour market. In addition, in the second stage, looking within the non-employment group, which is very heterogeneous in nature, we follow Furlong (2006) in considering smaller groups of non-employment, although we acknowledge that even within these groups significant heterogeneity can be seen.\n4. A first step: employment vs non-employment: We use the Logistic Model for males and females to estimate the probability of being non-employed, controlling sequentially for various constraints and/or opportunities. This first model is auxiliary to the second and it should help identify salient differences in constraints and enablers, particularly those pertinent to social and personal dimensions and those that are gender related, that could be insightful and useful for the study of workless-ness, which is at the core of this paper.\n5. A second step: workless-ness statuses across gender and age: In view of our previous results, we use a multinomial Probit model that accounts for several categories of non-employment according to different degrees of attachment to the labour market (unemployment, students, disabled, early retired and non-working carers), and that differentiates not only for gender but also for age categories (young 16-24; mature 25-49; old 50-64). The reference category is unemployment because it is the most attached to the labour market. The Multinomial Probit model has the following form:(1)\n6. The influence of \"under-employment\" on inactivity in the labour market: Under-employment has been defined in various ways. According to Office for National Statistics (ONS) and LFS definition under-employment is unemployment plus those who are working part-time and working fewer hours than wished. 
The ILO definition is broader and it includes those not searching for work but who would be ready for work and those searching for a job but not ready for it, as all these are not considered unemployed but inactive. In terms of this study we have largely used the latter as the definition of under-employment.\n7. Policy evaluation, conclusion, and limitation: In this empirical study we have used a new cross-disciplinary approach among labour economics, BE and social economics to explain agents' decisions over labour market statuses. Based on the framework of capabilities and refined functioning proposed by Sen (1980) we develop and test a model of non-employment that is much broader than those usually estimated within labour economics. We find that, in addition to standard labour economic variables, BE and social economics are potentially important in explaining non-employment. In addition there are important differences found between the genders, across the different types of inaction and between age groups.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to adopt the principles of labour economics, behavioural economics and social influence to identify constraints and enablers that influence people's choices in relation to the labour market decisions.\nDesign/methodology/approach: A sequential empirical methodology has been adopted, where data from the British Household Panel Survey (2009) has been collected to explain various statuses of labour market activity, with a focus on workless-ness, across the categories of unemployment, being a student, disability, retirement and being a carer - differentiating for gender and age. The paper develops and substantiates the hypothesis theoretically and gives some indications using a multi-disciplinary approach.\nFindings: The authors found that labour market opportunities, choices and achievements are affected by the interrelations and interactions of an individual's demographic and psychological characteristics (such as age, gender, heuristics, perceptions, beliefs, attitude's goals and ambitions) along with external factors (such as geographical, socio- cultural and economic conditions).\nOriginality/value: This study makes a unique contribution to labour economics as the authors abandon the traditional welfare approach and use a more general framework of capabilities and refined functioning to interpret how different types of constraints - ranging from socio-economic conditions and environmental background to specific features of individual processes of choices and decision making - affect preferences and functioning's. The study also identifies how \"under-employment\" complements the use of BE/social influence in explaining labour market inactivity, and highlights how the findings of this study have important implications for policy.\n",
"role": "assistant"
}
] |
Transformational leadership and innovation in an R&D organization experiencing major change
|
[
"Innovation",
"Transformational leadership",
"Employees behaviour",
"Team working",
"Identification",
"Climate",
"Australia",
"Organizations",
"Change management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Sustained competitive advantage in business is contingent on continued innovation in processes, products and services (Jung et al., 2003; Mumford, 2003). As such, academics and practitioners have shown a great deal of interest in identifying the factors that promote and sustain innovation in organisations (Damanpour and Schneider, 2006). One important line of research focuses on the growing recognition of the role of teams and their leaders in fostering and sustaining innovation in firms. However, relatively few studies investigate the processes that team leaders employ in driving innovation (Eisenbeiss et al., 2008), resulting in numerous calls for more research, especially into the factors and processes that explain how leaders influence and sustain higher levels of innovation in their teams (e.g. Avolio and Yammarino, 2002; Hunt and Conger, 1999). Moreover, little is known about how these factors and processes might operate during times of large scale and turbulent organisational change. In such contexts, team leaders are attempting to promote or continue high levels of innovation in their teams, but in an environment of uncertainty and disruption that might not help their efforts.In the current study, we aim to address these gaps in our understanding of leadership, teams, innovation and major change by examining the team leader's role in shaping how employees identify with their work groups, and how this level of identification influences perceptions of innovation in an R&D organisation experiencing considerable turbulence around a major restructuring. 
The current study adds to emerging conceptualisations around the relationship between leadership, teams, change and identification processes (Walumbwa et al., 2008), with growing evidence for the positive effects in promoting employee identification with the organisation (Hulsheger et al., 2009), and employees' identification with their work unit (e.g. Riketta and van Dick, 2005; van Knippenberg and van Schie, 2000).In addition, the current study explores how leaders might promote a climate for creativity that influences more innovative outcomes from their work groups despite what is occurring around them in terms of large scale change. While researchers originally assessed climate at the organisational level, it has since been argued that perceptions of team climate may be even more salient for team members (Tesluk et al., 1999). While the organisational climate is likely to shape teams' outputs in a broad sense, assessing individual team members' perceptions of support for creativity might provide even better insights into why teams undergoing major change and re-structuring might differ in their levels of effort to be innovative. Finally, the current study is one of only a few studies that follows over time the emergence of new team leaders and the teams, and that examines the critical first year in which leaders and team members work together.\nLiterature review: The role of transformational leaders\nMethods: Participants\nResults: Experts' innovation ratings\nDiscussion: Key contributions\nConclusions: The present study has begun the exploration of the \"black box\" that exists around how a more transformational style of leadership might promote greater levels of innovation. Our findings point to how this style of leadership influences team climate and identification, and in turn innovation. 
Significantly, these preliminary findings in the context of scientific R&D teams add further support to calls for managers to adopt more transformational styles of leadership during periods of organisational change. Such styles do produce better outcomes, both for the organisation around more innovative products and processes, but also for team members who get to enjoy more creative and rewarding team environments despite the demands of operating in highly competitive and often turbulent work environments. As noted by Callan et al.(2004), leaders today need to be skilled not only at disturbing the organisational system, but also in operating effectively in systems that are disturbed and in states of major change. The current study shows that leaders who encourage employees to identify with their team, and who build a positive team climate, not only support sustained creativity and innovation, but possibly encourage employees to capitalize on the opportunities presented by major change.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to add to the understanding of how transformational leaders influence R&D team outcomes around being more innovative. In particular, the study aims to focus on the role of group identification in mediating innovative outcomes.\nDesign/methodology/approach: - In total, 104 participants in a large Australian R&D organization were surveyed twice during a 12 month period of major restructuring and change. These matched respondents came from 29 different teams.\nFindings: - Results revealed that group identification and perceived support for creativity exerted equal independent effects in fully mediating the relationship between transformational leadership and team innovation.\nResearch limitations/implications: - The potential for leadership to influence innovation through identification raises interesting alternative possibilities. Future research may generate new insights by investigating alternative samples, leadership styles or using qualitative methods.\nPractical implications: - Findings point to how a more transformational style of leadership influences team climate and identification, and in turn innovation in the context of scientific R&D teams. Such styles do produce better outcomes, both for the organization around more innovative products and processes, but also for team members who engage in more creative team environments.\nOriginality/value: - These findings add to the conceptual understanding of processes through which transformational styles of leadership promote innovation, and highlight the benefits gained by promoting more transformational styles of leadership to generate more innovative outcomes from teams and employees.\n",
"role": "assistant"
}
] |
Internationalization and relationship marketing: an introduction
|
[
"Market entry",
"Internationalization",
"Relationship marketing",
"Psychic distance"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Although several aspects of the internationalization process and relationship marketing (RM) strategy have been addressed in the literature, a unified theoretical framework is lacking, explaining the impact of RM strategy on internationalization process. This study was undertaken as an initial attempt to address the role of RM in internationalization process. To understand this concept, the internationalization process is divided into two stages: pre-internationalization, which focuses on overcoming the psychic distance and post-internationalization, which focuses on market entry and positioning. At the different stages of internationalization, different components of RM assume greater levels of importance, which are likely to have implications for the implementation of an internationalization process.\nA theoretical lense - the literature review: The internationalization process\nImplications for managing relationships in an internationalized context: The discussion in this paper and the model included has highlighted the importance of four key aspects of RM in an internationalized context: communication, trust, commitment and satisfaction. It is essential for marketers to manage relationships effectively, by reducing any conflict seamlessly, and promoting strong, positive communication. Communication is the essential basis of any relationship strategy in an internationalized context. Without communication, trust cannot be built and relationships cannot be developed beyond internationalization. As indicated in this paper, this may require the establishment of offices in the focus country. At minimum, it requires face to face contact to develop trust in the relationship.\nConclusion and discussion: The simple model provided in this paper can provided a basis for researchers to understand RM in an internationalization context. 
Furthermore, for practitioners, the model provides a simple understanding of the key requirements for each stage of the internationalization process. An increased understanding of the process and the intersection between internationalization and RM is necessary, however. Researchers are encouraged to explore this intersection through qualitative research and an initial research program.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to integrate the concepts of the internationalization process and relationship marketing (RM). It identifies two stages for internationalization (pre-internationalization and post-internationalization) and assigns RM components for each step.\nDesign/methodology/approach: The study undertakes a review and synthesis of the extant literature examining internationalization and RM. It then identifies two stages of the internationalization process and its steps, associated with RM components.\nFindings: The study highlights that each step in the internationalization process requires appropriate RM component to be implemented successfully.\nPractical implications: Findings of this study highlight the importance of managing internationalization for firms intending to enter to foreign market and identify the issues that need to be understood, if firms are to effectively manage their internationalization strategy.\nOriginality/value: The paper is the first to integrate the concepts of internationalization and RM and to identify the factors that make managing these two types of firm's strategy.\n",
"role": "assistant"
}
] |
Study of married workers in Indian IT shows influence of work-family enrichment on job satisfaction
|
[
"Proactivity",
"Self-efficacy",
"Family permeability",
"WFE",
"WFSE"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: A study of 508 married workers in the Indian IT sector - who all have at least one child - has shown the influence of work-family enrichment (WFE) on job satisfaction. The research used proactivity, work-family self-efficacy (WFSE) and family permeability as antecedent variables. The authors said the strong correlations between these personality-based antecedents and WFE made it essential for managers to consider each individual worker's personality traits, as well as to produce work policies and job specifications that helped people to manage their work-family interrelationships.\n",
"role": "user"
},
{
"content": "Purpose: The researchers wanted to examine work-family enrichment using the antecedent variables of proactivity, work-family self-efficacy (WFSE) and family permeability,\nDesign/methodology/approach: The researchers selected married individuals working full time with at least one child younger than the age of 16. Respondents were drawn from the IT sectors in the large southern Indian cities of Bengaluru and Hyderabad. Data were collected using self-administered questionnaires. Respondents answered questions about their proactive personality, WFSE, family permeability, WFE and job satisfaction, in addition to demographic variables around age, gender, children and dual-earner status.\nFindings: The results show the influence of work-family enrichment (WFE) on job satisfaction. The authors said the strong correlations between the three personality-based antecedents and WFE made it essential for managers to consider each individual worker's personality traits, as well as to produce work policies and job specifications that helped people to manage their work-family interrelationships.\nOriginality/value: The authors say the research has made three unique contributions to work-family literature. First, they have explored individual preferences and personality variables that result in greater enrichment. Second, the study has advanced WFE literature by considering family permeability as a preference. Finally, the study has answered the calls of previous researchers to look at variables like work-family self-efficacy as additional resources that help enrichment.\n",
"role": "assistant"
}
] |
Determinants of success in open source software networks
|
[
"Social networks",
"Open systems",
"Internet",
"User studies",
"Consumer behaviour"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Nowadays, people hardly ever develop software programs without interacting with other users (Bagozzi and Dholakia, 2006). In this line, open source software[1] (OSS) is usually developed and distributed through online social networks thanks to collaborative efforts and social interactions. To be precise, the justification to the expansion of these OSS online networks is twofold and based on the advantages offered by the internet when compared to traditional distribution channels:1. These online networks allow the formation of heterogeneous collaborative teams (in terms of nationality, place of residence, professional experience, etc.) in order to develop and continuously improve OSS products. Broadly speaking, the Internet helps to overcome the space and time barriers to interaction (Andersen, 2005) and therefore, very heterogeneous individuals, that would not be able to be connected by other means, can work on the development of an OSS product together.2. These networks favour product distribution to a very wide audience at a very low cost (zero in many cases). Besides, individuals can take part in discussions in order to inform and support (Muniz and O'Guinn, 2001) fellow users in the correct use of the OSS products, and even promote OSS products through positive comments made on the online network. 
Indeed, this positive word-of-mouth (WOM) or consumer advocacy is one of the major factors that most influence consumer behaviour (Almquist and Roberts, 2000), especially in the online context (Sen and Lerman, 2007).However, following Wasko and Faraj (2000), it is difficult to understand why members help to develop these OSS products in an altruistic manner (through members' contributions) when they could also benefit from the knowledge and products developed in the network without spending their time and effort on the community; that is, free-riding.In this regard, we may note that reputation is one of the most important intangible resources to establish and maintain a relationship (Flavian et al., 2006). According to relationship marketing, reputation emerges as a consequence of the reliable and consistent behaviour of an organization over time, showing concern for other parties in a relationship, so that the credibility of the organization is enhanced since this past behaviour is a sign for its future actions (Ganesan, 1994). Besides, the importance of reputation may be even greater in online relationships due to the greater uncertainty that is perceived in the Internet (Casalo et al., 2007; Harris and Goode, 2004).Therefore, reputation may be a crucial factor to promote relationships in an online social network so that it will help to explain why people actively participate in online networks and cooperate with other members, who are usually physically unknown. Broadly speaking, in order to achieve a shared objective (e.g. developing an OSS product, such as the Internet browser Firefox), people should prefer to interact with members with a good reputation in order to avoid possible opportunistic behaviours. 
Thus, with the aim of moving on this topic, we put forward a behavioural model that characterizes the influence of reputation on member's satisfaction and participation in an online network.Finally, due to the fact that social networks have been proposed to favour the development of emotional links to the interest (e.g. a brand, a product, etc.) around which the network is developed (Koh and Kim, 2004), we also analyze the influence of participating in an OSS online network on the affective commitment and loyalty intentions, in terms of future use, to the OSS products.Taking into account the previous considerations, the paper is structured as follows:1. We carry out a review of the relevant literature concerning the variables included in the study.2. We formalize the hypotheses. Then, we explain the process of data collection and measures validation.3. Finally, we present the main results, conclusions and limitations of the study and outline possibilities for future research.\n2 Literature review (Appendix): 2.1 Reputation\n3 Formulation of hypotheses: 3.1 Outcomes of reputation in an online social network\n4 Data collection: In order to empirically test our hypotheses, data were collected thanks to a web survey using Spanish-speaking members of OSS online social networks. To be precise, we obtained 215 valid questionnaires (atypical cases, repeated responses and incomplete questionnaires were controlled). To assess the reliability of the sample, we compared the socio-demographical characteristics of the sample with available information about the population. Since there are not specific studies that focus on Spanish-speaking OSS users, we compare some of the survey results with a recent study on OSS communities (Bagozzi and Dholakia, 2006). 
Results were very similar (Table I). To obtain the responses, several posts were included on heavy traffic websites (with a high level of awareness among the Spanish-speaking OSS users), email distribution lists and well-known electronic forums (all of them related to the OSS). This method of collecting the data is consistent with the habitual research practice in the online context (Steenkamp and Geyskens, 2006; Bagozzi and Dholakia, 2006; Flavian and Guinaliu, 2006). Finally, all latent variables were measured using a multiple-item measurement scale. These measures use a seven-point Likert-type response format, and respondents rated them from 1 (\"completely disagree\") to 7 (\"completely agree\").\n5 Measures validation: The process of validation included the following stages. 5.1 Content and face validity\n6 Results: With the objective of testing the proposed hypotheses we developed a structural equation model that can be seen in Figure 2. First, reputation in an OSS online social network has been found to have a positive and significant effect on the satisfaction with the network (b=0.819, p<.01), which supports H2. However, the effect of reputation on the intentions to participate in the OSS online social network was not significant (b=0.016, p>0.1), so that H1 is not supported. Second, we have discovered that satisfaction positively influences the intentions to participate in the OSS online network (b=0.465, p<0.01), which confirms H3 and provides evidence of an indirect effect of reputation on participation intentions through satisfaction. At the same time, H4 and H5 were supported since affective commitment to the OSS is influenced by both the satisfaction with the OSS online network (b=0.389, p<0.01) and the participation continuance intentions (b=0.283, p<0.01). 
Finally, as proposed in H6, affective commitment has a positive influence on loyalty intentions to the OSS products (b=0.810, p<0.01). Besides, we have also obtained quite good R2 levels, which have allowed us to partially explain the endogenous variables of this study: R2(SAT)=0.670; R2(PARTI)=0.204; R2(AFF_COMM)=0.331; R2(LOY)=0.657. Similarly, model fit was also acceptable (χ2=140.501, 68 df, p<0.01; Bentler-Bonett Normed Fit Index = 0.931; Bentler-Bonett Nonnormed Fit Index = 0.982; Comparative Fit Index = 0.987; Bollen Fit Index = 0.987; Root Mean Square Error of Approximation (RMSEA)=0.032; 90 per cent Confidence Interval of RMSEA (0.000, 0.054); normed χ2=2.06619).\n7 Conclusions: OSS online social networks have experienced an exponential growth in the last years due to the advantages in the development, distribution and promotion of products offered by the Internet when compared to traditional channels. Thus, to move on the analysis of these networks, this research has investigated:* the members' behaviour in OSS online networks, focusing on the influence of perceived reputation and satisfaction with previous interaction in the intention to participate and collaborate in the network; and* some possible outcomes associated to the participation in these communities. To be precise, the present paper has firstly allowed us to define the decisive role of reputation and satisfaction in developing a successful OSS online social network. In this respect, reputation is found to have a direct influence on user satisfaction with an OSS online network and an indirect effect on his/her intentions to participate in that network through satisfaction. However, the direct effect of reputation on the intentions to participate in an OSS online network is not significant. 
A justification to this could be found in the fact that reputation may directly affect member intentions at the early stages of his/her participation in the network, when she/he does not have a clear idea about what to expect, since reputation would be a sign of reliable behaviour in the network over time. Thus, once the individual has interacted with other members, reputation becomes less important since the individual is more familiarized with the behaviour of other members and then, it is the level of satisfaction that largely determines the continuance intentions to participate in the network. As a result, we see that there is an indirect logical sequence between these variables (reputation, satisfaction and continuance participation intentions). This logical sequence implies that surpassing low levels of participation also depends on the proper management of reputation in the online social network since it helps to increase the levels of member's satisfaction with the network, which determines the intentions to participate in the network at any given time. In addition, due to these results, both satisfaction (R2=0.670) and intentions to participate in an OSS online network (R2=0.204) could be partially explained. Thus, it is possible to state that perceived reputation serves to satisfy the needs of the network members since it acts as a deterrent to free-riding behaviours in the network. As a consequence, collaborative actions among members are more likely to develop. Second, results have shown that it is possible to boost the affective commitment to the mutual interest of an online network thanks to the participation in the network and the satisfaction obtained when participating in it. 
This can be explained by the development of emotional ties with the mutual interest of the network that appears as a consequence of:* the interactions with other network members, which are usually based on topics related to that interest; and* the fulfilment of expectations regarding these interactions. Indeed, thanks to these effects, it is possible to explain at a quite high level the affective commitment to the OSS (R2=0.331).Finally, as it was expected, we have also found that affective commitment is a major determinant of loyalty intentions, which can be clearly explained in our model (R2=0.657).Thus, three main conclusions arise from this research:1. the importance of reputation in the context of online networks becomes evident;2. satisfaction of needs is a crucial aspect to guarantee the interaction among network members; and3. the participation in an online social network may increase the level of affective commitment to the mutual interest of the network (the OSS in this case), which may favour the development of loyalty intentions toward that interest (e.g. a brand, firm, product, etc.).7.1 Managerial implications\n",
"role": "user"
},
{
"content": "Purpose: - Open source software (OSS) products have rapidly acquired a notable importance in the last years among consumers and firms all over the world. They are mostly developed and distributed through online social networks thanks to the voluntary and collaborative actions of their members. However, their innovation and development has to face up the existence of free-riders which can benefit from the knowledge developed in the online social network. This paper aims to understand the factors that moderate the opportunistic behaviour in OSS development and distribution, which will help to correctly manage the OSS innovations.\nDesign/methodology/approach: - The influence of reputation on members' satisfaction and participation intentions in OSS online social networks are measured. Additionally, the impact of being an active and satisfied member on his/her commitment and intention to use the OSS products are studied. After the validation of measurement scales the hypotheses are contrasted with structural modelling.\nFindings: - This research show that perceived reputation acts as a deterrent factor of free-riders. More specifically, reputation exerts a positive and significant effect on member's satisfaction with previous interactions and an indirect effect on participation intentions in the social network through satisfaction. Besides, these two outcomes of reputation boost the members' affective commitment to the OSS and, as a consequence, the intention to use OSS products is also increased.\nPractical implications: - Reputation and satisfaction are two crucial aspects in explaining the success of an online social network since they serve to guarantee the interaction among its members. In addition, participation continuance intentions in an online network may help to increase the levels of affective commitment and loyalty to the mutual interest of the network (the OSS in this case). 
This result may be especially relevant for commercial networks, which are based on the admiration of a brand, firm or product.\nOriginality/value: - The analysis of online social networks as a development and distribution channel and the role of reputation in promoting members' participation (that is, avoiding opportunistic behaviour) represent a new contribution to the analysis of online social networks. This research field has acquired notable popularity in recent years.\n",
"role": "assistant"
}
] |
Determinants of credit card misuse among Gen Y consumers in urban Malaysia
|
[
"Self-efficacy",
"Materialism",
"Social norm",
"Credit attitude",
"Credit cards misuse amongst Generation Y",
"Knowledge on credit card"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Credit cards now play a vital role in modern societies. They are a convenient tool to pay for goods and services on credit, within a certain credit limit set by the credit card issuer (Omar et al., 2014). Each month, the outstanding balance on the credit card may be paid in full or in part, subject to a minimum amount specified by the issuer. Interest is charged on the balance outstanding at a rate usually higher than most loans due to its convenience and flexibility (United States Government Accountability Office, 2006).\n2. Literature review and hypotheses development: 2.1 Credit card misuse\n3. Methodology: 3.1 Data and sample\n4. Results: We proceed to estimate multiple regression to examine the contribution of credit attitudes, knowledge of credit cards, materialism, social norm and self-efficacy to predict credit card misuse amongst Malaysian Gen Y and the results is presented in Table VII. Based on our correlation results, most of our variables do not have any multicollinearity issues except between knowledge about credit cards and self-efficacy (r=0.812). To address this issue, we re-estimated Model 1 by excluding the variable about credit card knowledge in Model 2. Then, we repeated the process by removing the self-efficacy variable in Model 3. Since the ANOVA results showed statistically significant differences in mean amongst religious groups and income levels, we re-estimated Models 1, 2 and 3 by including the religion and average monthly income categories into our estimation models (see Models 4, 5 and 6). From the adjusted R2 results, we found that Model 6 which include all the tested variables is superior to the other models. To verify our inference, we further check the incremental increase in these six model's R2 resulting from the addition of a predictor to the regression equation. 
The F-statistic of the R2 changes results affirm that Model 6 (full model) outperformed the other five models. Hence, we use Model 6 for our analysis.\n5. Discussion: The number of bankruptcy cases reported by the Malaysian Department of Insolvency (2017) is highest amongst individuals between 25 and 44 years of age (more than 50 per cent of the total bankruptcy cases for year 2017 are from this age group), suggesting that the Gen Y cohort are potentially included within this particular group facing severe financial problems. This study attempts to examine the contributing factors, namely, credit attitudes, knowledge on credit cards, materialism, social norm and self-efficacy towards credit card misuse amongst Gen Y in Malaysia. The questionnaires were distributed amongst Gen Y respondents in two selected urban areas in Malaysia, with 501 respondents for our analysis. We developed five hypotheses and tested each via multiple regression estimation models.\n6. Conclusion: Despite active participation from the regulator's side such as Bank Negara Malaysia to enhance the financial management skills via the establishment of the Credit Counselling and Debt Management Agency (AKPK), the bankruptcy cases amongst Gen Y are still of concern. To address Gen Y's lack of financial management skills, we offer two potential practical solutions. This study has shown that credit attitude and knowledge of credit cards are amongst the most important factors influencing the credit card misuse amongst the Gen Y cohort in Malaysia. Better credit attitudes and knowledge on credit card matters will lead to a lower tendency to misuse credit card. Hence, the first implication of this study is the effort to increase the risk awareness associated with credit card transactions organised by the Central Bank of Malaysia (BNM) and AKPK that focus on this Gen Y cohort. 
Policy makers should produce a creative awareness campaign using short videos in social network platforms to educate the public on the adverse consequences of using credit cards and to inform consumers on the best practices in managing credit card debt. The short video can be boosted via social media and is the best mechanism to educate a larger pool of Gen Y consumers since they are likely to access social media more than traditional media, such as newspapers, television or radio. In addition, the awareness campaigns should also aim at shaping the attitudes of these young adults regarding credit cards because as this study has shown credit card attitudes have the strongest influence on credit card misuse amongst Gen Y.\n",
"role": "user"
},
{
"content": "Purpose: The concept of \"buy now pay later\" leads Malaysian Generation Y (Gen Y) to excessively use their credit cards for spending. To gauge the extent of this worrisome scenario, the purpose of this paper is to attempt to investigate the factors, including credit attitudes, knowledge on credit card, materialism, social norm and self-efficacy, that influence credit card misuse amongst Gen Y in Malaysia.\nDesign/methodology/approach: The authors have collected responses from a total of 501 respondents in two urban areas in Malaysia and estimated six multiple regression models to test five hypotheses.\nFindings: The results suggest that credit card knowledge and self-efficacy are negatively related to credit card misuse amongst Gen Y in Malaysia. In contrast, positive relationships were found to exist between credit card attitudes, materialism and social norm and the dependent variable.\nResearch limitations/implications: In this study, the authors limit the data collection to the two biggest urban areas in Malaysia, namely, Klang Valley and Ipoh.\nPractical implications: For the regulator's perspective, the results can be used to understand the alarming indebtedness behaviour amongst working members of Gen Y and outline appropriate and effective policies to reduce their serious indebtedness. Financial service providers, however, can collaborate with regulators to curb credit card misuse amongst Gen Y, so that the latter can avoid high bad debt from line of credit facilities and bankruptcy.\nOriginality/value: The study's findings will further enrich the existing literature on the factors affecting the credit card misuse, especially for the unique Gen Y cohort in Malaysia.\n",
"role": "assistant"
}
] |
Routes to educational success for low achievers
|
[
"Human capital",
"Attainment",
"Vocational education",
"Education",
"Students",
"United Kingdom"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nI. Introduction: The UK witnessed a rapid increase in the proportion of 16-year-olds entering post-compulsory education from the 1980s to the mid 1990s, which rose from 41.7 per cent in the academic year 1979/1980 to 72.7 per cent in 1993/1994 (DfES, 1994). The increase in the staying-on rate was largely the consequence of a number of government initiatives aimed at encouraging young people to increase their level of education. For example, the Education Reform Act (1988) introduced the \"General Certificate of Secondary Education\" (GCSE) qualification, which simultaneously replaced both the General Certificate of Education (GCE) and the Certificate of Secondary Education (CSE) qualifications, and thereby reduced the distinction between young people with different levels of ability[1]. Entrants for the GCSE are assessed on their coursework along with examination performance rather than on their examination performance alone, which has been claimed by many to have increased the motivation to study for those on the margin of the decision to stay-on in education, especially for females[2].In the subsequent decade there was recognition that the UK still lacked competitiveness in the market place and that the increased provision of vocational educational was required[3]. However, the proportion of 16-year-olds entering post compulsory education rose only minimally to 73.8 per cent in 2004 (DfES, 2006c). By 2006 the UK still lagged well behind other European countries in the proportion of its young adults achieving National Qualification Framework (NQF) levels 2 and 3 (DfES, 2006a)[4]. Young people with the greatest need to stay in education beyond age 16, who are classified as \"lower achievers\", are those most likely to find themselves unemployed in a labour market that requires increasingly high skill levels (DfES, 2006b), and it is these lower achievers that are the motivation for this paper. 
In 2009 an estimated 10 per cent of young people were classified as not in employment, education or training (NEET), and participation in post compulsory education at age 17 (80 per cent) was much lower than that at 16 (90 per cent) (DCSF, 2009, p. 9). In a further attempt to increase the skills level of all young people the British Government passed the Education Act (2008) which will increase the age at which young people in England can leave education or training[5].In this paper we focus on the progression of those young people classified as \"lower achievers\" at age 16 (gaining less than five grades A*-C at GCSE), but who subsequently decided to stay-on in education. A novel aspect of this work is that we seek to examine if there are any differences in educational outcomes after controlling for the type of course taken and the institution chosen. A further novel feature is that we include interactions to ascertain whether it is the institution attended, or the type of course taken that is the main driver of educational success in further education for these young people. The paper is organised as follows. In the next section we discuss the background: what it means to be a lower achiever and the educational options open to a \"typical\" lower achiever. In Section III we outline the theoretical framework and review the main factors which have been found to influence educational attainment. In Section IV we discuss our data and econometric methods. In Section V we present and discuss our findings. Section VI concludes.\nII. Background: Since 1976 young people in the UK complete their compulsory education at the age of 16[6]. 
In their final year of compulsory schooling they sit their GCSE examinations after which successful students may choose to take \"Advanced level\" (A' level) courses, which are the traditional academic prerequisite to university entrance, or they may choose an academic access programme or a vocational programme, which since their introduction in the mid 1990s are an alternative route to university entrance. Students also have the option of continuing their education at this time at either the school they have been attending (which we refer to as FE school) or by transferring to a further education college (FE college). The FE college in the UK is a complex structure which provides a wide range of programmes, from non-certificated courses through to higher education courses and are available to individuals of all ages (see Lenton, 2008 for an overview). Originally designed to provide technical skills as they have strong links to the local business community, the FE college has been transformed over the past three decades with the introduction of more academic and higher education courses. The Higher Education Funding Council for England (2006) states that currently 71 per cent of students who participate in access courses progress to degree programmes (Lenton, 2008).Much of the economics of education literature on attainment is focused on the determinants of educational success of the brightest students. However, in order to increase the aggregate skills level of young people further still, the focus now must turn to meeting the needs of weaker students. Lower achievers at age 16 are presently faced with various decisions, the first being whether to enter the labour market or whether to stay in education or training. Those who decide to stay in education may retake GCSE examinations in the hope of improving their grades as is also the case for some students taking A level courses. 
Alternatively, students may wish to study a different syllabus, possibly a vocational course that due to its applied nature could provide more interest to this type of student. The increase in the provision of vocational education courses within schools, since the 1990s, has led to more \"lower achievers\" remaining in school to opt for these courses. Typical vocational courses include the Business and Technology Education Council (BTEC) and the Royal Society of Arts (RSA) qualifications which are available for many subjects and at many levels through which a student may progress[7]. These courses have been shown to increase the employability of low achievers (Lenton and McIntosh, 2008), as well as providing large increases in the wage returns to low achievers compared to those who do not gain any qualifications by age 18 (McIntosh, 2008). Table I provides an example courses and their relevant NQF level[8].Lower achievers may choose to attend the FE college if they believe that the provision of vocational courses offers a wider range of options or that the quality of vocational teaching is superior at the FE college compared to school. FE colleges have long offered a greater range of post-16 courses than schools, including national vocational qualifications (NVQs), advanced apprenticeships and the international baccalaureate. In a survey conducted for the Learning and Skills Council the dominant factors in student satisfaction were reported as the quality of teaching and the range of courses available (LSC, 2002). There are a number of hypotheses to explain the success of students in FE colleges which include: the variety of courses offered to students, the availability of support services and the impact of the college learning environment on student morale. 
Low achievers from schools with high rates of truancy are more likely to enter the FE college for their further education (Armstrong, 1999), as are those lower achievers from lower socio-economic backgrounds suggesting that the culture or atmosphere of the FE college incentivises student learning. As a consequence, FE colleges may potentially offer a major alternative strategy for progressing towards the UK Government objective of increasing attainment in post-16 education for disadvantaged socio-economic groups (Lenton, 2008).Both during the phase of rapid growth in post-16 participation and during its subsequent flattening-out, the share of students from socio-economic groups IV and V in higher education remained disappointingly low, at a constant level of about 6 per cent between 1980 and 2001 (Greenaway and Haynes, 2003, Figure 3, p. F155). Thus, a majority of the increase in post-compulsory education participation rates in the 1990s came, as in the 1980s, from students within socio-economic groups I-III, many of whom then progressed into higher education (Bradley and Taylor, 2000; Ermisch and Francesconi, 2001). Foster (2005) states that 29 per cent of students in FE colleges come from relatively disadvantaged areas. Success in further education will encourage entrance into higher education.\nIII. Theory and literature review: The theoretical framework that best explains the decision to invest in education is the human capital model (Becker, 1964). According to this approach, an individual will invest in education as long as the discounted expected returns to education to the individual are greater than or equal to the costs of the investment. Post-compulsory education allows students to invest in a course of their choice, either academic or vocational and in the subject area where they receive their highest net benefit. 
Arguably, the student is more likely to succeed in this scenario.The literature on the determinants of academic success during compulsory schooling is wide and we draw upon this literature for our study of post compulsory education. Haveman and Wolfe (1995) give an excellent overview of studies of children's achievements. The main determinants of educational success found in the literature include high prior attainment by the student (Haveman and Wolfe, 1995; McIntosh, 2001), high parental socio-economic status (Carpenter and Hayden, 1987; Dolton et al., 1999; Bradley and Taylor, 2000; Ermisch and Francesconi, 2001); and school type. Attendance at a grammar or independent school compared to a comprehensive school is found by Micklewright (1989) and Rice (1999), to increase the probability of gaining good results in GCSE examinations and also increase the probability of staying on in education (Lenton, 2005). The factors underlying low achievement in compulsory schooling are identified in a recent report for the Joseph Rowntree Foundation (Cassen and Kingdon, 2007) but, like many other studies of this type, the focus is on reasons for low attainment at school, not on how this problem can be rectified. Within the post-compulsory education literature there is little discussion of the impact from the type of institution at which low achievers attend or the type of course most likely to lead to eventual educational success. The literature does offer some clues that are particularly relevant for our story, for example Lauer (2003) finds that the influence of parental socio-economic status on attainment is not as strong for post-compulsory education as it is for compulsory education. However, no systematic comparison between the rate of attainment in schools and their major competitors, FE colleges, is available. 
Armstrong (2002) assesses school performance and staying on rates and finds that school examination performance captures the academic orientation of the school which in turn influences the decision to enter post-compulsory education; however, the attendance record of the school captures the school environment and influences the choice of further educational institution. Within the educational psychology research literature, Banks and Woolfson (2008) argue that attributions play a large role in academic self-perceptions, in other words many low achieving students have low self-esteem and perceive their failure to be partly due to factors that they cannot control, such as their teachers and their school. We investigate whether attending the FE college, by changing one's learning environment, encourages success or whether the FE college curriculum of vocational courses is the major determinant of success.\nIV. Econometric data and methods: IV. 1 The data\nV. Results: (a) Increase in attainment\nVI. Concluding comments: We find evidence that young people who are classified as \"lower achievers\" at age 16 can be educationally successful by age 18. The influence from family background on educational attainment diminishes by the age of 18, which suggests that personal characteristics and aspirations have a greater influence on educational attainment as young people approach adulthood. The choice of further education institution is highly relevant to the likelihood of success for lower achievers, who, we find, significantly benefit from attending a FE college compared to continuing in state school. There are further significant gains in the probability of educational success for young men who take vocational courses at the FE college. 
Furthermore, these students are more likely to either gain employment or to enter higher education courses than their counterparts who chose to continue their education at school.In terms of policy implications, the government's aim to increase the skills base of young workers via the Education Act (2008) whereby young people are required to stay in some form of education or training until age 18 is most likely to be met by encouraging those young people classified as lower achievers to continue their education at a FE college. Furthermore, higher levels of educational attainment through FE colleges will lead to an increase in participation rates in higher education, another of the British Government's objectives. The promotion of the FE college as a stepping stone into higher education has been shown elsewhere to be a cost effective strategy (Lenton, 2008). Given that a large proportion of lower achievers are from further down the socio-economic ladder, this will address the issue of equal access to higher education, in that a proportion of these young people will subsequently enrol for higher education courses. O'Connell et al. (2006) have concluded that there has been a closing-up of relative social inequalities, partly due to the increased participation of the children of manual workers: the implication of our findings is that this trend can be accelerated if a higher proportion of lower achievers at age 16 are directed through the FE college.A full explanation of the reasons for the apparent advantage of FE colleges in the educational attainment of lower achievers lies beyond the scope of this paper and is an area for further research. 
However, two factors which may be relevant are the existence in FE colleges of a wider curriculum (especially in relation to vocational options) and a greater presence, in FE colleges, of both staff and fellow students willing to give positive encouragement to low achievers to persist and if possible achieve excellence in their chosen subject rather than give up. Our findings suggest that FE colleges have a comparative advantage in persuading 16-year-old lower achievers that any lack of self-esteem is not justified.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to investigate whether the institution attended or type of course taken in further education, by students who are classified as low achievers at age 16, influences the probability of their subsequent educational success.\nDesign/methodology/approach: - To evaluate the probability of educational success of this subsample of young people, a probit model with sample selection is estimated, using the Youth Cohort Survey data for England and Wales and covering the period 1982 to 2004.\nFindings: - The results suggest that the best route to educational success for \"lower achievers\" is through further education colleges. There are further significant gains in the probability of educational success for young men who take vocational courses at the FE college. Furthermore, the students taking this route into post-compulsory education are subsequently most likely to enter higher education; a result relevant to policy, since the aim of the Education Act (2008) is to increase the educational attainment of all young people up to the age of 18.\nOriginality/value: - The paper provides an analysis of the educational decisions and outcomes of young people who were considered of low ability at age 16.\n",
"role": "assistant"
}
] |
Do interactions within networks lead to knowledge management?
|
[
"Social capital",
"Knowledge creation",
"Knowledge management",
"Knowledge capture",
"Knowledge sharing",
"Trust"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Studies conducted over the past two decades support the idea that tangible assets alone do not correspond to corporate development. No doubt, the future success of a company is affected by how efficiently it can convert one form of value into another, considering that intangibles increase and provide leverage through deliberate action (Allee, 2008). Also, the four strategic resource qualities of value, rarity, inimitability and non-substitutability (Barney) highly involve intangible assets, since they are less imitable (Wills-Johnson, 2008). Hoffman et al. (2005) believe that sustained competitive advantage occurs when a firm develops a distinctive core competency such as Knowledge Management. According to Karkoulian et al. (2008), an organization can maximize returns through knowledge utilization. Weber and Weber (2007) introduce knowledge as a source of sustainable competitive advantage. Wills-Johnson (2008) supposes that the ability to manage and coordinate knowledge amongst resources provides comparative advantage to a firm. It is evident that in the new economy, knowledge assets are grounded in the experience and expertise of those individuals working in a company, and the firm therefore has to provide the right structures to shape knowledge into competencies. In addition to physical and resource allocation structures, social structure is also an important driver in creating the right competencies to ensure the commercial success of a firm (Smedlund, 2008). While the business environment changes rapidly, organizations need to acquire and develop knowledge at a much faster rate (Ratten and Suseno, 2006). Social Capital (SC) in a relationship enables partners to tap into each other's knowledge resources and thereby increase the depth and efficiency of mutual knowledge exchange, which is considered a predominantly social process (Weber and Weber, 2007). 
Hence, investments in SC can improve the growth of the firm (Qizi and Hung-Gay, 2006). On the other hand, organizations should be aware of the needs of new-generation employees to gain a good fit between their organization and its capitals. Tomorrow's employees seek a strong series of cultural values. Open and honest communication, fairness and respect, optimum working conditions, innovativeness, the \"acceptability\" of mistakes, a friendly environment, training and development opportunities - or in sum SC - are most valued by new-generation employees when deciding where to work (Szamosi, 2006). Interaction between resources, rather than some intrinsic property of individual resources themselves, is a more useful focus of analysis for those seeking to understand the strategic aspects of firm construction (Wills-Johnson, 2008). In this regard, this paper seeks to find an appropriate cultural infrastructure for Knowledge Management (KM) leverage. It addresses the question whether Social Capital affects the Knowledge Management Cycle in a Trading Organization, where expert knowledge is deemed a strength for gaining competitive advantage. Following this purpose, we applied the Nahapiet and Ghoshal (1998) model for SC and its effect on Knowledge Creation, Sharing, Application and Saving. Specifically, the structural, relational, and cognitive dimensions of Social Capital enhance organizational capacity and are antecedents of KM. Hence, section 2 will briefly review the literature on SC and the KM Cycle and introduce the hypothesized model. In section 3, the methodology, including a description of the research instrument, will be described, while section 4 elaborates on the results and their analysis. Finally, the last section presents the conclusions of the major findings and possible implications for future research.\n2. Literature review: 2.1 Social capital\n3. 
Research methodology: Selecting the right methodology depends heavily on the nature of the research subject; hence, choosing the approach is an initial and important step. Besides, the research paradigm and strategy should be clear. As such, we subjected the data collected to quantitative analysis in order to utilize statistical tests.\n3.1 Development of instruments\n4. Analysis and results: 4.1 Demographics and descriptive statistics\n5. Conclusions and discussions: The key to understanding the knowledge economy lies not only in understanding intangibles as assets, but also in coming to terms with how they are set into motion in unique configurations of relationships, interactions, and resources in value conversion networks (Allee, 2008). Without extensive behavioral, cultural, and organizational change, KM cannot take place (Monavvarian and Kasaei, 2007). Social Capital enhances Knowledge Management processes because it contributes to a firm's ability to create value in the form of innovation through the facilitation of combination and exchange of resources in a firm (Hoffman et al., 2005). Value network analysis can provide a systematic way for approaching the dynamics of intangible value realization, interconvertibility, conversion, and creation (Allee, 2008). Although it is proposed that the use of Knowledge Management is positively associated with an organization's ability to achieve sustained superior performance, the question still remains as to what factors can determine an organization's capacity for Knowledge Management. One answer to this question may be found in the Social Capital literature (Hoffman et al., 2005). Together they provide insights into practical issues. In this study, we combine Social Capital theory with the knowledge-based view of the firm in order to provide a framework to analyze the interactions between strategic assets in the firm and their interrelatedness. 
We described how each dimension of SC enhances the KM cycle in an organization. This study has some limitations. First, we measured culture as an independent variable, which may differ across organizations and makes the findings fairly difficult to generalize. Second, we study perceived SC and KM rather than the reality. In spite of the aforementioned limitations, there are important managerial implications obtained from the findings. First, there is a strong relationship between the Relational and Structural dimensions of Social Capital, which means the quality of relations can impact the actors' location and vice versa. Second, shared vision and values represent an important part of the cognitive dimension, which can increase the likelihood of knowledge sharing. Third, knowledge creation and application are significantly related, which means that when knowledge is absorbed it will be used. In line with other studies, norms like trust, accepting criticisms, ... constitute the quality of relations and play a significant role in the relational dimension. As a result, in such established interactions, parties are more willing to cooperate and share resources. Accordingly, the exchange of information and the flow and acquisition of tacit knowledge are facilitated (Ratten and Suseno, 2006). One of the key embedded factors is improving teamwork morale. This improves knowledge creation and, if supported by a corporate system of knowledge saving, increases the probability of application in future. Thus a cycle is created which helps the organization take advantage of its capitals.\n",
"role": "user"
},
{
"content": "Purpose: - Through Social Capital, this paper seeks to introduce interactions as antecedents of Knowledge Management.\nDesign/methodology/approach: - The paper briefly reviews the Social Capital and Knowledge Management literature. It investigates how SC impacts the KM cycle in the literature and then presents statistical evidence using a case study.\nFindings: - Social Capital can be considered as an appropriate infrastructure for Knowledge Management implementation.\nResearch limitations/implications: - Social Capital refers to corporate culture; hence it differs in different contexts. This might result in a slight limitation to generalizing the findings to other organization types.\nPractical implications: - Actors' position in a network, trust, norms, teamwork enhance interactions, and encourage cooperation and resource sharing. As a consequence, exchange of knowledge is facilitated and knowledge is shared, created, applied and saved.\nOriginality/value: - The paper provides insights to different dimensions of Social Capital and the way it enhances the Knowledge Management cycle. It further clarifies the relationship by statistical evidence via introducing a case study.\n",
"role": "assistant"
}
] |
Online collaboration: how can it boost personalized learning?
|
[
"Mentoring",
"Training",
"Learning"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Historically, companies looking to achieve greater business efficiencies typically focused on internal issues such as productivity gains and better processes. As a result, collaboration tools including audio, video or web conferencing have in the past been judged almost exclusively on the economic benefits they bring, by replacing much of the travel required by staff to meet with colleagues, clients or partners. Similarly, training programs can be run without having to take staff off-line for extended periods.\nIndividual needs: Most staff within the business will share a common need in learning how to use the company's data entry system, for example. Similarly, the whole sales team will need to learn about the organization's products and services and how to promote them to best advantage to prospects and customers. Here, collaboration solutions provide a range of responses, enabling instruction to take place and the online, real-time sharing of technical expertise, best practices and documents and other materials typically dispersed around the business. This one-to-many teaching replicates and enhances established face-to-face teaching and training practice, by getting beyond traditional geographical, cultural and technological barriers. And the value of this should not be underestimated. 
At one level, traditional collaboration solutions might be seen simply in terms of transferring content from a pre-structured course or presentation \"face to face\" to one which is \"face to screen\". Yet the availability of additional tools offered by multi-media technology such as chat enables the individual trainee to ask questions on a one-to-one basis; at the same time, by using e-quizzes, votes and surveys, the trainer or moderator is able to simulate more closely a classroom environment by checking understanding, to ensure the appropriate follow-up coaching or reinforcement in each case. More recently, however, the increased recognition of how modern collaboration solutions can be used on a more personalized, one-to-one basis has enabled organizations to get beyond \"training\" and extend their use into the broader area of learning and development. As such, collaboration becomes more powerful, by addressing an individual's unique development needs. It provides a range of tools to suit individual learning requirements, habits and styles and enables maximum interaction and response through the most appropriate communication medium.\nEase of use: As Blocher (1996) observed in his book, Counseling: A Developmental Approach: Learning is not a spectator sport - it is an active, not a passive enterprise. Accordingly, a learning environment must invite, even demand, the active engagement of the student. The need for such two-way involvement may be true, but any virtual learning facility will only work if the tools are easy to use, so encouraging maximum participation. And it is in this area that collaboration technologies have witnessed real advances over the past decade. Today, little training is needed for either moderator or participant to derive maximum benefit from the latest multi-media solutions. 
\"Click to join\" and highly intuitive functionality also make such tools easy for participants throughout the organization both to access and to use.\nCost effective: Many companies continue to consider learning and development as costly and view it narrowly in terms of technical skills transfer in a classroom environment. As such, it is often at or near the top of the list of budget cuts in times of economic pressure. Yet the need to consider more broadly the importance of \"whole person development\" has become increasingly important as businesses look to retain and nurture their best people, as key to improving their overall competitiveness. This becomes more difficult as businesses become global, with staff spread across many geographies. On-line collaboration addresses these concerns in a number of ways. It enables fast and ready access to the company's dispersed knowledge base in a highly interactive way. Its use as a highly flexible, one-to-one learning and development medium supports the needs and desires of the individual as well as those of the employer. And, most importantly of all, with the availability of delivery mechanisms such as software as a service and \"per minute per user\" pricing models, such benefits can be achieved in a much more cost-effective and controlled manner.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to examine how collaboration services can support the development and learning process, to the benefit of both the individual and the broader organization.\nDesign/methodology/approach: - The paper explores how sophisticated online collaboration solutions complement face-to-face training and follow-up coaching and support by replicating much of its physical functionality and two-way communication capability.\nFindings: - In a training context, audio, video and conferencing tools have historically been used for the purposes of individual coaching or reinforcement. Yet increasingly, they are recognized as having much broader application in such areas as mentoring, coaching and training, so enhancing both personal development and the acquisition of wider business management skills.\nPractical implications: - In achieving this, it shows how collaboration has moved on from its traditional focus on one-to-many to providing one-to-one unified communication through the use of tools such as chat and feedback mechanisms including e-quizzes, votes and surveys. Availability is one thing, however, and usage is another: if such tools are to encourage maximum participation in a training context, they must be easy to deploy, easy to use and offer ubiquitous access.\nOriginality/value: - The value of this approach is to show how, from a training perspective, the latest collaboration services offer a uniquely broad and cost-effective mechanism in getting beyond traditional geographical, cultural and technological barriers to share work-related skills, techniques and product knowledge.\n",
"role": "assistant"
}
] |
Timing and compensation strategies in service recovery
|
[
"Customer satisfaction",
"Emotions",
"Service recovery",
"Repurchase intention",
"Delayed recovery",
"Hotel service"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Facing increasingly fierce competition, managers place ever greater value on service quality. Service failures are inevitable because the production and consumption of services occur simultaneously and services closely interact with customers (Gronroos, 1984). Most customers encountering service failures respond negatively, by spreading online and off-line negative word-of-mouth reviews or by turning to competitors (Andreassen, 1999; Hennig-Thurau et al., 2004; Zeelenberg and Pieters, 2004; Lau and Ng, 2001). Studies have found that customers might display higher satisfaction and purchase intentions when proper service recovery actions are taken than when no service failures occur (Smith et al., 1999; McCollough et al., 2000). Due to the increasing cost of acquiring new customers, retaining current customers is critical. Service recovery, which has evolved from emergency measures for service failures into a component of the service quality system (Berry and Parasuraman, 1997), appears to be an important means of customer retention. Efficient service recovery may enhance customers' positive impressions of the company, whereas failure of service recovery might cause the company to lose customers. Since the importance of service recovery was highlighted by Gronroos (1988, 2007), recovery strategies have attracted great interest from various industries, including retailing (Kelley et al., 1993), catering (Hoffman et al., 1995), tourism (Boshoff, 1997), online retailing (Forbes et al., 2005) and online auctioneering (Kuo et al., 2011). 
Most of these studies focus on the compensation given to and interactions with customers (Levesque and McDougall, 2000; Seawright et al., 2008).\nTheoretical framework: Negative emotions in service failures\nMethod: Experimental design\nResults: Reliability and validity\nDiscussion and implications: In service industries, some seemingly-solved questions of service recovery (e.g. whether to take recovery actions as soon as possible) have caused researchers to rethink the effectiveness of conventional service principles. The introduction of emotional studies addresses these questions in a clearer way. The first major contribution of the current study is the finding that, for customers in a low-intensity negative emotional state, the sooner the recovery occurs, the better the performance will be, echoing the suggestions of previous research (Clemmer, 1989; Kelley et al., 1993; Bitner et al., 1990, 1994; Boshoff, 1999; Smith et al., 1999; Ramsey, 2003; Amir and Ariely, 2007; Gronroos, 2007). For customers in a high-intensity negative emotional state, in contrast, a strategy of giving them time to calm down and delaying service recovery actions may lead to a better outcome. Specifically, for customers with high-intensity negative emotions, delayed recovery causes higher satisfaction and repurchase intention than does immediate recovery. The second theoretical contribution of this paper is that it extends the current research scope and further investigates what actions service providers should take when considering immediate and delayed recovery. Decisions to use economic recovery or psychological recovery (Hart et al., 1990; Miller et al., 2000) necessitate consideration of the intensity and temporal characteristic of negative emotions. That is, different pairings of recovery timing and strategy are required under different levels of negative emotions. 
The results show that, for customers with low-intensity negative emotions, immediate and economic recovery should be combined to increase consumers' repurchase intention, which significantly grows with increases in satisfaction. In contrast, for customers with high-intensity negative emotions, delayed and psychological recovery should be combined to meet consumers' emotional experiences, causing repurchase intention marginally to grow with increases in satisfaction.\n",
"role": "user"
},
{
"content": "Purpose: A combined model involving the intensity of negative emotions and the strategic combinations (timing and means) of service recovery is developed. The purpose of this paper is to evaluate the performances of these different combinations through customer satisfaction, repurchase intention and fitting curves between the two under hotel service scenarios.\nDesign/methodology/approach: A 2 (recovery timing: immediate/delayed) x 2 (recovery means: psychological/economic) x 3 (type of service failure: failure in a delivery system/failure in responding to customer needs/improper employee behavior) between-subjects experimental design was used with 456 participants.\nFindings: The results suggest that immediate and economic recovery effectively raises the service recovery evaluations from customers with low-intensity negative emotions, whereas delayed and psychological recovery helps customers with high-intensity negative emotions to give higher evaluations.\nOriginality/value: When service failures happen, the strategies for and timing of recovery directly influence customers' service recovery evaluations. This study sheds light on the role that negative emotions play in the process of service recovery and provides implications for service industry managers.\n",
"role": "assistant"
}
] |
Co-creating service experience practices
|
[
"Co-creation",
"Practices",
"Service experience",
"Service co-creation",
"Customer experience",
"Service ecosystem"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Service experience is a key issue in service research and practice. In management practice, service experiences are often still thought to be provided, or at least facilitated, by firms for customers. Yet, there is a growing recognition that customers are changing their behaviors in multiple ways, exercising greater influence on the firm, including engaging in behaviors designed to improve the service experience (van Doorn et al., 2010). Furthermore, in today's interconnected world, the boundaries between customers, firms, and other suppliers are increasingly becoming blurred (Achrol and Kotler, 2012).\nConceptual development: Service experience conceptualizations\nMethodology: Overview\nFindings: Figure 1 depicts our conceptual organizing framework (Epp and Price, 2011) which we developed drawing on Kjellberg and Helgesson's (2006) work on performativity of markets. The organizing framework comprises three broad categories of practices: representational; normalizing; and exchange practices. Representational practices are seen in the way an individual views the world (\"how I see the world\"). These practices affect normalizing practices, i.e. the way an individual interacts through accepting or adjusting norms, reflected in interactions with others (\"how I interact with others\"), which in turn affect exchange practices, or in other words, activities (\"the things I do\"). These practices in turn affect representational practices, and so the cycle continues.\nDiscussion: Theoretical implications\nConclusion: This research represents a crucial step towards a deeper understanding of service experience practices. By undertaking extensive observations, supplemented with 35 interviews and staff diaries, we gained insight into what customers do and how customers make sense of their lived and imagined experiences. 
Importantly, we explicate the ways customers can and do actively co-create service experiences, thus answering our research question. Consistent with our conceptualization that service experiences are experiential, relational activities and interactions developed with the customer and potentially other actors, including for example, other customers, organizations, and friends and family, we draw parallels and contrasts with a biological ecosystem - an ant colony. We provide a Co-created Service Experience Practices (CSEP) framework designed to integrate and deepen our understanding of co-created service experience practices and extend practice theory. We offer a seven-point plan designed to assist management improve service experiences. Furthermore, we suggest five potentially fruitful avenues for future research.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is threefold: to introduce a practice-based framework designed to integrate and deepen our understanding of how individuals co-create service experience practices; to identify co-creating service experience practices; and to provide a compelling agenda for future research, and offer practical strategies to enhance co-created service experiences. Accordingly, we extend practice theory, building on Kjellberg and Helgesson's (2006) practice-based framework for markets by integrating Holt's (1995) consumer practices and social capital-based practices (Gittell and Vidal, 1998; Woolcock, 2001).\nDesign/methodology/approach: The authors' interpretive analysis draws on naturalistic observations carried out over 18 months, supplemented with 35 interviews (17 with residents, and 18 with staff) and a diary study of nine non-management staff (including nursing staff, kitchen and cleaning staff and administrative staff) at a residential aged care facility.\nFindings: This paper offers a new conceptualization of service experience. Rather than viewing service experiences as dyadic, designed and produced by the firm for the customer, the authors conceptualize service experience as dynamic, experiential, relational activities and interactions, thus highlighting the collective, collaborative, evolving and dynamic nature of service experience.\nResearch limitations/implications: Building on McColl-Kennedy et al.'s (2012) foundational work, the authors articulate three distinct types of practices that characterize service experiences. We extend practice theory, offering an integrative practice-based framework consistent with our practice-based conceptualization of service experience. 
Based on the service ecosystem metaphor and drawing parallels and contrasts with an ant colony, the authors provide a co-created service experience practices (CSEP) framework comprising: representational practices - assimilating, producing and personalizing; normalizing practices - bonding, bridging and linking; and exchange practices - accounting (searching and selecting), evaluating (sorting and assorting), appreciating, classifying (displaying objects and demonstrating collective action), and play (communing and entertaining). Our CSEP framework integrates three theoretical frameworks - Kjellberg and Helgesson's (2006) market practices framework, Holt's (1995) consumer practices and social capital-based practices (Gittell and Vidal, 1998; Woolcock, 2001) - to yield a deeper explanation of co-created service experience practices.\nPractical implications: It is clear from our observations, interviews with residents and staff, and from the diary study, that customers co-create service experiences in many different ways, each contextually determined. In some cases the customers are well equipped with a wide array of resources, integrated from exchanges with other customers, staff, friends and family and from their own resources. In other cases, however, few resources are integrated from few sources. Importantly, the authors found that some staff are willing and able to offer an extensive range of resources designed to complement the customers' own resources to help facilitate the service experience. We offer a seven-point practical plan designed to enhance service experiences.\nOriginality/value: The authors' work contributes theoretically and practically in four important ways. First, the authors provide a critical analysis of prior service experience conceptualizations. 
Second, consistent with the conceptualization that service experiences are dynamic, experiential, relational activities and interactions developed with the customer and potentially other actors, including for example, other customers, organizations, and friends and family, we draw parallels and contrasts with a biological ecosystem and offer a co-created service experience practices (CSEP) framework designed to integrate and deepen the understanding of co-created service experiences and extend practice theory. Third, the authors provide managerial implications, including a seven-point practical plan. Finally, the authors offer a research agenda to assist further theory development.\n",
"role": "assistant"
}
] |
Enhancing market exchanges at the base of the pyramid: The roles of national culture, nontraditional capitals, and transformative technologies
|
[
"National cultures",
"Base of the pyramid",
"Social capital",
"Creative capital",
"Market exchanges",
"Mobile telephony"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: International marketers are increasingly interested in pursuing the \"base of the pyramid\", or BOP, markets. The chief reason is the prospect of a large number of new customers. Faced with saturated markets in the industrialized world, global managers are attracted to the growth opportunities presented by the BOP. The BOP comprises the four billion or more persons living below the poverty line in developing countries, such as India, Nigeria, and Colombia (Arnold and Valentin, 2013; Prahalad, 2006). Despite earning as little as $1 a day per person, the BOP holds $9 trillion in assets and $5 trillion in purchasing power (Hammond and Prahalad, 2004; Prahalad, 2012). Interest in the BOP stems from the desire both to fulfill the needs of a sizable global market and to increase the well-being of persons traditionally excluded from economic participation. Hewlett-Packard, Procter\nTheoretical bases and conceptual framework: Amartya Sen (1999) forwarded a theory of poverty that has been enormously influential in economic and policy circles, and for which he was honored with a Nobel Prize in Economics in 1998. The United Nations and World Bank have embraced this theory, making it perhaps the reigning view of poverty in developmental economics (Evans, 2002). Sen argues for a more encompassing understanding of poverty than the previously held assumption that poverty is simply the inadequacy of income. Instead, he proposes that poverty is multiple, interconnected deprivations in capability, or \"unfreedoms.\" Individual well-being rests on the ability to live a life of value and to have substantive choices for achievement (Sen, 1997). The poor have diminished capabilities to achieve and live valuable lives (Zheng and Walsham, 2008).\nResearch propositions: According to Sen (1999), individuals living in impoverished conditions suffer from several inter-related deprivations. 
The most apparent deprivation is economic. While estimates vary considerably about what level of income constitutes poverty, the criterion commonly used in the BOP literature has been a World Bank standard of less than $2 per day in earnings (Prahalad, 2006). The World Bank elaborates further that those who live on less than $1.25 a day are \"extremely poor,\" in contrast to the \"moderately poor,\" who survive on $2 daily (Arnold and Valentin, 2013). Regardless of the cutoff applied, these individuals subsist on very low levels of material abundance that affect all aspects of life. Meals tend to be irregular, water not potable, homes temporary, sanitation primitive, and little monies available beyond the bare necessities of food and shelter if that (Ruth and Hsiung, 2007).\nDiscussion: The purpose of this study is to examine specifically how national culture affects BOP market exchanges, and generally how market exchanges are enhanced at the BOP. We propose a conceptual framework where poverty lowers or inhibits market exchanges but is countered by several factors. Assuming national culture, non-traditional assets, and transformative technologies vary by BOP setting, greater performance orientation alongside higher social capital, creative capital, and mobile telephony increase buying, selling, making, and using activities. The framework is based on extant knowledge from past studies as well as theories and perspectives of poverty by Sen (1999), national culture by House et al. (2004), human assets by Bebbington (1999), and IT resources by Bharadwaj (2000).\nConclusion: This study fills gaps in the BOP literature by offering a theory-based conceptual framework of enhancing market exchanges through national culture, non-traditional human-based capitals, and transformative mobile technology. Global and BOP firms may use the model to frame and guide their marketing activities in these under-served markets, overcoming some of the challenges posed by deep poverty. 
We hope other scholars will join us in investigating further ways of delivering ethical solutions to BOP markets by applying the strengths of commerce and sound business practice.\n",
"role": "user"
},
{
"content": "Purpose: The base of the pyramid (BOP) is characterized by deep and wide poverty, which dampens market exchanges, or making/selling and buying/consuming activities. The purpose of this paper is to address the specific issue of how national culture distinguishes BOP markets in terms of exchange activities, and the broad issue of how market exchanges can grow and flourish by accounting for comparative differences across BOP markets.\nDesign/methodology/approach: The study design is a conceptual framework drawn from the extant BOP literature and several theories such as Amartya Sen's theory on poverty, and Anthony Bebbington's concepts of human capital. The framework specifies research propositions for future empirical examination.\nFindings: The conceptual framework proposes that BOP poverty lowers or inhibits market exchanges but is countered by several factors: national culture (performance orientation), non-traditional assets (creative and social capitals), and transformative technologies (mobile telephony). Assuming these factors vary by BOP setting, greater performance orientation alongside higher social capital, creative capital, and mobile telephony directly and/or interactively increase market exchange activities.\nResearch limitations/implications: Among research implications are the application of other culture theories to the BOP market exchange issue, and the need to examine the role of government and other non-traditional capitals in exchanges.\nPractical implications: Managerial implications include the targeting and selection of BOP markets and development of marketing tactics that leverage cultural, nontraditional, and technological assets.\nOriginality/value: This paper explores how to counter the negative effects of BOP poverty on market exchanges by leveraging the distinctives and variations among BOP markets.\n",
"role": "assistant"
}
] |
High-performance work practices and organizational performance in Pakistan
|
[
"Employee productivity",
"Employee involvement",
"Human resource management",
"Employee turnover"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The human resource management (HRM) literature has focussed on the links between high-performance work practices (HPWPs) and organizational performance for almost two decades; however, research has largely focussed on firms in western countries. Previous studies have shown that organizational performance improves considerably with the implementation of HPWPs practices. A number of studies demonstrate improvement in employee level outcomes, such as absenteeism and turnover (Batt et al., 2002; Boselie, 2010; Guest et al., 2003; Guthrie et al., 2009), enhanced labour productivity (Combs et al., 2006; Datta et al., 2005; Huselid, 1995; Wood and de Menezes, 2008) and financial performance (Collings et al., 2010; Delaney and Huselid, 1996; Subramony, 2009).\nUniversalistic vs context approaches to HRM perspective: The universalistic perspective argues that, over time, all HRM models will converge to the US model of HRM with US multinationals, business schools and consultants playing a major role (Brewster, 2007). Moreover, the adoption of \"best practice\" HR, which underpins the universalist perspective, will help to generate greater returns for the firm (Guest et al., 2003; Guthrie et al., 2009; Huselid, 1995; Pfeffer, 1998; Wood and de Menezes, 2008).\nHR practices, HR systems and performance: The literature often explores the links between HRM and performance by focusing on systems or bundles of HPWPs. A number of high-performance studies that adopt a systemic approach argue that HR practices are more effective when used in a coherent and consistent manner, incorporating the arguments of contingency and configurational perspectives and emphasizing the internal and external alignment of practices to generate synergistic combinations that affect organizational outcomes (Kepes and Delery, 2007). 
For instance, some research that examines HR in Asia finds, in general, positive associations between HR and organizational performance; these studies tend to examine high-performance workplace systems rather than individual practices (Bae et al., 2003; Chand, 2010) and have the advantage of assessing the overall impact of HR.\nHypothesis development: Drawing on Marchington and Wilkinson's (2008) work, we examine five HPWPs that previous studies have researched extensively (Guest et al., 2003; Guthrie et al., 2009; Macky and Boxall, 2007; Wright et al., 2003). Specifically, we examine how selection and recruitment, employee involvement and participation, performance-related pay, performance review and appraisal, and extensive training, learning and development are linked to performance outcomes in the context of Pakistan.\nResearch methods: We conducted our own survey that includes questions used in previous studies (see below), as there are no relevant secondary data sets that we could use that would enable us to meet our objectives. Our sample consists of establishments of multinational and local firms operating in three important sectors, banking, pharmaceutical and information technology, in Pakistan. The amount of inward FDI in these sectors and the presence of a sizeable number of MNC subsidiaries and local firms make them ideal for examining the links between HPWP and organizational outcomes. In addition, these sectors are often important in emerging economies (Ahmad and Allen, 2015; cf. Asadullah and Talukder, 2019). Respondents were branch managers, general managers or plant managers. Using a structured questionnaire, one of the paper's authors conducted face-to-face interviews with the managers. Some 392 out of 1,081 establishment managers participated in the research. Of the respondents, almost 34 per cent were from MNC subsidiaries and 66 per cent from domestic firms. 
We focussed on senior establishment-level managers rather than HR managers, as they are more reliable, aware and impartial about their organization's people management processes and practices (Boselie et al., 2005). Due to their close proximity and first-hand experience and low risk of variance in HR practices within the unit, the responses are likely to be more valid (Wright et al., 2003).\nOperationalization of variables: We measured all variables using a five-point Likert scale. Selective hiring and sophisticated selection (recruitment) was operationalized using three items: How often does your recruitment process generate as many good/qualified applicants as you need (Guest et al., 2003)? Are one or more employment tests (e.g. skills tests, aptitude tests, mental/cognitive ability tests) used prior to hiring (Guthrie et al., 2009)? Is there a deliberate attempt to provide a preview of what work in the organization will be like, including more negative aspects, as part of the recruitment and selection process (Van Wanrooy et al., 2013)? The factor analysis suggests a single factor with an eigenvalue of 2.40 that explains 80 per cent of these items' variation. The scale has a Cronbach's α of 0.723.\nResults: Table I shows the descriptive statistics and correlation coefficients for all variables. Table II provides the validity and reliability statistics for the variables. Most correlations are low; thus, multicollinearity is not a problem (highest correlation: 0.514) (Hair et al., 2006). The threshold point for minimum factor loading is 0.70 (Hair et al., 1998). The loadings for all of the items to their relevant constructs are in the acceptable range, i.e., above 0.70; therefore, the factor pattern matrix shows the relevance of the items to their relevant constructs. 
Table II also provides the constructs' composite reliabilities.\nDiscussion: We found that two practices (recruitment (H1) and training (H5)) were consistently and statistically significantly associated with lower employee turnover, higher productivity and higher financial performance. This supports H1 and H5, shows the universal applicability of these two practices and demonstrates that firms that use wider pools of applicants and processes to select the most appropriate candidates are likely to benefit along many measures of firm performance (Armstrong et al., 2010; Lawler et al., 2011). The evidence was also consistent with studies investigating recruitment and selection practices in firms in Pakistan (Masood, 2010; Raziq and Wiesner, 2016).\nConclusion: Theoretical implications\n",
"role": "user"
},
{
"content": "Purpose: Research on the relationship between high-performance workplace practices (HPWPs) and organizational performance has largely focussed on western settings, limiting the knowledge of how these systems influence performance in other countries, including Pakistan. Universalistic assumptions underpin the HPWP paradigm; to examine the validity of these assumptions, the purpose of this paper is to study the links between HPWP and performance in Pakistan, a country with different cultural norms and institutional settings to those in which most research has been conducted.\nDesign/methodology/approach: The authors draw on a unique survey of 392 establishment managers in the banking, pharmaceutical and information technology sectors. The authors include managers of foreign-owned multinational subsidiaries and domestic firms to ensure the sample represents firms in Pakistan.\nFindings: The authors find that some individual HPWPs (recruitment and training) are associated in a statistically significant way with lower labour turnover, higher productivity and higher financial performance. Employee involvement is associated with lower labour turnover and higher labour productivity. Compensation is associated with higher financial performance. None is linked to higher labour turnover, lower productivity or lower financial performance in a statistically significant way. Performance appraisal was not statistically significantly associated with any of the three outcome variables.\nOriginality/value: The results provide some relatively strong support for universalistic assumptions, but also highlight the need for future research to examine the variable links of some HPWPs and the lack of any association for the performance appraisal measure.\n",
"role": "assistant"
}
] |
Higher vocational education in China: A preliminary critical review of developments and issues in Liaoning province
|
[
"Economic development",
"China",
"Vocational training"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: China has opened up its economy to the world since the early 1980s. This has led to major market reforms within the country, resulting in spectacular growth in both the size of the Chinese economy and in China's share of world trade. According to a recent WTO report real growth in GDP averaged 8.9 per cent over the period 1997-2004, whilst China's share of world trade increased to 6.7 per cent, making it the world's third largest trader after the European Union (14.5 per cent) and the US (13.6 per cent) (WT/TPR/S/161, 2006, p. 1).Much of this growth has been as a consequence of international trade and inbound FDI, and in both respects the European Community (EC) has been a significant player. In 2004, the EC ranked fourth as a provider of China's total imports (11.1 per cent by value); and second as a recipient of her exports (18.9 per cent). In respect of direct inbound FDI the EC ranked fifth, providing 7.0 per cent of the total 2004 figure of US$60.6 billion (WT/TPR/S/161, 2006, p. 27).As alluded to above, profound changes have occurred in China's education system as a result of these market reforms. Currently, its focus is on improving the country's intellectual outlook by producing appropriately qualified graduates capable of contributing to all aspects of China's development. Increasingly, reform of higher education is geared towards training technical specialists for employment in all sectors of the country's developing economy. Chinese universities, colleges and institutes offer a range of higher vocational education programs at the undergraduate and postgraduate levels. 
Admission to these various programs is by way of a centralized enrolment system, in which the Central Government's Ministry of Education oversees admissions committees at the provincial level. The Chinese higher vocational education and training (HVE) system, in particular, has come under close scrutiny recently. The central government and provincial governments, and industry generally, have expressed increasing concern over the lack of relevance of many current HVE offerings, to the point where there is serious doubt as to the ability of the system to deliver sufficient numbers of appropriately qualified graduates to ensure the continued growth and development of the Chinese economy (Guo, 2003). The issue relating to the necessity to realign higher vocational education outcomes, such that they are congruent with the labour market needs of a rapidly developing economy, is thus an important one and deserving of further research. The focus of this paper is on the issues and problems confronting the HVE sector in Liaoning province and their implications for the central and provincial governments' economic strategy for reviving the Northeast Industrial base (Cui and Liu, 2000, p. 5). Geographically, Heilongjiang, Jilin and Liaoning provinces constitute the north-eastern region of China. However, Liaoning forms the locus of this revival strategy, successful execution of which is designed to replace the region's \"rust-belt\" industries with industries founded in the \"new economy\" including state of the art agricultural produce processing industries, new high technology industries and modern service industries. It is proposed that as a consequence of this development Liaoning province will become the country's premier raw material and equipment manufacturing base (Wu, 2004). It is evident from the above that the Liaoning Provincial Government will need access to substantial human and financial resources if it is to achieve the objectives of its proposed economic reforms. 
The acquisition of additional financial resources in the form of increased levels of foreign direct investment is seen as the key driver of the reform process (Shen, 1997). However, it is well established in the international literature that a critical determinant of the rate at which such funds flow into a developing economy is the extent to which investors are satisfied as to the pre-existence of a well trained and relatively sophisticated labour force (Toner, 2004). In a recent study of FDI inflows into China, Ting (2005) was able to demonstrate that this view holds true on a regional basis as well. The implications of this fact are profoundly challenging for the Liaoning Provincial Government and the provincial HVE system. In this paper the authors attempt to provide insights into the above mentioned issues, based on a critical review of recent developments within the Liaoning HVE system. Specific problems faced by provincial and central government agencies in meeting the labour market challenges presented by an economy in transition are explicated, as are current constraints in the implementation of the ongoing reform of the HVE system. Finally, several reference models of vocational education and training reform are used in order to frame a series of strategic responses to overcoming these constraints, both regionally and nationally.\n2 The development of HVE in China: The origins of formal education in China go back to at least the Shang Dynasty (1523-1027 B.C.). In this period, education was the privilege of the elite few and it primarily existed for the purpose of producing government officials. The curriculum then centred on the so-called \"six arts\" being mathematics, music, archery, chariot-riding, history, and rites (Du, 1992). However, the curriculum gradually gave way to one based on \"The Four Books\" and \"The Five Classics\"[1], which are based on the teachings of Confucius (551-479 BC) and the Warring States period (770-221 BC). 
These works collectively define Confucian philosophy and outline the principles of society and government and codes for personal conduct, which exerted a fundamental influence on virtually all aspects of life and education prior to 1949. In 1949 the Chinese Communist party adopted a new educational system, modelled on that used in the (then) USSR. The rationale for the adoption of the Russian model was that it had proven effective in meeting Russian technological needs and given that similar economic and political systems were at play in China, a similar educational system would satisfy China's technological needs also. In the 1970s, the guiding principle was to bring about educational reforms in order to realize the \"four modernizations\" of advances in the areas of agriculture, industry, national defence and science and technology. It was also during this period that the system of vocational and work-study schools was expanded to provide an articulation track for those not meeting the standards for college entry. The definitive restructuring of the Chinese education system started in 1985 with the \"Decision of the Reform of the Education System\" which guided reforms at all levels of education. In 1996 the Chinese Government promulgated the first \"Vocational Education Law\" which stated that advanced vocational education was to be the highest level of vocational education in China. It also provided a legal framework for the development and perfection of vocational education nationally (China Education Research Network, 2005). Currently, vocational education in China exists at three levels. At the junior secondary level government policy is to provide basic professional knowledge and certain professional skills to secondary school students, as part of their nine-year compulsory education. 
At the senior secondary level the policy is designed to prepare students for direct entry into the workforce, by imparting basic knowledge, theory and skills appropriate to their chosen fields. Finally, at the higher vocational education (tertiary) level, stated policy is to enroll graduates from regular high schools and secondary vocational schools into HVE institutions, with a view to imparting high level specialized technical and management skills. HVE graduates are needed to help meet the country's increasing demand for skilled and technically proficient specialists. These graduates are seen as pivotal to ensuring the success of the country's economic reform agenda. HVE schools currently exist in many major Chinese industrial cities, including Suzhou, Qingdao, Wuxi, Wuhu, Changzhou, Jingsha and Shenyang. Typically, HVE is managed by education and labour ministries and departments, operating at the central and provincial government level, although increasingly individual enterprises are assuming responsibility for providing such training to their own employees (China Education Research Network, 2005).\n2.1 Supply and demand considerations in the provision of HVE\n3 Higher vocational education in Liaoning province: The following section outlines recent developments within the HVE sector in Liaoning province and presents a description of current issues and problems. Predominantly secondary data are used and one of the authors was directly involved in their compilation. Sources included the Liaoning Provincial Government Enrolment and Employment Office and the Vocational Education Research Institute of the Liaoning Province Education Commission. The review period was 1997-2003, which represented the time span between the enactment of the Central Government's Vocational Education Law and the most current point in time for which statistics on HVE were available. 
Where data were unavailable for the entire period, the analysis was conducted over a shorter time frame.\n3.1 HVE institutions\n4 Critical issues and constraints: an international perspective and possible reform agenda: A review of the international literature on HVE reform reveals that the issues and constraints reported in the operation of the Liaoning HVE system exist also as constraints to the growth and development of effective vocational education and training systems in many other developing countries (Gill et al., 1999; Middleton et al., 1990, 1993). Middleton et al. (1993) explicate these constraints succinctly as: inadequate financing of public sector vocational education and training systems, fragmentation of VET systems, weak information links between training providers and employers, weak institutional capacity and incomplete information on private training providers. This section attempts to synthesise the international literature into a possible reform agenda for the Liaoning HVE system, using Middleton's set of constraints to underpin the analysis.\n4.1 Cost effectiveness\n5 Summary and conclusions: Preliminary findings provide evidence to support the view that the Liaoning HVE system has made a positive contribution to the growth and development of the provincial economy over the period 1997-2003. Moreover, statistics on foreign direct investment into the province support the view that the underlying reform process is proving successful and that the new economy is growing in a sustainable fashion. Employment into the new economy is proceeding in concert with this economic growth and there can be little doubt that the HVE system has played a positive role in achieving this outcome by providing graduates with the requisite skills. However, the review did identify a number of serious constraints to the effective operation of the HVE system, which could serve to limit continued economic growth. 
These included a low degree of market orientation; an insufficiency of resources both intellectual and physical, stemming from a chronic lack of financial investment; a relatively low level of overall cost effectiveness; and a poor image in terms of the perceptions of the population at large as to the value of HVE as a tertiary education pathway. A significant finding of this review was that in using international benchmarks for HVE reform in developing countries, the Liaoning HVE system could only be rated as marginally successful in terms of its ability to deliver acceptable placement rates for HVE graduates. The issues surrounding these constraints have been canvassed and their implications for the reform of the HVE system in Liaoning and China more generally, explicated. A possible reform agenda has been proposed, based in part on the findings of this review and in part on earlier international research, including that of the World Bank and the ILO. It is argued that many of these constraints will lessen in severity with the continued growth and development of the new economy and that this growth will be presaged by the continued growth in FDI into the province. Critically, the provincial government will need to consider the ways in which the returns from this economic growth are used to support the remediation of the existing HVE system, including the associated structural reforms necessary to sustain the system, as referenced in this paper. Internationally, the Chinese economy is seen as a major driver of growth in the global economy. In light of this it is the authors' view that these review findings provide a better understanding of the issues relating to its continued development, specifically as they relate to reform of the labour market and associated HVE system. 
This is considered of value to China's major trading and investment partners, of which the EC is one. As the title indicates, this has been a preliminary review only into the state of HVE in the province and it has raised more issues and questions than it has addressed. Clearly, more work is required in order to understand the range and relative importance of the strategic drivers of economic and HVE reform in China. A reliance on international case research findings, whilst useful in terms of the comparative insights it reveals, is no real substitute for rigorous research into the specifics of the situation in China. Equally, little was said regarding the impact of the on-going SOE reforms on the performance of the HVE system, notwithstanding the fact that the underlying cost effectiveness imperative is greatly affected by these reforms. These are clearly matters requiring further attention.\n",
"role": "user"
},
{
"content": "Purpose: - This paper addresses the increasingly important economy of China and its associated higher vocational education and training (HVE) system. More specifically, the paper aims to present the findings of a critical review of recent developments in the HVE system in Liaoning province.\nDesign/methodology/approach: - As part of this review, specific problems faced by Liaoning provincial and central government agencies in meeting the labour market needs of an economy in transition are explicated, as are current constraints in the implementation of the ongoing reform of the HVE system vis-a-vis economic reform. Predominantly secondary data sources were used to conduct the review and one of the authors was directly involved in their compilation. The review period was 1997-2003, which represented the time span between the enactment of the central government's vocational education law and the most current point in time for which statistics on HVE were then available.\nFindings: - Preliminary findings provide evidence to support the view that the Liaoning HVE system has made a small but significant positive contribution to the growth and development of the provincial economy over the period. Equally, however, the findings identified a number of constraints to the effective operation of the HVE system which could serve to limit continued economic growth. These constraints exist in the form of a low degree of market orientation; an insufficiency of resources both intellectual and physical, stemming from a chronic lack of financial investment; a relatively low level of overall cost effectiveness and a poor image in terms of the perceptions of the population at large as to the value of HVE as a tertiary education pathway. 
It is the authors' view that the constraints identified within the Liaoning HVE system exist as a general set of constraints within the overall Chinese HVE system.\nResearch limitations/implications: - Understanding the nature of any labour market is a challenging and complex process and the authors are conscious that their preliminary review is deficient in a number of key areas. Firstly, further work is required to better understand the range and relative importance of the strategic drivers of economic and HVE reform in Liaoning province and China generally. Secondly, little has been said regarding the impact of the on-going state owned enterprises reforms on the performance of the Liaoning HVE system, notwithstanding the fact that the underlying cost effectiveness imperative is greatly affected by these reforms.\nOriginality/value: - Internationally, the Chinese economy is seen as a major driver of growth in the global economy. In light of this it is the authors' view that the findings of this preliminary review provide a better understanding of the issues relating to its continued development, specifically as they relate to reform of the labour market and associated HVE system.\n",
"role": "assistant"
}
] |
Exploring international atmospherics: The impact on shopping behaviors based on the language of the lyrics of music played in global apparel retailers' stores
|
[
"International retailing",
"Atmospherics",
"Music",
"Language of the lyrics of music",
"Store-fit"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: As retailing in the USA continues to experience declining revenues from brick and mortar channels (Thompson, 2017), retailers are increasingly looking to international markets for growth. A growing question that retailers continue to face as they expand internationally is whether they should adapt their store atmospherics to the local culture or have a consistent design across stores and countries.\nConceptual framework and hypotheses: Noting the effects of environmental factors on consumers' behavior, retailers manipulate the design of their stores in an effort to influence consumers' perceptions, evaluations, behaviors and experiences at the point of purchase (Ballantine et al., 2010; Puccinelli et al., 2009). Elements of store design are known as atmospheric stimuli (Kotler, 1973), and include temperature, lighting, layout, cleanliness, scent, color, crowding and music (Michel et al., 2017; Spence et al., 2014; Turley and Milliman, 2000). The research stream that seeks to understand how the environment (or \"atmosphere\") of a retail store affects customer's behaviors, then, is collectively known as \"retail atmospherics\" (Baker et al., 2002; Grewal et al., 2014).\nMethodology: Design and procedure\nResults: In an initial analysis, we checked for possible confounding effects on shopping time due to gender, age and traffic flow in the store. As these effects were all non-significant, we collapsed across them in subsequent analyses, and they are not discussed further.\nDiscussion: The reported research extends previous studies' findings in an effort to learn more about how the language of the lyrics (native vs foreign) of music influences shoppers' behavior in international retailing contexts. 
Over the course of a month, the language of the lyrics (native vs foreign) and the volume of the music played were experimentally varied in a retailer's store in Mexico, with time in store and purchase rates tracked. Specifically, the results indicate that when a global retailer plays music with lyrics in a foreign language that \"fits\" with the store's environment (i.e. English), shoppers are more likely to make a purchase. This effect is mediated by greater time spent in the store when music with English-language lyrics is played. The volume of the music played does not affect the influence of store-fit created by the language of the lyrics of music.\n",
"role": "user"
},
{
"content": "Purpose: Previous research has demonstrated that, in retail settings, music has some of the largest effects on consumer behavior. However, it is still unknown how the language of the lyrics (native vs foreign) affects consumer behavior. In order to address this gap in retail atmospherics, the purpose of this paper is to examine the differential effects of the language of the lyrics of the music played and explain the interactions between the music language and volume.\nDesign/methodology/approach: The data were obtained from a field study conducted in an apparel store and from 241 shoppers speaking Spanish as their native language. The study involved the manipulation of language of the lyrics of music played in the store (native vs foreign).\nFindings: Results indicate that customers in a non-English speaking country are more likely to make purchases when music is played in English, which fits with the store's global image. This effect is mediated by time spent in the store.\nPractical implications: For managers of global apparel brands, the results suggest that English music may be a good option to increase time spent in the store and subsequent purchases. This is especially attractive as music is an atmospheric cue that can be easily modified at less expense than other atmospheric cues.\nOriginality/value: This work is the first to demonstrate that fitting the language of the lyrics of music in an international retail store to a global brand image affects consumer behavior. Furthermore, it demonstrates that atmospherics research may not directly transfer to non-English speaking countries.\n",
"role": "assistant"
}
] |
The role of academic health centres in improving health equity: a systematic review
|
[
"Health policy",
"Systematic review",
"Health systems",
"Health equity",
"Academic health centres"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Academic health centres (AHCs) are organisations that pursue a \"tripartite\" mission to provide high-quality care to patients, undertake clinical and laboratory research, and train the next generation of health professionals (French et al., 2014). To deliver this mission, AHCs employ a wide variety of governance and operational models in combining higher education institutions delivering health professional education with one or more affiliated or owned teaching hospitals or health systems (Culbertson et al., 1996; Weiner et al., 2001; Wartman, 2007). Originating predominantly in Germany in the nineteenth century, the movement to link practice-based medical teaching in hospitals with university-based research and education - a precursor to modern AHCs - spread globally following the publication of the Flexner Report in 1910 (Ovseiko and Buchan, 2014). Today, AHCs are either established or developing in multiple countries and are well-established in North America, where they are the focus of a substantial body of research, commentary, books and policy reports.\n2. Method: The review protocol was registered with the international prospective register of systematic reviews on 22 November 2016 (PROSPERO 2016:CRD42016051802). The protocol also underwent a peer review process and was published in May 2017 (Edelman et al., 2017). The PRISMA guidelines and the equity extension were followed in the process of this review.\n3. Results: 3.1 Characteristics of the included publications\n4. Discussion: A prior scoping review of AHCs using systematic methods (French et al., 2014) explored the managerial, political and cultural perspectives of AHCs and, as in the present review, found the literature on AHCs to be largely normative, atheoretical and predominantly focussed on North America. 
This review extends and complements the scoping review by systematically synthesising and critically appraising the literature on the role of AHCs in improving health equity. Analysis of the literature on AHCs using this health equity lens, which involved examining how AHCs are (or are not) eliminating unfair and unjust health disparities and addressing the determinants of these health disparities, has contributed a number of unique findings.\n5. Conclusion: This review has systematically reviewed the literature on the role of AHCs in improving health equity. The review has found substantial interest in the capacity and potential of AHCs to adopt a health equity role, highlighting the relevance of health equity concepts in discussions about the role and missions of AHCs, particularly in the context of the growing global focus on health equity as a key measure of health system performance, and the global spread of AHC models. However, the review also found a profound lack of evidence, robust or otherwise, to support this perceived capacity, and of demonstrated contributions of AHCs to health equity goals - indicating an urgent need for further research. Such research should improve the quality of the evidence base on the capacity of AHCs to contribute to health equity goals, and on the efficacy of any equity-focussed strategies and interventions within AHCs. This research should consider the health systems and policy contexts in which the AHCs are embedded and could adopt a social accountability framework.\n",
"role": "user"
},
{
"content": "Purpose: Academic health centres (AHCs) are organisations that pursue a \"tripartite\" mission to deliver high-quality care to patients, undertake clinical and laboratory research, and train future health professionals. The last decade has seen a global spread of AHC models and a growing interest in the role of AHCs in addressing health system equity. The purpose of this paper is to synthesise and critically appraise the evidence on the role of AHCs in improving health equity.\nDesign/methodology/approach: Peer-reviewed and grey literature published in English between 2000 and 2016 were searched. Articles that identified AHCs as the primary unit of analysis and that also addressed health equity concepts in relation to the AHC's activity or role were included.\nFindings: In total, 103 publications met the inclusion criteria of which 80 per cent were expert opinion. Eight descriptive themes were identified through which health equity concepts in relation to AHCs were characterised, described and operationalised: population health, addressing health disparities, social determinants of health, community engagement, global health, health system reform, value-based and accountable financing models, and role clarification/recalibration. There was consensus that AHCs can and should address health disparities, but there is a lack of empirical evidence to show that AHCs have a capacity to contribute to health equity goals or are demonstrating this contribution.\nOriginality/value: This review highlights the relevance of health equity concepts in discussions about the role and missions of AHCs. Future research should improve the quality of the evidence base by empirically examining health equity strategies and interventions of AHCs in multiple countries and contexts.\n",
"role": "assistant"
}
] |
Global poverty, hunger, death, and disease
|
[
"Sub Saharan Africa",
"Poverty",
"Diseases",
"Hunger",
"Death"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Poverty, hunger, death, and disease have been cruel realities of the human condition for centuries. Horrific examples abound: Charles Dickens' credible accounts of English workhouses, the great famines in Ireland and India, the Black Death, and the scourge of AIDS/HIV notably in sub-Saharan Africa.\nI. Death and disease: Our attention in the following is drawn to six infectious diseases: pneumonia, diarrheal diseases, HIV/AIDS, tuberculosis, malaria, and hepatitis. Upwards of 10.453 million persons die of infectious and parasitic diseases (Table I) making those diseases one of the leading causes of death worldwide. The most recent estimates from the World Health Organization put the death toll from cardiovascular diseases - the leading cause of death around the world - at 17.1 million (WHO, 2009b). Another 7.4 million died of cancer (WHO, 2009a).Pneumonia\nII. Poverty and hunger: There are two standards by which poverty is defined and measured: the absolute (minimal living) standard and the relative (income-distribution). The minimal living standard addresses the question: \"How much does one need to live at a minimally acceptable level?\". The income-distribution standard addresses the question \"How much does one have relative to others?\". In the following we use \"minimal-living\" and \" income-distribution\" because they differentiate between these two standards with greater clarity.Students of poverty are divided as to which standard to use. In the USA, the official definition of poverty from the very beginning has incorporated minimal living, dismissing out of hand income distribution. Some countries have opted for the income distribution, rejecting minimal living.This divide, which gives the impression that how one defines and measures poverty in the end is entirely arbitrary, is unfortunate and unnecessary. There is nothing fundamentally arbitrary in this matter. 
The minimal-living standard originates in human individuality whereas the income-distribution standard originates in human sociality.An income-distribution standard makes sense only in the developed world where it should be used alongside the minimal-living standard but never as a replacement for that standard. Employing both concepts allows us to incorporate both dimensions of human nature into the definition and measurement of poverty.However, common sense alone tells us that outside the developed world where hundreds of millions of humans struggle to survive from one day to the next, poverty must be defined and measured first in terms of minimal living. It is meaningless to set the official poverty threshold at, say, one-half the median income for all persons when large numbers of persons above that threshold cannot afford the bare essentials for human survival.In the following we focus on the numbers and proportions of persons who are poor and changes in poverty with the passage of time. A World Bank report published in 2009 is the source of the information in Tables III and IV on the cost of a common basket of goods and services consistent with the bank's concept of extreme poverty. In preparing its estimates of global poverty, the bank set the poverty threshold at US$ 1.25 a day - the average of the poverty thresholds of the 15 poorest countries in the world for which there are data. It is appropriately a minimal-living standard (The World Bank, 2009).Estimating poverty\nIII. Four revolutions: Nearly 50 years ago, Barbara Ward identified four factors, which she described as revolutions, that probe more deeply into the nature and causes of poverty, hunger, death, and disease. 
These four revolutions, she asserted, have swept over the Western world since the 1850s but are not widely evidenced in the poor countries of the world.The four are the biological revolution that broke through the cycle of peaceful population growth and violent diminution, the intellectual revolution of materialism and this-worldliness, the political revolution of equality, and the scientific and technological revolution that involves the application of savings and the scientific method and insight to everyday business affairs (Ward, 1962).Before the biological revolution of modern medicine and sanitation, populations grew until the limits of economic resources had been reached, and then declined due to malnutrition and starvation or war with a neighboring tribe over control of resources. Until this revolution, tribal war, and with it disease and hunger, were revisited in every generation.By the intellectual revolution, Ward means that archaic civilizations, notably tribal in nature, are backward looking, tend to hold on to the old ways, to mystery and magic rather than hard work and reason.The political revolution of equality breaks down the hierarchical nature of traditional societies, and the backward-looking ways that dominate such societies and subordinate merchants to warlords and landlords.In tribal societies there is little or no science. Mystery and magic predominate, effectively putting the exercise of the human will before the use of the human intellect in the manipulation of the physical matter of the universe for human ends. And there is no sustained saving in traditional societies, blocking investment in infrastructure and thereby holding back economic development (Ward, 1962).Remedies\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to address the global dimensions of poverty, hunger, death, and disease.\nDesign/methodology/approach: - The authors assemble the most recently available data on these four human scourges from reliable international sources.\nFindings: - Reducing the higher incidence of poverty, hunger, death, and disease in sub-Saharan African countries depends critically on the extension of four revolutions - biological, intellectual, political, and scientific-technological.\nOriginality/value: - The paper selects and interprets these data from the perspectives of economics and paediatrics.\n",
"role": "assistant"
}
] |
E-learning at Dartmoor National Park Authority: How to minimize drop-out rates and resistance to future training programs
|
[
"E‐learning",
"Employee development",
"Communication technologies",
"Training methods",
"United Kingdom"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The role of information communication technology (ICT) to increase corporate and individual performance has gained significant review in recent years. Nevertheless, for these improvements in performance to be realized it is not enough to just invest in technology, it is also essential to invest in the organization's human resources.\nDNPA case study: Dartmoor National Park Authority (DNPA) has made significant investments in ICT systems to support the work of its staff. However, the role of ICT at DNPA has changed, particularly with the advent of e-business and e-governance and the expectation of customers that they should be able to communicate and conduct business transactions with the Authority electronically.In 2004 DNPA acknowledged that many of its staff lacked the required skills to make effective use of the available ICT. In 2005, it was decided that the internationally recognized European Computer Driving License (ECDL) would be adopted to enable people to demonstrate their competence in computer skills.Approximately 30 percent of the staff initially signed up to the program, to be delivered through a web-based e-learning package. It involved completing four out of seven available modules, each taking up to ten hours to complete. Line managers were required to sign a Learning Agreement for each trainee, guaranteeing that time would be made available for the training. By 2008, only 15 of the 38 who started the program had completed four modules and achieved a certificate. A further eight had completed at least one module and then stopped, and 15 did not complete any module.Factors affecting drop out rates\nTrainee needs: It is essential that the needs of trainees are considered, and training is targeted to address skills shortages. Trainees who are enrolled on a course that they do not consider relevant are among the most likely to drop out. 
User resistance to e-learning is also a significant issue. This can be reduced by marketing the training program thoroughly prior to introduction. Piloting with a small number of people who can then become mentors and promote the training program can also help.A trainee must be motivated to complete the training. This motivation may arise from a desire to gain a qualification, increase future career prospects or perhaps to make them more efficient in their job. However, what is clear is that those people with no motivation or desire for the training are particularly unlikely to set aside the time to complete it.\nComment: This is a review of \"Exploring barriers to effective e-learning: case study of DNPA\" by Fenio Annansingh and Ali Bright. This article makes a number of practical recommendations that organizations can use to help minimize drop-out rates and alleviate barriers to future e-learning programs. It is clear and concise, and employs an interesting and readable style. The article is of interest to HR managers, training coordinators and line managers of companies offering e-learning training programs to their employees.\n",
"role": "user"
},
{
"content": "Purpose: - Reviews the latest management developments across the globe and pinpoints practical implications from cutting-edge research and case studies.\nDesign/methodology/approach: - This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.\nFindings: - The role of information communication technology (ICT) to increase corporate and individual performance has gained significant review in recent years. Nevertheless, for these improvements in performance to be realized it is not enough to just invest in technology, it is also essential to invest in the organization's human resources.\nPractical implications: - Provides strategic insights and practical thinking that have influenced some of the world's leading organizations.\nSocial implications: - Provides strategic insights and practical thinking that can have a broader social impact.\nOriginality/value: - The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.\n",
"role": "assistant"
}
] |
Major business-environment influences on the cereal products industry supply chain: An Australian study
|
[
"Supply chain management",
"Business environment",
"Cereal",
"Foods",
"Australia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: The purpose of this paper is to present and discuss the major business-environment influences impacting on the component companies of the Australian cereal products supply chain. An exploratory cereal products supply chain study was conducted by the authors in conjunction with the Victorian Government Department of Innovation, Industry and Regional Development examining the status of supply chain networks and practices in Victoria's cereal products industry (see Section 2 for details). The study applied research frameworks drawn from literature-based themes including business-environment factors affecting supply chains.The Australian cereals industry, encompassing export commodity activity and domestic supply, is in the process of rapid consolidation, with varied degrees of vertical integration over the past few years, towards an anticipated handful of major players. The major players, with interests and establishments throughout Australia, are experiencing a period of intense competition for industry dominance, particularly in the predominant commodity export field, with impacts spreading through the remainder of the supply chain.The cereal products industry under discussion comprises the gamut of domestic cereal-grain-derived manufacturing activity. The production activity includes the Australian and New Zealand Standard Industry Code (ANZSIC) manufacturing categories of flour milling, cereal food and baking mix manufacturing and the production of bread, biscuits, cakes and pastries, prepared animal and bird feed and beer and malt (refer to Figure 3 and the related discussion). Cereal products are a key component of the processed food and beverage industry, Australia's largest manufacturing industry, with a turnover of over $AU55 billion (National Food Strategy (AFFA), June 2002). 
The scope of the project encompassed Victorian industrial activities for cereal-based foods and by-products. Wheat, barley and oats were the main Victorian-grown grains associated with the study in that they are the major grains grown and processed in Victoria. The barley industry is also highly important to Victoria in that it underpins the high-output local industries of stock feed, and malting and beer brewing.The analysis of academic literature relating to the cereals industry, agribusiness supply chains, the business-environment and competitive supply chains provided the researchers with a theoretical framework upon which to assess supply chain effectiveness. A research instrument (see Section 2), to be applied in interviews, was developed from the aggregate theory. Business-environment themes viewed in the literature as important considerations in supply chain effectiveness included technological advances (Hobbs and Young, 2000; Frolich and Westbrook, 2001), the regulatory environment (Lindgreen and Hingley, 2003), socio-economic factors impacting on consumer behaviour (Hobbs and Young, 2000) and retail power (Robson and Rawnsley, 2001; Piggott et al., 2000). They also included the necessity of being globally competitive (Folkerts and Koehorst, 1998) with increased whole-chain accountability and product traceability (Rademakers and Knight, 1998; Wilson and Clarke, 1998; Viaene and Verbeke, 1998). Other factors included the imperative of timely delivery throughout the supply chain (Rademakers and Knight, 1998) and meeting the requirements of the busy-lifestyle global consumer possessing an increasing desire for branded, processed meals and \"environmentally-safe\" products (Piggott et al., 2000). A broad framework of research topics that includes the business-environment scenario is provided in the methodology section. 
The remainder of this paper discusses the project methodology, the significance of the Australian cereals industry, the ten influencing factors, and the associated model and its importance to the future of the cereals industry.\n2 Methodology: Three research objectives were addressed in the cereal-product supply chain study. The first was to examine and map the complexities of supply chain networks operating throughout the Victorian cereal products industry. The second was to undertake a preliminary assessment of the global competitiveness of the Victorian cereal product supply chain practices and the third objective was to identify obstacles to, and opportunities for, increased globally competitive supply chain activity. The study involved two phases of fieldwork applying open-ended field interviews that explored issues relating to the themes encompassed in the research framework. Phase one entailed initial open-ended interviews with 20 grain, grain-marketing and cereal food industry practitioners and authorities, seeking their views on current cereals industry supply chain issues and challenges. The interviewed cereals industry leaders represented major Australian grain exporting companies, government departments, relevant industry associations and companies involved in cereal grain manufacturing activities. The purpose of the industry interviews was to obtain a widely informed overview of the cereal-industry competitive scenario and its component supply chains.Phase two was conducted across a representative sample of fifteen cereal-product manufacturers with operations in Victoria. The sample was obtained through industry database scrutiny and referrals from industry organisations. Although most of the interviews were conducted in the state of Victoria, the Victorian sample was considered representative of the Australian scenario. 
This was the case in that:* many companies operated throughout the grain-growing states; and* the Victorian scenario can be viewed as a microcosm of Australian grain production of mainly wheat and cereal processing operations in regional and metropolitan centres.The interviewed cereal processing managers represented the industry sectors of bread (small and large-scale), flour, baking mix, biscuits, cakes, pastry, stockfeed, pasta and snack-food manufacturing. Grains processed by the companies included wheat (soft and hard varieties), oats, barley, maize, rye, ancient grains and organic grains. The Victorian regions represented by the sample included Wimmera, Melbourne, Gippsland and Loddon. Because of a wish for confidentiality of information by the majority of the company managers, aggregate findings only were included in the government report and this paper. Phase two research activities included industry visits to sites throughout the state, plant inspections where possible and confidential semi-structured open-ended interviews with senior managers addressing the three research objectives. The interview research instrument first addressed the descriptive elements of company operations and supply chain players, networks and logistical activities (Table I). Then three types of open-ended questions were asked of the company managers, namely, factual questions concerning the company's operations, opinion-seeking questions concerning the company, the supply chain and the industry and specific questions concerning competitive supply chain and manufacturing practices. Data analysis was conducted by using the Nvivo2 computer package addressing each of the three main project objectives identified above. The findings were further related to relevant Victorian and Australian government strategic agendas.\n3 Findings: With an overall gross value of $AU9.321 billion in 2001-2002 (ABS, 2003), the cereals industry is of major importance to the Australian economy. 
The industry is also becoming increasingly focussed on global trade. Folkerts and Koehorst (1998, pp. 385-8), who examined the European food supply chain, stressed the importance to global-market-focussed supply chains of adopting new methods, tools and levels of expertise in order to compete successfully. They recognised at that time a need for increased cooperation between research institutions, agribusiness and food industries. Whilst the Australian cereal industry does have a strong grain biotechnology research base, it is evident from the paucity of refereed articles scrutinising Australian cereal industry supply chain activity, that there is a knowledge gap in this field to be rectified through future collaborative research initiatives.Wheat, barley and oats are the predominant Australian cereal grain crops (Figure 1). With a gross value of $AU6.356 billion in 2001-2002 (ABS, 2003) wheat is the major export crop. More than 80 per cent of exported wheat supplies South-east Asian and Middle Eastern markets. The majority of exported Australian wheat supplies flour millers, and is subject to intense price competition, with the exception of Japan that pays a premium for quality supply (Grain Growers Association, 2002, pp. 4, 5). Grain sorghum and triticale, grown almost exclusively for stock fodder, amounted to $AU349 million and $AU168 million, respectively. Rice, grown mainly in the state of New South Wales totalled a gross value of $AU327 million and oats amounted to $AU251 million.Barley exports amounted to $AU1.75 billion in 2001-2002. Around 60 per cent of barley is normally processed into a variety of stock feeds with the other 40 per cent being processed into malt that is mainly used in brewing. Of the malting barley, approximately 65 per cent is normally exported, with the remainder being converted to malt. About two thirds of the malt is normally exported. 
Cereal crops are grown mainly in the semi-dry areas of the continent, inland from the coastal mountain ranges and lessening towards the large arid centre. Figure 2 shows the geographic spread of the Australian winter grains crop, consisting mainly of wheat, oats and barley. The darker areas depict regions of greatest cereal planting concentration. It is evident in Figure 2 that freight distances from some grain regions to capital cities and ports (see white circles) are substantial. Hence freight cost-minimisation and efficiency are important industry and supply chain considerations.Cereal products are a key component of Australia's processed food and beverage industry, the largest manufacturing industry, with a turnover of over $AU55 billion (Department of Agriculture, Fisheries and Forestry, 2002). Through the supply of grain and flour-based products and prepared food ingredients, the overall cereal industry underpins a number of industries, including the Australian flour milling and cereal food and baking industries. It also supports, through grain-based ingredients, the prepared food manufacturing and confectionery industries and contributes significantly to the barley-based food and beverage industries of malt and beer and prepared animal and bird feed manufacturing.Four major activity channels are evident in the Australian cereals industry (Figure 3). The channels are export-commodities, and the cereal product group of cereal-based foods, beer and malt and prepared animal and bird feed. The latter three channels form the cereal product industry. 
Further definition of this industry using the Australian and New Zealand Standard Industry Classification (ANZSIC) system clearly shows the supply chain complexity of flour-based cereal food activity sectors (Figure 3).The \"Flour Mill Product Manufacturing\" (C2151), \"Cereal Food and Baking Mix Manufacturing\" (C2152), \"Bread Manufacturing\" (C2161), \"Cake and Pastry Manufacturing\" (C2162) and \"Biscuit Manufacturing\" (C2163) are predominantly wheat flour based. On the other hand, the \"Prepared Animal and Bird Feed Manufacturing\" (C2174) and \"Beer and Malt Manufacturing\" (C2182) industry sectors are mostly barley-based.From an analysis of background information and the coded research transcript findings concerning important industry and company issues, ten key business-environment factors were identified as currently significantly impacting the Australian cereal-products supply chain. Where applicable in the following section, the factors have been discussed in relation to literature-based business-environment or supply chain themes. The ten identified factors are: globalisation and demand trends; industry complexity and realignment; power relationships; delivery imperatives; the competitive supply chain imperative; the information economy; industry accountability requirements; freight challenges; supply chain labour requirements and climatic conditions affecting crop yields. The factors are depicted in a diagram of forces impacting on the Australian cereal-product supply chain. 
This diagram (Figure 4) is a precursor to the final competitive supply chain model under development from the extensive findings of the research project.3.1 Globalisation and demand trends\n4 Conclusion: In summary, Figure 4 shows the ten identified business-environment influences that are impacting on the cereal products industry.Although they may initially impact on a part of the supply chain, the influences invariably have flow-on effects and present on-going challenges for the whole supply chain. The examination of the main issues facing the Victorian cereal product supply chain has shown that it faces an interesting and challenging future.On the one hand, there is enormous potential for increased cereal product exports to cater to global food-demand trends. Effective and efficient deliveries to appropriate global markets will be mandatory and the implication is a highly efficient supply chain. Ideally speaking, the globally-responsive cereal products supply chain, if that is the direction chosen by much of the industry, will need to strategically align itself with the wider Australian cereals industry, food industry and local, state and federal governments and overseas market agencies to maximise all infrastructure and strategic considerations. It will also need to maintain a constant supply chain dialogue focussing on common global competitiveness goals, mindful of current constraints and future challenges. On the other hand, the range of current supply chain hurdles discussed in this paper needs to be addressed by the industry and relevant parties as a precursor to a future-oriented strategy. The identified business-environment factors present strategic challenges to players across the entire cereals industry, including grain growing, marketing, product processing and wholesaling, and warrant careful consideration in inter-company initiatives aimed at long-term supply chain collaboration.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of the paper is to provide a contextual basis for the supply chain activity of the generic Australian cereal products supply chain. It identifies the business-environment factors underpinning the cereal products supply chain efficiency. It also further highlighted the complex nature of the extended supply chain and all the factors to be considered in its management. The approach to the topic is that of developing a holistic overview of the extended supply chain, its networks and its business-environment influences in order to understand the complex reality of issues that a supply chain manager must deal with in order to optimise efficiencies.\nDesign/methodology/approach: - A qualitative methodology was used in the study, involving an examination of supply chain practices throughout the cereal product supply chain, determined through interviews with players at all stages from seed to supermarket supply. Identification of key business-environment influences impacting on the cereal products supply chain was achieved through thematic examination of the responses and comparison of the findings with literature relating to the Australian business-environment. 
Field research activity included semi-structured interviews throughout the supply chain for mapping and examining the status of supply chain networks and practices.\nFindings: - The business-environment factor findings, depicted in a model, concerned the impact of globalisation and demand trends; industry complexity and realignment; power relationships; delivery imperatives; the competitive supply chain imperative; the information economy; industry accountability requirements; freight challenges; supply chain labour requirements and climatic conditions affecting crop yields.\nResearch limitations/implications: - The research was limited in that it represented a sample of viewpoints/supply chain activity in the Australian cereal products industry and lacked the comprehensiveness of a full industry study covering the entire gamut of regional, crop-wide and circumstantial influencing factors.\nPractical implications: - The paper highlights the influence of the identified business-environment factors on supply chain activity and points the way for supply chain managers in the cereal product industry to be aware of broad contextual factors as well as their own immediate operating environment. The identified business-environment influences present strategic challenges to players throughout the cereal products supply chain. A globally responsive cereal products supply chain will need to strategically align itself with the wider Australian grain industry, food industry and government to meet future market requirements. It will also need to maintain a constant supply chain dialogue focusing on common goals, being aware of constraints and future challenges.\nOriginality/value: - This research paper is original and of value to supply chain management theory in presenting a wide contextual view of the supply chain and hence supply chain management.\n",
"role": "assistant"
}
] |
Emergence of new modes of governance in activation policies: Czech experience
|
[
"Employment",
"Czech Republic",
"Public policy",
"Learning"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The welfare state reforms, related to changes that are under way in (post)modern society[1], show similar tendencies and are grounded on similar principles across various countries, despite variations in scope and content. The promoted principles (incorporated also in the field of employment policy in the European Employment Strategy) include a reviewed balance between rights and obligations, emphasis on employment, activation policies, the conditionality of rights, contractualisation (between the state and other welfare state agents, and between institutions and clients), individualisation, decentralisation and localisation of policies, pluralisation of actors, and new forms of governance accentuating indirect methods of influence, co-operation, and partnership (cf. Giddens, 1998; Mosley and Sol, 2001; Van Berkel and Hornemann Moller, 2002; Serrano Pascual, 2004; Van Berkel and Valkenburg, 2007; and others). Welfare state reforms in countries undergoing a market transition (including the Czech Republic) represent a deeper paradigmatic shift than elsewhere, particularly at the level of goals and principles that guide public policies (cf. Offe, 1996; Ferge, 1997; and others). On the other hand, the implementation of policies pursuing these goals is largely a product of the national context and implementation conditions, and that is in many respects rather different from the situation in the EU-15 countries. This paper deals with the question to what extent implementation conditions (at the national and local level) influenced the profile of activation policies in the Czech Republic. In agreement with variable clusters which, according to van Meter and van Horn (1975), affect the \"performance\" of public policy, we pay attention to contextual factors such as policy goals as formulated at the national level and implementation conditions, i.e. 
the legal and institutional framework, and resources available (financial, personal, power and influence, and organisational culture). Due to the fact that the policy process involves both policy implementation (a top-down approach) and policy formation (a bottom-up approach) (cf. Sabatier, 1986) we are also concerned with the question how the influence of implementation conditions is mediated through the interplay of top-down and bottom-up processes. In these processes the distinct loci in the policy-making process, that is the levels of governance - the political/administration system, institutional relations and the street level (cf. Lester and Goggin, 1998; Hill and Hupe, 2002) - may produce contradictory strategies and policies due to both ambiguity of goals and/or contradictions between goals and specific implementation conditions. Implementation is a quite complicated process that does not necessarily lead to the implementation of policy, but rather to the redefinition of goals and reinterpretation of outcomes, that is to evolution (Majone and Wildavsky, 1979; Lane, 1987). At the governmental level, the content of policy is rarely defined once and for all (Meyers et al., 1996). Local actors, particularly street-level bureaucrats, are therefore always given a certain discretion, which makes them important, if not the most important, actors in the process of policy implementation (e.g. Lipsky, 1980, 1991; Palumbo and Callista, 1990; Riccucci, 2005). Experiences with the public policies indicate that \"experimental implementation\" of the policy communities (Matland, 1995), sensitive to contextual conditions as well as to the goals and principles followed (and modified) at the local level, may enable quite viable and effective \"programme mutations\". These programme mutations are especially important, when policies initiated from the top-down do not match well the implementation conditions. 
In this connection, we pay attention to the process of learning that local actors experience in the course of implementing and realising activation policies. To examine the question about a link between implementation conditions and policy performance at the local level, we shall analyse the implementation of the most important methods of activation in the field of employment policy: Individual Action Plans (IAPs) and associated strategies of activation. In so doing we shall pay attention to two stages - first, the pilot phase (2003), and then the implementation phase at Local Employment Offices (three years later). At the first stage, we carried out case studies at four Employment Offices. At the second stage, we selected only those two where, in the initial phase, IAPs' implementation appeared less developed in multiple respects, especially professional ones. In this way, we aimed to verify whether the dynamics of implementation and the process of institutional learning would accelerate in the following phase - or, on the contrary, would continue to lag behind or even stagnate as a result of specific implementation conditions.\nImplementing activation in the Czech Republic: The goal of \"activation\" and implementation conditions\nDevelopments of the activation strategy: Changing context and implementation conditions\nConclusion: changing the model of activation from the bottom-up?: A new strategy of activation has been gradually forming at the local level. A high degree of discretion (and increasing decentralisation) given to the local employment offices in implementing IAPs enabled them to cope with inadequate implementation conditions. Discrepancy between top-down policy of activation and bottom-up policy increased. Activation is influenced (but less and less so) by the strategy adopted in 2002 in connection with piloting the IAPs. We referred to this strategy as an inconsistent or weak version of the strategy of enforced activation. 
This top-down line of policy-making is based on negative and (just to a small degree) positive job-search incentives and makes only a limited use of the individual approach. Application of such a version of the individual approach that would involve a broader support of the unemployed is hindered particularly by personnel and financial resources: the number and professional qualities of staff. These limited capacities imply an inconsistent implementation of this approach in several respects, and its redefinition at the local level. For example, depending on the local conditions, the extent of IAPs is often significantly and selectively curtailed and its content in terms of individual work is substantially reduced and formalised. This does not, however, mean that the strategy of activation as such, or the individual approach to activation, is dying out too. The strategy of activation is ever more powerfully shaped by the second line of policy-making - bottom-up - which emerges through involvement of a coalition of local actors. This line is to a greater extent influenced by the principles of inclusive participation, owing to finances provided under set conditions directly to the local partners from the European funds. In this way, resources are important as they (de)mobilise actors. On the other hand, the learning process is important as well. It must also be noted that the pilot phase of IAPs' application played a positive preparatory role. Employment offices learned what it takes to carry out individualised work with the client and went through the process of institutional learning. The process of institutional learning is still underway and is gaining in intensity, both because more actors are involved in it, and because these actors can now access real resources and accomplish their plans - which is one of the implementation conditions that, in the case of IAPs, was not met. 
It is interesting that local actors unwittingly refrain from the strategy of enforced activation without reformulating transparently their goals and priorities. Nevertheless, they spontaneously identify those unemployed persons whose handicaps deserve other, more profound, forms of intervention than a mere \"signed contract\", and strive to channel and suit such interventions to the needs of these unemployed. It seems that the shift towards the strategy of inclusive participation was facilitated, firstly, by the learning process and, secondly, by the modified implementation conditions, when available resources helped local actors initiate the bottom-up process. Besides, actors who have an insight into clients' individual needs have become involved. The policy-making process has not taken the path of achieving clearly defined and legitimised goals, but rather the path of trial and error. It pursues multiple goals at once. Besides, these goals are not fully transparent, as they are formulated in a rather abstract or vague, and sometimes even conflicting, way (Meyers et al., 2001). Operative goals - which are closely related to the existing implementation conditions (the number and competence of staff and financial resources available for active measures) and possibilities for action, and which pull in coalitions of actors at the local level - are gaining in importance. These operative goals represent the means of accomplishing the official vague goals, thus themselves becoming the \"outcomes\" as a result (Hall, 1999). Because of the vagueness of the official goals and the instability of implementation conditions it is not a systematically organised process but rather experimentation by the actors involved.\nNotes: 1. Van Berkel and Hornemann Moller (2002, p. 
45) define them as economic globalisation, demographic changes, labour market changes, processes of differentiation and individualisation, and reduced government spending, and interpret them as a shift towards the risk society.\n2. For example, at the employment office located in an above-average unemployment region, it was only about 6 per cent in the category of persons under 25 years of age and 1-2 per cent in the category over 25 years of age, while at the employment offices in a location with below-average unemployment, it was almost 30 per cent.\n3. It is symptomatic that she quit the employment office for work in a private agency that implements European projects.\n
"role": "user"
},
{
"content": "Purpose: - The paper deals with the question to what extent implementation conditions influence the profile of activation policies in the Czech Republic. In this way, it helps to clarify more general questions: how are broader objectives of these policies specified at the bottom level of implementation and why activation policies differ among countries, although guided by similar general objectives and principles.\nDesign/methodology/approach: - The findings are based on implementation case studies carried out at several local labour offices during the pilot stage and later during the routine stage of implementation of Individual Action Plans (IAPs).\nFindings: - The paper shows that in the pilot stage of IAPs, the employability approach of enforced activation originated from the top-down and was adopted at the local level; however, in a fragmented way due to unfavourable implementation conditions (above all poor staffing and a lack of activation programmes). It follows from these very conditions that, in the routine stage, the programme has been dying away, despite being supported by legislation and programme documents. 
On the other hand, processes of institutional learning have been initiated owing to IAPs and, with the availability of new policy opportunities at the local level (brought about with projects funded from the ESF), policy coalitions and activation policies emerge from the bottom-up, giving rise to another model: capability approach of inclusion through participation.\nPractical implications: - The findings signal to policy makers the necessity to control the implementation conditions at the national as well as the local level and to take the bottom-up processes of policy re-formulation into consideration.\nOriginality/value: - The analysis of the contradictions between the levels of policy-making and of the volatility of implemented policies emerging from specific implementation conditions represents the original contribution of the study.\n
"role": "assistant"
}
] |
Work engagement versus workaholism: a test of the spillover-crossover model
|
[
"Employee engagement",
"Employee relations",
"Work engagement",
"Workaholism",
"Dual-career couples",
"Family life"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Heavy work investment, characterised by a strong focus on the task at hand and a high level of dedication to work, generally has positive consequences for job performance (Beal et al., 2005; Demerouti and Cropanzano, 2010). However, what are the consequences of heavy work investment for one's private life? Do highly engaged employees come home cheerful or too occupied with work to be able to enjoy family life? In the present longitudinal study among Japanese couples, we use the spillover-crossover model (Bakker et al., 2009, 2008) to examine how two different types of work investment - work engagement and workaholism - are related to family satisfaction as reported by employees and their partner over time.\nThe spillover-crossover model: According to the Spillover-Crossover model, there are two different ways in which experiences are carried over from the work to the family domain. Spillover is a within-person, across-domains transmission of demands and consequent strain from the work domain to the nonwork domain (e.g. Byron, 2005; Lambert, 1990). This process is also called work-family conflict, referring to the interference of work with private life - for example when work-related strain is transferred from work to home. In contrast, crossover involves transmission across individuals, whereby demands and their consequent strain cross over between closely related persons (Bakker et al., 2009; Westman et al., 2001). This is particularly likely when partners are high in empathy (Bakker and Demerouti, 2009), and when individuals frequently discuss their work life (Bakker and Schaufeli, 2000). Thus, in crossover, stress experienced in the workplace by an individual may lead to stress being experienced by the individual's partner at home. 
Whereas spillover is an intraindividual transmission of stress or strain, crossover is a dyadic, interindividual transmission of stress or strain.\nWork engagement vs workaholism: Work engagement and workaholism are two active, work-related states that are indicative of heavy work involvement. However, whereas work engagement usually combines high effort with positive affect and has favourable consequences, workaholism usually combines high effort with negative affect (Bakker and Oerlemans, 2011). Work engagement is most often defined as \"a positive, fulfilling, work-related state of mind that is characterised by vigour, dedication, and absorption\" (Schaufeli et al., 2002, p. 74). In essence, work engagement captures how workers experience their work: as stimulating and energetic and something to which they really want to devote time and effort (the vigour component); as a significant and meaningful pursuit (dedication); and as engrossing and something on which they are fully concentrated (absorption). Unlike workaholics, engaged employees do not work hard because of a strong and irresistible inner drive, but because for them working is fun (Gorgievski et al., 2010).\nThe present study: The model that will be tested in the present study is graphically presented in Figure 1. As can be seen, we predict that workaholism will have a negative impact on one's own family satisfaction, because workaholism increases the likelihood of work-family conflict. Bakker et al. (2009) found evidence for a positive path between workaholism and work-family conflict, for both genders. People who have a compulsive drive to work excessively hard are likely to invest time and effort in their work at the expense of their private life (Taris et al., 2005). Such behaviour is likely to have a negative impact on well-being and satisfaction in life. According to Hobfoll's (2002) conservation of resources theory, individuals seek to acquire and maintain resources. 
Resources may include conditions (e.g. married status, tenure), personal characteristics (e.g. self-esteem), or energies (e.g. time, money and knowledge that allow one to acquire other resources). Stress reactions develop when there is a threat of loss of resources, an actual loss of resources, or lack of an expected gain in resources. COR theory proposes that work-family conflict may lead to a wide variety of stress reactions (i.e. dissatisfaction, depression, anxiety, or physiological tension), because valued resources are lost in the process of juggling both work and family roles. This means that compulsive tendencies make workaholics devote more resources (e.g. time, emotions) to work, leaving them with fewer resources to devote to their family.\nMethod: Procedure and participants\nResults: Descriptive statistics\nDiscussion: The central aim of the present study among Japanese dual earner couples was to investigate the possible consequences of two types of heavy work investment - work engagement and workaholism - for one's private life over time. We hypothesised that work engagement would have positive, and that workaholism would have negative consequences for one's own and partner's family satisfaction. The results of structural equation modelling analyses offered clear support for our spillover-crossover model. We found that work engagement was positively related to work-family facilitation, which, in turn, predicted own and partner's family satisfaction, also one year later. In contrast, workaholism showed a positive relationship with work-family conflict, and had an indirect negative effect on own and partner's family satisfaction. In what follows, we will discuss the most important contributions of the present study.\n
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to examine how two different types of heavy work investment - work engagement and workaholism - are related to family satisfaction as reported by employees and their intimate partner.\nDesign/methodology/approach: In total, 398 Japanese couples completed self-reported questionnaires including the model variables. One year later, participants reported again on their family satisfaction. Structural equation modelling analyses were used to test the hypotheses.\nFindings: As hypothesized, work engagement was positively related to work-family facilitation, which, in turn, predicted own and partner's family satisfaction, also one year later. In contrast, workaholism showed a positive relationship with work-family conflict, and had an indirect negative effect on own and partner's family satisfaction. The structural relationships between the variables from husbands to wives were similar to those from wives to husbands.\nResearch limitations/implications: The use of a non-experimental design does not allow for definitive conclusions regarding causality.\nPractical implications: The findings contribute to the work-family interface literature by showing how experiences built up at work can have a positive or negative impact on one's partner's family satisfaction. The study highlights a growing need to promote work engagement and discourage workaholism within organizations since engagement has positive and workaholism has negative implications for employees' private life.\nOriginality/value: This study clearly shows the differences between two important work experiences - work engagement and workaholism. Using the spillover-crossover model, the study sheds a new light on the process through which employee work engagement and workaholism influence one's partner at home.\n",
"role": "assistant"
}
] |
A comprehensive model for supply chain integration
|
[
"Logistics",
"Supply chain management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The purpose of supply chain management is to integrate the activities of partnering firms and to create a seamless supply chain (Towill, 1997), where territorial boundaries between partners are eliminated and all entities of the value chain become integral parts of an organization, in order to provide quality products and services and satisfy the demands of customers effectively and leverage competitive advantages together. Supply chain management is different from the management of a single enterprise; it is about exploiting business partners' core competencies and advanced resources to achieve organizational objectives and integrating independent enterprises into one group to maximize the benefits, reduce costs, and be adaptive to the changing business environment.\n2. A comprehensive model for supply chain integration: Stevens (1989) suggests that companies should integrate internally first (internal integration), and then extend the integration to their supply chain partners (external integration). In this paper, we particularly focus on the external integration, which refers to the integration of a firm's activities with those of their customers and suppliers (Leuschner et al., 2013; Stock et al., 1998), to discuss the interconnectivity among those supply chain entities and required elements. However, this does not diminish the importance of internal integration, as we consider it to be essential readiness for extending to external integration. The comprehensive model is actually a generic interface explanation between a supplier and its customer; we will not distinguish between upstream integration and downstream integration as Wang and Chan (2010) do.\n3. Implementation issues of the model: As integration has been the core of supply chain management (Fabbe-Costes and Jahre, 2007), a good understanding of the integration process is a key aspect (Gimenez, 2006). 
But it does not mean all relationships need strong integration. Integrating and managing all business process links throughout the entire supply chain is likely not appropriate. Since the drivers, costs and even risks of integration are situational and different from process link to process link, the levels of integration should vary from link to link, and over time. Some links are more critical than others (Lambert and Cooper, 2000). Lambert and Cooper (2000) identify four fundamentally different types of business process links between members of a supply chain: first, managed business process links; second, monitored business process links; third, not-managed business process links; and fourth, not-member business process links. They also categorize the members of the supply chain into primary members and supporting members. Gimenez et al. (2012) show that supply chain integration is only effective if supply complexity is high. So managers must select the vital business linkages to allocate scarce resources and focus their efforts. Factors such as business condition, company's power in supply chain, industry and market characteristics, and supply chain strategic perspective influence and determine both the optimum level of integration as well as the type of integrative activities employed (Van der Vaart and Van Donk, 2008). Van der Vaart and Van Donk (2008) disagree with the popular assumption that greater integration is always better. Overdoing integration may not improve supply chain performance (Gimenez et al., 2012). Before trying to test whether tighter integration leads to better firm performance, as a lot of researchers have done (Leuschner et al., 2013), whether it is the \"right\" integration status between supply chain partners should be identified.\n4. An illustrative case study: For illustrating the integration model and its implementation, a case study of Haier Group and Suning Appliance was conducted. 
Haier Group, headquartered in Qingdao, China, is ranked worldwide as the third appliance company by unit volume (http://blog.euromonitor.com.pitt.idm.oclc.org/2010/12/major-appliances-millionaires-club-new-2010-company-rankings.html); Suning Appliance, headquartered in Nanjing, China, is the largest Chinese chain retailer of home appliances. The successful case of Haier Group and Suning Appliance supply chain integration has attracted wide attention in China. The data were collected through the home pages of Haier (www.haier.com) and Suning (www.suning.com) in the last decade. The consistency of secondary sources was verified by interviews with managers of Suning Appliance.\n5. Discussion and conclusions: The supply chain integration model that we put forward can be summarized in Table I. Efficient supply chain integration places all essential resources of all cooperative partners together and connects all functional processes in order to effectively use resources. The goal is to operate the whole supply chain as a corporate entity, to achieve effective and efficient flows of products and services, information and knowledge, finance and decisions, to provide maximum value to the customers at low cost and high speed (Flynn et al., 2010). This study develops a comprehensive conceptual model for supply chain integration based on identified elements scattered in extant literature.\n
"role": "user"
},
{
"content": "Purpose: Extant research on supply chain integration defines integration in different ways, and mainly discusses a limited number of integration elements. The purpose of this paper is to develop a conceptual integration model which consists of comprehensive elements that are important to academic research and industrial practices.\nDesign/methodology/approach: A key literature survey, drawing threads of existing practices together to develop a systematic referential model, which is then verified with a real case.\nFindings: Developed a model consisting of integration elements residing at the strategic, managerial, operational, and fundamental levels (bottom line). Based on the benefit alignment, the total integration requires supply chain partners to integrate resource flows (material, information, knowledge, and finance), processes and organization, planning and control activities and strategy.\nResearch limitations/implications: The research is based on secondary data and a case study illustration. Further empirical research is required.\nPractical implications: The normative model can guide managers to integrate resources and activities in their efforts for effective supply chain management. It supplements the Supply Chain Operations Reference Model developed by the Supply Chain Council with an interface description, which may guide the development of information systems for supply chain integration.\nOriginality/value: The comprehensive model provides a more inclusive and integrated perspective of supply chain integration. It is expected that a consensus on supply chain integration could be achieved based on this model. The conceptual framework will assist researchers to determine integration variables of supply chain.\n
"role": "assistant"
}
] |
Educational policy borrowing in a globalized world: A case study of Common European Framework of Reference for languages in a Vietnamese University
|
[
"Language policy",
"English teaching",
"Curriculum English",
"Assessing English",
"CEFR"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: There is a growing interest among nation-states to borrow global educational policies and adapt them into their local contexts. Driven by an urge to promote global efficiency and competitiveness of their workforce, a number of governments in the Asia-Pacific region (e.g. China, Japan, Malaysia, South Korea, Taiwan and Thailand) have adopted global language policies that were believed to enhance the quality of English language education for their citizens (Hadisantosa et al., 2010; Hu and McKay, 2012). It is in this context that the Vietnamese Government announced educational reforms to incorporate global language standards into language teaching and learning in the country. Like many other developing countries, Vietnam is currently faced with enormous challenges for developing human capital to compete globally in line with its national ambition. It was reported that a large number of Vietnamese workers failed to meet global demands of foreign language proficiency, particularly in English (Vallely and Wilkinson, 2009). According to a 2013 report on the language test data of 750,000 adult learners from 62 countries and territories, Vietnam was grouped into \"low proficiency\" countries, although the report acknowledged that the country made quick progress in terms of English proficiency (EF Education First, 2013). This national deficiency in English language proficiency created a discourse of crisis in the country which was seen as a \"hindrance to international integration and the improvement of quality in Vietnamese higher education\" (Pham, 2010, p. 56).\nThe CEFR as a borrowed global policy across contexts: The CEFR was introduced in 2001 by the Council of Europe with a view to providing \"a common basis for the elaboration of language syllabuses, curriculum guidelines, examinations, textbooks, etc. across Europe\" (Council of Europe, 2001, p. 1). 
Empirical evidence for the usage of the CEFR was first provided by Morrow (2004) from three case studies conducted in Europe. Despite positive feedback from teachers and learners involved in the research, given it was conducted in European institutions, it did not have much to say about the relevance of the CEFR beyond Europe. In an effort to cast more light on the use of the CEFR across different contexts, Byram and Parmenter (2012, p. 8) conducted a comprehensive survey both inside Europe and beyond. Focusing on the processes of influence, the authors sought for voices of people who are \"closely involved in the development of language education policy\", including those in ministries of education or academics with policymaking roles. They observed that there was limited evidence for the influence of the CEFR as a policy document in European countries. The evidence for the influence and use of the CEFR in non-European contexts (e.g. Argentina, China, Japan and Taiwan) was more diversified and mixed. In a separate study, Valax (2011) examined teachers' understanding and perception of the CEFR employed in France, the UK, Taiwan, Hong Kong, New Zealand and Australia. Using questionnaires to survey 164 teachers, Valax found that, among other things, only 20 per cent of those teachers read the CEFR themselves, although it was unknown whether their reading was related to their understanding and practice. He arrived at the argument that \"there is little interest in, or enthusiasm for the CEFR among those frontline professionals who will ultimately determine whether it has any real impact on the teaching and learning of languages\" (p. 1). However, although the study aimed at teachers' role in the implementation of the CEFR, it neither delved into teachers' deliberations and actions nor focused in any specific context. 
Consequently, the study could not provide adequate information on the impact of locality and the role of school administrators, teachers and learners in the process of implementing the CEFR at the local level. The present study seeks to contribute to this body of literature by investigating the situated process of adopting the CEFR and elucidating both opportunities and challenges encountered by local actors in that process.\nTheoretical underpinnings: We frame the argument in the paper by referring to neoliberalism (Block et al., 2013; Piller and Cho, 2013) and neocolonialism (Phillipson, 2008) in English language education, as well as by drawing on insights from educational policy globalization (Rizvi and Lingard, 2010). These contemporary theoretical frames help us highlight the frictions and challenges in adopting global educational mandates in the context of developing countries. Rizvi and Lingard (2010) assert that the driving force behind the process of globalization of educational policies is the neoliberal ideology. The assertion also applies to the case of globally adopted policies that demand the promotion of English proficiency and the introduction of Western pedagogical models such as the CEFR. Majhanovich (2013, p. 249) considers English language as a \"tool of neoliberalism and globalization in Asian contexts\". She argues that the spread of English helps to promote neoliberal ideals inherent in the globalization process. Global policies to promote English language promote a system which \"venerates the ideals of 'choice', 'competition' and the 'free market'\" (Price, 2014, p. 567). These policies create divisions between individuals and communities, leading to \"socio-economic and educational inequalities\" (Price, 2014, p. 569) between individuals, social groups and between more developed and the less developed or developing countries - i.e. between Global North and Global South. 
To participate in the globalized economy generated mainly through English, millions of speakers of other languages in Asia are required to attain English proficiency controlled by the English-speaking world and are thus required to accept the worldviews and the neoliberal agendas associated with the language, irrespective of their local contexts, values and traditions (Majhanovich, 2014; Phillipson, 2008). The teaching and learning of English in the region relies on Eurocentric knowledge, evaluation systems, textbooks and resources (Majhanovich, 2014, p. 169). Researchers, including Pennycook (1998), Canagarajah (1999), Rapatahana and Bunce (2012) and Phillipson (2006, 2008), regard this dominance of English and Western pedagogical models as instances of neocolonialism wherein Eurocentric practices and agendas are privileged and promoted by the Global North at the expense of the Global South. The growing interest in adopting the CEFR among Asian polities, including Vietnam, thus can be examined at this intersection of neocolonialism and neoliberalism in a globalized world.\nMethodology: The data to develop the argument of the paper are obtained from a doctoral research project that aims to understand the reception, interpretations and responses of key stakeholders in the process of enacting the CEFR in a Vietnam public university. The study was designed as a qualitative case study with data being collected using policy document analysis, classroom observation and in-depth interviews with 21 purposively sampled participants, including school administrators, English language teachers and students over a period of six months (Table I).\nAdopting the CEFR at the national level: The CEFR was first introduced through Decision No. 
1400/QD-TTg, dated September 30, 2008, by the Prime Minister to grant approval to a multi-million dollar national project launched in 2008 with a vision for quantifiable change in foreign language education running from 2008 to 2020 (hence commonly known as Project 2020). In Article 2 of this policy document, it was stipulated that a national framework of foreign language competence would be developed to serve as a platform for curriculum design, textbook writing, teaching lessons and testing and evaluation.\nAdopting the CEFR at the institutional level: Although the Vietnamese version of the CEFR-based framework is currently under development, policy-makers have considered the framework as a useful tool to reform the language education system in Vietnam. Article 5 of Decision 1400 stipulates that school-leavers need to reach a minimal level of language competence based on the new CEFR-based framework levels to be eligible for their degree. As for university students, the minimal English language proficiency level for non-major students after finishing the tertiary English program is B1 level (identified as the Threshold level where users develop the ability to perform fundamental functions in a language). For language major students, it is C1 or Mastery level. 
The policy also states that English language teachers at universities will attain the minimal proficiency level of C1.\nEmerging opportunities for teachers and learners: Controversial as the practice of adopting the CEFR at AU may be, it can be observed that the framework has brought about some initial, though constrained, benefits for language teaching activities, among which are better access to contemporary methodological innovations, including blended learning, and efforts to demystify the CEFR level descriptors and can-do statements.\nChallenges for teachers and learners: While the benefits for teachers and learners in classroom activities, however limited in scope, need to be acknowledged, enormous obstacles were generated by the adoption of the CEFR at AU, which outweighed those opportunities. The challenges resulted in deficits in teacher professionalism which had the potential to hamper the whole process of reform as originally envisaged by the policy-makers.\nConclusion: Although the present study is limited to a specific locality in Vietnam, it has offered a fine-grained and in-depth understanding of how a borrowed global language education framework was received and appropriated for practice in a local context. At the macro level, it is not clear whether the policy-makers in Vietnam came up with the idea of adopting the CEFR as a result of a systematic dissemination of the framework to language education institutions in the country or out of sheer \"happenstance\" wherein an influential person came across the CEFR at a particular time in a particular place and then turned the decision-makers' attention to it, as was the case in China and New Zealand (Byram and Parmenter, 2012, p. 268). It is, however, observed that once the decision on the CEFR was made, the framework has been referred to and adopted consistently in the discourse of top-down educational policies supervised by the authorities from the central to the local level. 
The authorities in Vietnam have used high-stakes testing mechanisms to set standards for learners and teachers to generate swift and widespread influence of the new CEFR-based framework. As observed by Byrnes (2007, p. 683), using testing and assessment tools for gatekeeping and validation purposes will help to \"lend [...] itself to generalizing practices over large areas with high levels of uniformity and agreement to fulfill its purposes\". Yet, at the micro level, the challenges arising from this practice, as seen in the case of AU, can outweigh the opportunities it has generated in the context of limited human and material resources. Thus, the challenges entail detrimental implications for language educators and learners, making the enactment process of the CEFR in Vietnam a conundrum.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to shed light on the process of adopting and accommodating a global language education framework, namely the Common European Framework of Reference (CEFR) for languages, in the context of Vietnam.\nDesign/methodology/approach: The data to develop the argument of the paper are obtained from a doctoral research project that aims to understand the reception, interpretations and responses of key stakeholders in the process of enacting the CEFR in a Vietnam public university. The study was designed as a qualitative case study with data being collected using policy document analysis, classroom observation and in-depth interviews with 21 purposively sampled participants, including school administrators, English language teachers and students over a period of six months.\nFindings: The paper argues that the adoption of the CEFR, as it currently stands, can be seen at best as a \"quick-fix\" (Steiner-Khamsi, 2004, p. 58) solution to the complex and time-consuming problem of improving the quality of English language education in Vietnam, which fails to address some critical issues in the practice of teaching and learning the language in the country.\nOriginality/value: The study speaks to the body of literature on the CEFR as a contemporary global language policy borrowing phenomenon in developing countries. It contributes to a better understanding of how a global language policy is adopted and appropriated at the grass-root level.\n",
"role": "assistant"
}
] |
The usage of ILLiad and Get It Now at a US medium-sized academic library over a three-year period
|
[
"Document delivery",
"Electronic journals",
"Copyright Clearance Center",
"Get It Now",
"ILLiad",
"Pay-per-view articles"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Bradley University's Cullom-Davis Library finally experienced something that many other academic libraries had been dealing with for years, budget problems which resulted in the cancellation of approximately 33 per cent of the library's materials budget; the majority of which was from its serials budget in both paper and electronic formats. Toward the beginning of this cancellation process, several Bradley librarians attended a virtual demonstration of Copyright Clearance Center's Get It Now electronic article delivery service (Copyright Clearance Center, 2016). We felt this service would be a way of supplementing our serials subscriptions and the more traditional interlibrary loan (ILL) service, Interlibrary Loan Internet Accessible Database (ILLiad). Thus, we began using their unmediated service in July 2012. It has proved very popular with Bradley students and faculty, especially after Copyright Clearance Center added more publishers to the service. We were curious to find out if there was any effect of the Get It Now service on the usage of ILLiad. We also wanted to know which user group (faculty, undergraduates or graduates) and which departments were using each of these services, and if there was a difference between Get It Now and ILLiad with regard to both user groups and department.\nThe environment: Bradley University is a medium-sized academic institution with approximately 5,400 FTE (Bradley University, 2015). The majority of the students are undergraduates, with the largest populations in the subject areas of engineering, business, education and health sciences, which includes nursing. The Colleges of Fine Arts and Liberal Arts also offer popular programs for undergraduates. Until recently, the only advanced degrees offered were Master's Degrees in specific disciplines, with the majority in engineering, business and education. 
Within the past seven years, two new doctoral programs were created in physical therapy and educational administration. Nationally, the University is known as an engineering school and to a lesser degree for its College of Business. This may seem unusual for a medium-sized non-denominational private college, but the University was formed in 1897 as a school of horology (watch-making). This tradition plus the University's close proximity and ties to the Caterpillar Corporation, whose international headquarters are located in Peoria, Illinois, contributed to the strong engineering emphasis of the school.\nSurvey of literature for both ILLiad and Get It Now: ILLiad is an automated software system designed by Virginia Polytechnic Institute and Virginia Tech University and licensed by OCLC (Kriz et al., 1998). A study of the implementation of ILLiad was completed at Nicholls State University and centered on user satisfaction with the relatively new system (Tonn, 2003). Studies at this time also looked at user acceptance of electronic delivery through ILLiad (Kriz, 2000; Herrera, 2003). At the time this article was written, 1,200 libraries worldwide used this system for their ILL services, including Bradley University (OCLC, 2016).\nLocal Get It Now customization: Get It Now provides access to articles from a number of publishers. In those cases where the library has a subscription to a title, the access through Get It Now is turned off. Bradley subscribes to several publisher packages that were available via Get It Now. They are listed below:\nMethodology: Usage statistics were collected from ILLiad on a monthly basis from the web-based statistical feature provided by OCLC. Get It Now monthly usage statistics were received by email from Copyright Clearance Center. Data from the fiscal years 2012-13, 2013-14 and 2014-15 were compiled. 
We did this because we really did not know how the Get It Now service would be used, how much it would be used and if it would be used differently from ILLiad. We recorded the user status (faculty, staff, undergraduate or graduate) and departmental affiliation for each user. Usage statistics for each user group and department plus the most requested journals were determined for both services.\nResults: The first fiscal year we turned on the Get It Now service we did not allow access to the largest publisher, Elsevier Science/Direct which affected our totals significantly for 2012-13 (Figure 1). Traditional document delivery via ILLiad (6,481 requests) was used significantly more than the newly added Get It Now service (2,014 requests).\nDiscussion: Get It Now has been a well-used service since its implementation in 2012. The requests increased as the number of publishers increased, and it appears that its use resulted in a reduction in requests through ILLiad. The sum of requests between both services decreased between the first and second year, but the sums for the second and third year are approximately the same. The department that uses both services the most is Physical Therapy for which the library did not receive an increase in budget when the program was implemented. The other heavy users are for the most part from the departments with graduate degrees, and the individuals who request the most articles are typically faculty or graduate students.\nConclusion: With a decreasing budget, Bradley University, like many other medium-sized academic institutions, must look to alternative routes to fill the requests for journal articles. Both Get It Now and the traditional document delivery service, ILLiad, are heavily used and seem to be meeting the needs of the faculty, staff and students.\n",
"role": "user"
},
{
"content": "Purpose: As medium-sized academic libraries' budgets continue to shrink or cannot compete with inflation rates, document delivery options like ILLiad and unmediated Get It Now can play a prominent role in providing content lost due to the inevitable cancellation of serial subscriptions. This paper aims to evaluate the impact of using these two options in a US university.\nDesign/methodology/approach: This paper examines the usage of these two services for the three fiscal years following the implementation of Get It Now at Bradley University.\nFindings: Questions addressed are as follows. How have they been used over the three-year time period? Which user status group has been using them, undergraduates, graduate students or faculty members? Which departments on campus have been using them the most?\nOriginality/value: This study is the first to look at the usage of Get It Now and ILLiad together over multiple years.\n",
"role": "assistant"
}
] |
Does female human capital contribute to economic growth in India?: an empirical investigation
|
[
"ARDL",
"Economic growth",
"Female human capital",
"Male human capital"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Economic theory suggests that human capital is an important determinant of economic growth. Empirical evidence from a group of countries confirms this linkage. A country that starts with a higher level of educational attainment grows faster, given the initial per capita GDP. The channel of effect involves the positive effect on fertility and an additional positive impact on growth given values of investment and fertility (Barro, 1991).\n2. Literature review: The contribution of human capital as a source of economic development has long been an issue of interest among academicians. The importance of human capital for economic growth has been a topic of intense discussion during the last two decades. It includes studies from the neoclassical exogenous growth models of Solow (1956) and Swan (1956) to the endogenous growth model of Romer (1994) to empirical evidence-based studies with modern econometric techniques.\n3. Data, variables and model construction: 3.1 Data and variables\n4. Econometric methodology: The Ng and Perron (2001) unit root test is employed to check the stationarity properties of the series. This test does not suffer from severe size distortions when the error term has a negative moving-average root, unlike other traditional unit root tests (ADF, DF, PP and KPSS). The Ng-Perron (2001) test employs GLS de-trended data based on the modified SIC/AIC.\n5. The empirical results and analysis: 5.1 Unit root tests\n6. Conclusion and policy implications: India is a developing economy and its economic growth was low and inconsistent before liberalization in 1991. During that period, the female segment was denied fair access to social and economic opportunities. Women were deprived of education, skills and equal opportunities compared to their male counterparts. 
India's gender inequality indices show greater gender discrimination with respect to health, education and economic opportunities. This situation is worse in the rural areas even today. In India, public policies with respect to female education and health have not been successful after liberalization due to poor planning, bad governance and defective implementations. However, due to planned efforts of various non-governmental organizations and policy makers in recent times, the government has been allocating more funds to develop female human capital in India.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the impact of female human capital on economic growth in the Indian economy during 1970-2014.\nDesign/methodology/approach: The paper employs Ng-Perron unit root test to check the order of integration of the variables. The study also used ARDL-bounds testing approach and the unrestricted error-correction model to investigate co-integration in the long run and short run; Granger's causality test to investigate the direction of the causality; and variance decomposition test to capture the influence of each variable on economic growth.\nFindings: The study constructed a composite index for both male and female human capitals by taking education and health as a proxy for human capital. The empirical findings reveal that female human capital is significant and positively related to economic growth in both short run and long run, while male human capital is positive but insignificant to the economic growth; same is the case for physical capital, it implies that such investment regarding female human capital needs to be reinforced. Further, there is an evidence of a long-run causal relationship from female human capital, male human capital and physical capital to economic growth variable. The results of variance decomposition show the importance of the female human capital variable is increasing over the time and it exerts the largest influence in change in economic growth.\nResearch limitations/implications: The empirical findings suggest that the Indian economy has to pay attention equally on the development of female human capital for short-run as well as long-run growth of the economy. This implies that the policy makers should divert more expenditure for developing support for female education and health.\nOriginality/value: To the best of authors' knowledge, this is the first attempt to study the relationship between female human capital and economic growth in the context of the Indian economy.\n",
"role": "assistant"
}
] |
Implementation of sustainable development practices in the hospitality industry: A case study of five Canadian hotels
|
[
"Sustainable development",
"Hotel management",
"Hotel manager",
"Hospitality management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Managers in the tourism industry are faced with many challenges, as this industry has displayed an almost constant growth since the 1950s. Characterized by globalization, which increases the number of competitors (Augustyn and Seakhoa-King, 2004), and by the development of information and communication technologies that facilitate the comparison of offerings by consumers (Koch and Cebula, 2002; Rowley, 2004), the tourism industry is also vulnerable to various financial, political and public health crises (OCDE, 2003), as well as climate catastrophes and armed conflicts (International Hotels and Restaurants Association, 2015). Also, it must meet consumer expectations that are constantly evolving with regard to quality of service (Skogland and Siguaw, 2004) and respond to the increasing environmental and social awareness of consumers (Miller, 2003; Bergin-Seers and Mair, 2009; Jiang and Kim, 2015).\n2. Conceptual framework: The conceptual framework upon which our research is founded is first built upon the literature on tourism management and on sustainable development. Second, we identify the relevant concepts that characterize the global and specific business contexts that influence the adoption of SD practices in the hospitality industry. Moreover, we will attempt to explain the implementation of a SD orientation with multilevel (individual, organizational, meta-organizational) theory, i.e. by observing this phenomenon through the following four theoretical lenses: theory of reasoned action (TRA), organizational change management, stakeholder theory and open systems.\n3. Methodology: To answer the research question, we chose a qualitative research strategy, that is, a multi-site case study. This strategy is well-tailored to understand complex organizational phenomena within their global and specific context (Eisenhardt, 1989). 
Our analytical approach was essentially abductive, aiming to build theory by identifying the mechanisms and influential contingencies that best explain the phenomenon under study (Tsoukas, 1989). The sampling strategy utilized in selecting the sites was not random but purposeful (Patton, 1990), based on the diversification criterion, that is, to obtain maximum variation in the case study sites (Yin, 2009). For sampling purposes, the following site selection criteria were identified: hotel size, ownership structure and location. Phone contacts were made with hotel chief executives in the province of Quebec, Canada, seeking their participation in the study, until five sufficiently diversified sites were identified.\n4. Results: We first investigated the process of implementing an SD approach within the global and specific contexts of the five hotels studied (named Mosca, Capella, Eagle, Triangle and Ursa)[1], as shown in Table I. We then compared the observed results through the four chosen theoretical lenses within a cross-site analysis of the cases. Returning to the research model presented in Figure 1, the results of this data analysis thus characterize the initiator of the SD orientation, the process phases by which this strategic orientation was implemented, and the inhibiting and facilitating factors that played a significant role in this process. We conclude with the post-implementation evaluation of the SD orientation and the extent to which SD practices were adopted by the five hotels, regrouped under the environmental, social and economic dimensions.\n5. Discussion: Returning to the research question and model (Figure 1), the results are discussed by highlighting, for each component of the research model, the links that were not expected or anticipated but that have nevertheless been observed in the case studies, including the impact of these links upon the extent to which SD practices are adopted in the hospitality industry.\n6. 
Implications for hospitality management: The preceding results and discussion lead one to conclude that this research has theoretical and methodological implications for hospitality management and sustainable development. First, the initial elaboration and validation of a process model of SD adoption in hotel establishments constitutes a conceptual building block upon which this complex phenomenon can be further studied. Moreover, care has been taken in this study to systemically encompass all three dimensions of SD, that is, environmental, social and economic, in contrast to previous studies in the hospitality industry that have rather emphasized the environmental dimension.\n7. Limitations and conclusion: This study's main limitation lies in the purposeful sampling strategy that was used to select the cases, in that only three- and four-star-rated hotels were investigated. The aim here, however, was one of increased comprehension of a complex phenomenon studied in its natural context, i.e. the implementation of an SD orientation in a hotel, rather than one of generalization in the theory-testing sense. This being said, it is fairly evident that future research should investigate other types of hotel establishments such as \"bed-and-breakfast\" and luxury establishments. Also, with regard to external validity and notwithstanding the strong potential for theoretical generalization from case study research (Rosenzweig, 1994; Rousseau and Fried, 2001), the study collected case data from hotels within a single country whose specific cultural, socio-economic and regulatory context may limit our ability to generalize the results to other countries.\n",
"role": "user"
},
{
"content": "Purpose: This study aims to describe and understand how and to what extent hotel managers adopt sustainable development (SD) practices in their establishment, given the presence of barriers to the adoption of such practices by hotel establishments, and the lack of knowledge as to the ways and means by which an SD orientation is developed and implemented.\nDesign/methodology/approach: A qualitative research strategy was chosen to answer the research questions, that is, a multi-site case study of five Canadian hotels.\nFindings: The study shows why and how hotel managers proceeded to develop and implement an SD orientation, as well as the contextual factors that affected the extent to which SD practices were adopted.\nResearch limitations/implications: The initial elaboration and validation of a process model of SD adoption in hotel establishments constitute a conceptual building block upon which this complex phenomenon can be further studied.\nPractical implications: This study demonstrates that in the hospitality industry, implementing an SD orientation is a strategy that can be enabled through the provision of required knowledge and expertise as well as appropriate tools and techniques to hotel managers.\nOriginality/value: In describing and understanding the dynamics of implementing a SD orientation in five Canadian hotels, this study has provided a conceptually and practically fruitful answer to the question of \"how\" and \"to what extent\" hotels adopt SD practices.\n",
"role": "assistant"
}
] |
Are organizations shooting themselves in the foot? Workplace contributors to family-to-work conflict
|
[
"Work identity",
"Role conflict",
"Gender",
"Employee attitudes",
"Public sector organizations",
"United Kingdom"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: With the increase in dual-income families and employed single parents, conflict between work and family has reached a crisis. Most research on work-to-family conflict (WFC) has investigated the extent to which work interferes with family life (Thompson and Beauvais, 2000). Organizations, however, may be more interested in the extent to which family interferes with work, and how this process occurs. It has become increasingly clear that each direction of conflict (work-to-family and family-to-work) may be predicted by different variables (Fu and Shaffer, 2001) and may also result in dissimilar outcomes (Greenhaus et al., 2001). The formulation of workplace policies to address effects of family-to-work conflict (FWC) such as absenteeism, intention to turnover, and job satisfaction (see Eby et al., 2005 for a review) may be more effective if the antecedents of such conflict are better understood.\nAntecedents of FWC: WFC research is predicated upon the notion of spillover, in which attitudes, behaviours, or emotions from one domain diffuse to the other (Near et al., 1980), and upon role theory, which suggests that conflict, or psychological tension, occurs when individuals engage in multiple roles that are incompatible (Katz and Kahn, 1978). Hence, FWC is said to arise when responsibilities associated with the family role hinder an individual's performance at work. For example, worrying about a sick child may distract a parent on the job and reduce his or her efficiency (Duxbury et al., 1994). A number of demographic and situational characteristics have been investigated over the years as possible determinants of FWC. 
Established antecedents are caregiving responsibilities, whether for dependent children (Burke and Greenglass, 1999) or elderly relatives (Gignac et al., 1996); the number of hours devoted to household work (Fu and Shaffer, 2001); and family-related stressors such as parental workload and the misbehaviour of one's children (Vinokur et al., 1999). Greater psychological involvement in the family role has also been found to predict higher levels of FWC (Adams et al., 1996), as have lower levels of spousal support (Burke and Greenglass, 1999). The work-family literature has traditionally assumed that variables associated with the family domain (e.g. childcare, household work) predict FWC, and that work domain variables (e.g. hours worked weekly, job autonomy) predict WFC. When both types of conflict are measured, these are the hypotheses that are usually tested (e.g. Williams and Alliger, 1994; Frone et al., 1992). Much of the existing research on both work and family domain variables, however, has used composite, non-directional measures of WFC. These non-directional measures have incorporated items measuring both WFC and FWC in one scale. Studies using these measures cannot determine, therefore, whether antecedent variables are predicting WFC, or FWC. Any influence of work domain variables on FWC has been assumed to occur through the mediating effects of WFC. If work-related problems begin to interfere with the completion of personal or family-related obligations, these unfulfilled home obligations will begin to interfere with day-to-day functioning at work, and vice versa (Frone et al., 1992). An alternative potential relationship is that work role pressures may increase employees' perception of family role pressures interfering with their performance at work. 
If stressors originating in one domain create or increase the salience of stressors in another domain, a perspective accounted for by spillover theory and role theory, work domain variables may contribute directly to FWC. This opposite-domain perspective has been under-researched. A small number of studies have found direct links between elements of the work domain and FWC, indicating that mediation via WFC is not the only way in which work variables contribute to employees' FWC. For example, research by Fox and Dwyer (1999) has shown that two work domain variables, job involvement and time spent on work activities, can moderate the relationship between family domain variables and FWC. This suggests that work domain variables may play a greater role in contributing to FWC than has previously been supposed, and invites further research.\nWork domain variables: Many features of the work environment have been positively linked to WFC, or to non-directional measures of WFC. Five of these with the unique potential to also predict FWC were chosen for investigation in this study. These are hours worked, work role expectations, control over work hours, work stressors, and supervisor support regarding work-family issues.\nHours worked\nSex interactions: The roles of sex and gender in WFC are not well established, despite a number of studies incorporating sex as either a direct or a moderating influence on the experience of conflict between work and family. The rational model of WFC predicts that men should experience more WFC than women, because men tend to spend more time in work activities than women (Jacobs and Gerson, 2000). By the same token, women are likely to experience more FWC than men, because women take primary responsibility for the family and thus spend more time in family activities (Scott, 2001). Empirical findings have not been altogether supportive of this model. 
In the majority of studies examining sex, women have been shown to experience higher levels of both WFC (Gutek et al., 1991) and FWC (Duxbury et al., 1994). This may be due to the fact that women have been found to spend more total hours engaged in work and family activities than do men (Duxbury et al., 1994), creating more opportunities for work and family activities to overlap. A handful of studies have shown sex to moderate the links between various work and family variables and non-directional measures of WFC. Duxbury and Higgins (1991) found that work involvement and family conflict were stronger predictors of WFC for women than for men, and that family involvement and work expectations were stronger predictors of WFC for men than for women. Having responsibility for childcare (Buffardi et al., 1999) and eldercare (Neal et al., 1997) were also found to predict WFC more strongly for women than for men. Because these studies used non-directional measures of WFC, however, knowledge of how sex affects specifically FWC is constrained. Traditional gender role expectations and norms regarding employment and help-seeking behaviour have resulted in the association of social support with women, and the association of work role expectations with men. These gender associations, explored in the following section, suggest that the predictive power of each antecedent for FWC may vary between the sexes. Sex and work role expectations\nMethod: Sample\nMeasures: FWC was measured using four items developed by Burley (1989, cited in Gutek et al., 1991) and one item developed by Bohen and Viveros-Long (1981). Items assessed the extent to which respondents experienced conflict from the family to the work domain (e.g. \"My personal life takes up time that I'd like to spend at work\"; \"I'm often tired at work because of the things I have to do at home\"). 
Participants were asked to indicate the extent to which they agreed or disagreed with such statements on a five-point scale ranging from \"strongly disagree\"=1 to \"strongly agree\"=5. Cronbach's alpha was 0.82. Sex was assessed by means of a dummy variable, coded 0 for male and 1 for female. Family role expectations were measured using a four-item scale developed by Cooke and Rousseau (1984). Items assessed the degree to which respondents agreed that their friends and families expected them to prioritize family over work. The same five-point response scale as that used for the dependent variable was employed. Cronbach's alpha was 0.82. Parental strain was measured using two items developed by Pearlin and Schooler (1978). Items assessed the degree to which children's behaviour was a source of concern to respondents. The same five-point Likert response scale was used. The reliability alpha was 0.67. Control over childcare was measured using a six-item scale developed by Thomas and Ganster (1995), assessing the degree of choice respondents had in relation to the quality, cost, and scheduling of childcare arrangements. Participants were asked to indicate the amount of choice available to them in relation to each item using a five-point scale ranging from \"hardly any\"=1 to \"a lot\"=5. Cronbach's alpha was 0.90. Work role expectations were measured using a four-item scale developed by Cooke and Rousseau (1984). Items assessed the degree to which respondents agreed that their colleagues and supervisors expected them to prioritize work over family. Respondents answered each item using a five-point response scale ranging from \"strongly disagree\"=1 to \"strongly agree\"=5. Cronbach's alpha was 0.89. Supervisor support was measured using a nine-item scale developed by Shinn et al. (1989). The scale items assess the degree to which respondents' supervisors had displayed emotional and practical expressions of support. The same five-point Likert response scale was used. 
Reliability alphas were 0.73 for emotional support and 0.86 for instrumental support. Work stressors were measured with three items assessing role overload (i.e. having too much to do in a given amount of time) developed by Beehr et al. (1976) and a five-item scale assessing role conflict developed by Rizzo et al. (1970). The same five-point Likert response scale was used. Cronbach's alpha was 0.67. WFC was measured using four items developed by Kopelman et al. (1983), and two items developed by Bohen and Viveros-Long (1981). Items assessed the extent to which respondents experienced conflict from the work to the family domain (e.g. \"My work takes up time that I'd like to spend with family/friends\"; \"After work, I come home too tired to do some of the things I'd like to do\"). The same five-point response scale was used. Cronbach's alpha was 0.85. Control over work hours was measured using an eight-item scale developed by Thomas and Ganster (1995), assessing the degree of choice respondents had in relation to the scheduling of work activities. Participants were asked to indicate the amount of choice available to them in relation to each item using a five-point scale ranging from \"hardly any\"=1 to \"a lot\"=5. Cronbach's alpha was 0.79.\nAnalysis: Hierarchical regression analysis was used to test the impact of the independent variables (work domain variables, WFC, and sex) in predicting the dependent variable (FWC). To test H1 and H2, home domain variables were entered in step 1 of the equation, followed by work domain variables in step 2. Entering the work domain variables in this subsequent step enabled examination of the incremental effects of the work domain predictors beyond the effects of the home domain predictors on variance in FWC. 
In the third step, WFC was entered. To test H3 and H4, the interaction terms (sex x work role expectations and sex x supervisor support) were entered in the final fourth step, permitting the significance of the interactions to be determined after controlling for the main effects of the independent variables. The predictor variables were centred before forming interaction terms, in order to reduce the multicollinearity often associated with regression equations containing interaction terms (Aiken and West, 1991). Changes in R2 were used to evaluate the ability of the interaction terms to explain variance beyond that accounted for by the main effects in the equation. Significant interactions were probed using procedures recommended by Aiken and West (1991). The regression equation was restructured to represent the regression of FWC on the independent variables (work role expectations and supervisor support) for the two different sexes. Two separate regression equations were calculated, one for men and one for women. T-tests were then performed on simple slopes of the equations to determine if they differed from zero. One of the aims of this study was to investigate whether opposite-domain predictors have a direct effect on FWC, or whether the effect is mediated through WFC. To test for mediation, the procedure recommended by Baron and Kenny (1986) was used. In this procedure, three regression models are investigated. 
First, the mediator (WFC) is regressed on the independent variables (work domain variables); second, the dependent variable (FWC) is regressed on the independent variables (work domain variables); and third, the dependent variable (FWC) is regressed simultaneously on the independent (work domain variables) and mediator (WFC) variables. Mediation is present if the following conditions hold true: the independent variable affects the mediator in the first equation; the independent variable affects the dependent variable in the second equation; and the mediator affects the dependent variable in the third equation. The effect of the independent variable on the dependent variable must be less in the third equation than in the second. Full mediation occurs if the independent variable has no significant effect when the mediator is in the equation, and partial mediation occurs if the effect of the independent variable is smaller but significant when the mediator is in the equation.\nResults: Factor analysis\nDiscussion: The purpose of this study was twofold. The first objective was to investigate the effects of work domain variables on FWC. The second goal was to examine hitherto unexplored differences between men and women in the predictors of FWC. Work domain antecedents of family-to-work conflict\nConclusions and future research: The aims of this study were to investigate the direct effects of work domain variables on FWC, an approach unprecedented in the work-family literature, and to determine whether these work pressures similarly affect both men's and women's experience of FWC. Testing this opposite-domain perspective reveals that work domain variables do exert a significant effect on FWC above and beyond the effects of family domain variables, and independent of WFC. 
This indicates that organizational work demands and role pressures may have more influence over the degree to which employees' family lives interfere with their work than has previously been assumed, especially for men. The relationship between work role expectations and FWC was found to be significantly stronger for men than for women. Current norms still appear to require men to leave their family obligations at home (Wiley, 1991) and assign priority to the work domain, rendering more salient any family interruptions with work. Further research investigating men's changing attitudes toward involvement in family life and orientation toward work roles may help to convince organizational policy makers of the need to adjust current norms to create a more supportive work-family climate for employees of both sexes. In order to ensure that men and women are equally able to balance the competing responsibilities of work and home, greater gender equity with regard to organizational work-family climate needs to be established. Understanding the rationale behind prevailing managerial attitudes toward men's involvement in work and family roles may help in developing strategies to enact such equity.\n",
"role": "user"
},
{
"content": "Purpose: - To examine the direct effects of work domain variables on family-to-work conflict (FWC), beyond their indirect effects via the mediating variable of work-to-family conflict (WFC), and sex differences in the effects of work role expectations and supervisor support on FWC.\nDesign/methodology/approach: - A survey was conducted among 208 UK public sector employees. Hierarchical multiple regression analysis tested main and moderating effects of work domain variables and sex on FWC. To test for mediation, the procedure recommended by Baron and Kenny was used.\nFindings: - Work domain variables had a significant effect on FWC above and beyond the effects of family domain variables, and independent of WFC. The relationship between work role expectations and FWC was found to be significantly stronger for men than for women.\nResearch limitations/implications: - The cross-sectional design of the study does not permit firm conclusions regarding causality, and the results may be influenced by common method bias.\nPractical implications: - In the face of evidence that organizations are causing the very phenomenon that hurts them, the responsibility to assist employees with reducing FWC is enhanced. Particularly for men, management of organizational expectations to work long hours and prioritize work over family is an area in which employers can and should play a key role if gender equity with regard to organizational work-family climate is to be established.\nOriginality/value: - This study indicates that organizational work demands may have more influence over the degree to which employees' family lives interfere with their work than has previously been assumed, especially for men.\n",
"role": "assistant"
}
] |
Brand co-creation through triadic stakeholder participation: A conceptual framework based on literature review
|
[
"Organization",
"Consumer",
"Literature review",
"Supplier",
"Brand co-creation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Organizations that aim to attract new markets should begin with understanding consumers' needs in those markets. However, the age-old methods of analyzing consumer psyche may no longer be useful today, as they tend to distance the organization from its consumers. One approach that has the potential to bridge this gap is co-creation. It is a process that enables consumers, business managers, and employees to jointly participate in the creation of new products and services and in brand development. Co-creation activities, like participatory events and online communities, allow organizations to not only engage with their consumers but also gain insight into their emotions, feelings, and memories. A well-managed co-creation process can lead to successful innovations and new business opportunities. Prahalad and Ramaswamy (2000) were the first to coin the term \"co-creation\". Over the years, the focus of co-creation has shifted from products to value and finally to the brand. The brand co-creation theory is grounded in the belief that the consumer is no longer satisfied with being a passive buyer of a brand. The theory emphasizes the consumer's role as an active contributor in creating brand experiences (Prahalad and Ramaswamy, 2000; Vargo and Lusch, 2004; Wikstrom, 1996). Further, the internet, social media, and online communities have helped create a fiercely competitive market wherein organizations are assimilating their competencies with consumer participation to improve the value of their brands and experiences (Fuller, 2010). In fact, social media is no longer limited to consumer-to-consumer interactions; it also helps brands and marketers. Brands should therefore effectively manage their social media content to enable better engagement (Viglia et al., 2018). 
These engagements have become more prevalent as more corporates believe that co-creation can enhance brand competitiveness (Bendapudi and Leone, 2003).\nMethodology: This literature review was conducted in keeping with the principles recommended by Jesson et al. (2011):\nPresentation of findings: The history of co-creation\nAntecedents of brand co-creation: Brand co-creation and consumers\nBrand co-creation outcomes: Brand co-creation has many outcomes and yields benefits for all the stakeholders involved. Different kinds of participants can yield different kinds of benefits. Certainly, the organization sponsoring the co-creation activity gets the most benefit from it; however, consumers and suppliers also benefit from brand co-creation. The sections below discuss the plausible benefits for consumers, organizations and suppliers in detail and discuss the typical outcomes of brand co-creation.\nProposed combined model: Based on the above discussion, we propose the following conceptual model combining the suggested associations among the constructs (Figure 2).\nConclusion: Brand co-creation is an emerging field of study, and there is a gradual but growing appreciation for the importance of its role in branding and marketing as a whole. Especially for companies with limited budgets, the use of brand co-creation as a marketing tool has become hugely important. In fact, organizations with severe resource crunches can use co-creation to learn more about what consumers are looking for from a new brand. The extensive literature review has provided a holistic view of the factors that influence brand co-creation along with the motives behind each of the stakeholders' participation in such events. It shows there are important roles for organizations and consumers to play. It also argues the importance of suppliers, as they are core to the value chain and hence cannot be neglected in a product branding scenario. 
Along with identifying numerous antecedents, this literature review also addressed the outcomes of brand co-creation.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to find prior research work on brand co-creation and the role of different stakeholders in brand co-creation and to come up with a conceptual model that can be empirically validated.\nDesign/methodology/approach: A keyword approach has been used to find earlier published papers on co-creation and the role of different stakeholders in co-creation. The focus was more on brand co-creation. The inclusion criteria are empirical or conceptual research papers, books, conference papers, peer reviewed and published in English language journals. As the topic encompasses various themes, a wide variety of academic resources were reviewed.\nFindings: The history of the literature brings forth the two major stakeholders: consumers and organizations. The importance of the third key stakeholder, i.e. suppliers, has been mostly overlooked. However, previous studies show that appropriately managed supplier involvement can diminish product and brand development time and cost, and it may provide access to newer technologies and better quality. Hence, for successful brand co-creation one must not forget the importance of suppliers. Along with the motives and outcomes of brand co-creation from the consumer and organization perspectives, this research aims to look into the supplier perspective of the brand co-creation framework.\nOriginality/value: This is the first academic literature review on brand co-creation from a triadic stakeholder perspective of consumers, organizations and suppliers. Also, no comprehensive model exists for brand co-creation that looks at its input and its overall outcome. This review provides a bibliography of academic literature from 1959 to 2018 covering 150 journals.\n",
"role": "assistant"
}
] |
The relationship of consumer personality trait, brand personality and brand loyalty: an empirical study of toys and video games buyers
|
[
"Personality",
"Brand identity",
"Brand loyalty"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: Nowadays, with the trend towards fewer children and population aging, the age range of the customer group who play with toys is expanding upward and downward from 0-100. Many outstanding stylish toys have been designed to attract office workers aged from 30-40, even to the more affluent 50+ age group. It seems that the toy industry will not be able to survive if it does not include these adults among its core target consumers. Therefore, the toy industry has to keep digging out what adults want and then satisfy their innermost desires. The situation is the same in the video game industry. According to the video game player population distribution in the USA in 2006, the consumer group aged between 18 and 49 was still the majority, taking up 44 per cent of total sales. Nevertheless, the players aged below 18 and over 49 have also been increasingly emphasized recently. Toy and video game industries are intimately interrelated. Many toys and video games are extensions of cartoon animations or movies, and Japan is the main representative. Japan exports its toy and video game products in great numbers to Taiwan, and its total sales ranked number two among Taiwan's toy importers in 2006. Distinct brand personality plays a key role in the success of a brand. It leads customers to perceive the brand personality and develop a strong connection to the brand (Doyle, 1990). A brand personality should be shaped to be long-lasting and consistent. Besides, it should also be different from other brands and meet consumers' demands (Kumar et al., 2006). Hence, the consumers of those toys and video games are like the brand spokespersons and become the basis for suppliers to build brand personality. 
With the specific brand personality, consumers of varying personality traits will be attracted and their brand preference will then be further developed. In addition, a company can maintain a good relationship with customers through its brand personality (Aaker and Biel, 1993). Because brands have their own particular personalities, consumers may treat brands as real human beings. In this case, consumers will expect the people's words, attitudes, behavior or thoughts and so on to meet their respective personality traits (Aaker, 1996). Consumers may likely use the brand and products in line with their own personality traits; in other words, all the marketing activities are aimed at having consumers believe and recognize a brand personality, and reinforcing the communication between the brand and the consumer (Govers and Schoormans, 2005), in order to enhance the brand's loyalty and equity. Brand personality has become a widely discussed issue in recent years. It has been emphasized in many brands and products, including durable goods, consumable goods, entertainment and luxury goods, and so on (Kumar et al., 2006; Govers and Schoormans, 2005; Mengxia, 2007). However, only a few toy and video game brands have been used as targets of marketing research on brand personality. Thus, trying to cover this gap in the literature is the first motivation of this study. Consumers may have their own preference for the brand and product in compliance with their brand personality and personality traits or their own concepts (Govers and Schoormans, 2005). However, in fact, brand preference only involves the affective component of brand loyalty; it may not develop into any purchase behavior (Dyson et al., 1996). Only a few researchers have simultaneously combined affective loyalty and behavior loyalty into their investigation on the relationship of personality traits and brand personality with brand loyalty. 
Therefore, the second motivation of the study is formed to fill this gap as well. This study has three major objectives: 1. Explore the relationship of personality traits and brand personality. 2. Study the influence of brand personality on brand loyalty. 3. Examine the impact of personality traits on brand loyalty.\nLiterature review: Personality trait\nMethodology: Conceptual structure\nData analysis: Sample description\nConclusion and suggestions: Conclusion\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this study is to explore the relationship of consumer personality trait, brand personality and brand loyalty.\nDesign/methodology/approach: - The convenience sampling method was used to collect primary data. A total of 400 adult consumers were interviewed who looked round or bought toys and video games in Taipei City Mall, and 387 effective questionnaires were collected; the effective response rate was 96.75 per cent. Regression analysis was adopted to test hypotheses.\nFindings: - The major findings were: a significantly positive relationship between extroversion personality trait and excitement brand personality; a significantly positive relationship between agreeableness personality trait and excitement brand personality, sincerity brand personality and competence brand personality; competence and sophistication brand personality have a significantly positive influence on affective loyalty; competence, peacefulness and sophistication brand personality have a significantly positive influence on action loyalty; agreeableness and openness personality trait have a significantly positive influence on affective loyalty; agreeableness and openness personality trait have a significantly positive influence on action loyalty.\nResearch limitations/implications: - The restriction on selecting countries and brands, and the limited sampling coverage, present limitations. The paper verifies that consumers with different personality traits will have different cognizance towards brand personality, which can also be applied to the toy and video game industries. The paper proves that a distinct brand personality can appeal to more brand loyalty. It shows that the agreeableness and openness personality traits have a positive influence on brand loyalty.\nPractical implications: - The paper highlights the value of brand personality that benefits a company. It emphasizes the importance of brand loyalty for a company. 
Consumers who register in agreeableness and openness are the target audience for BANDAI.\nOriginality/value: - The extra value of the paper is to link the theory and practice, and explore the relationship of consumer personality trait, brand personality and brand loyalty.\n",
"role": "assistant"
}
] |
The experience economy approach to festival marketing: vivid memory and attendee loyalty
|
[
"Loyalty",
"Experience economy",
"Festival",
"Script theory",
"Vividity of memory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: Festivals are an important segment of the tourism industry. Festivals are not only considered as tourism attractions (Rao, 2001) and image builders (Prentice and Andersen, 2003; Gokce and Culha, 2009) for the destinations, but also as an economic engine for local communities (Crompton and McKay, 1997). From the perspective of festival attendees, their experience is the primary benefit or value they can get from festivals. Festivals are experiential products, like any other tourism products and services (Grunwell and Inhyuck, 2008). Understanding festival attendees' experience is imperative for festival organizers because attendees' experience is a predictor of their future behavior. Providing satisfactory experience to attendees is the key for the long-term success of festivals (Cole and Illum, 2006).\nLiterature review: Script theory\nMethodology: The study setting: festival description\nResults: Profile of respondents\nDiscussions and implications: The result confirmed that festival experience can be represented in terms of four dimensions: education; entertainment; esthetics; and escapism. The results of the SEM indicated that optimal experience leads to loyalty through vivid memory. Findings from the regression models revealed that vividity of memory is significantly impacted by all four dimensions of experience economy, while loyal visitors are mainly affected by esthetics and entertainment dimensions.\nLimitations and future research: This study has several limitations that the authors would like to address. First, the participants surveyed were only attendees of a traditional campus-wide festival. The results may not be applicable to other types of festivals. Future research is advised to assess the model in diverse festivals and events. 
Second, the sample investigated in the study involved only college students who attended the VEISHEA festival. To validate the findings in this study, future research should be directed toward larger, more generalized samples. Third, the results of the study indicated that among the four experience dimensions, the esthetic and entertainment experiential dimensions made the largest contribution to vivid memory and loyalty. The results may be partially explained by the features of VEISHEA festival. The authors suggest future research in order to further assess the impacts of the four experience dimensions on consumer attitude and behavior in other festivals with different themes.\n",
"role": "user"
},
{
"content": "Purpose: A desirable experiential environment is an essential source of competitive advantage in the festival industry. Understanding festival attendees' experience is imperative for festival organizers because attendees' experience is a predictor of their future behavior. With the experience economy concept of Pine and Gilmore (1998), the study identified four underlying dimensions of festival attendees' experience (education, entertainment, esthetics, and escapism) and examined the impacts of these experience dimensions on festival attendees' vividity of memory and loyalty.\nDesign/methodology/approach: Data were collected from online surveys completed by 338 attendees of VEISHEA festival. This study employed confirmatory factor analysis, regression analysis, and structural equation modeling to achieve its goals.\nFindings: Experience has a positive effect on vivid memory, which consequently influences loyalty. Each dimension of experience economy significantly influences vividity of memory. However, loyalty is affected only by the entertainment and esthetics dimensions.\nPractical implications: Festival marketers are advised to design activities that provide memorable experiential products and services for attendees based on the four dimensions of the experience economy.\nOriginality/value: The study is a pioneer in applying the evaluation of vividity of memory to the festival context.\n",
"role": "assistant"
}
] |
Quality management capabilities of manufacturing industries in the Western Sydney region: Comparative analysis for quality improvement
|
[
"Quality culture",
"Leadership",
"Total quality management (TQM)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Managing quality is recognised as one of the most significant core processes across many industry sectors including manufacturing and service organisations. The popularity among practitioners and researchers investigating various aspects of quality management practices has increased in recent times, emphasising quality management as a widely accepted goal for several companies, and as fundamental to the competitiveness of organisations (Nair, 2006). Several studies have been carried out to identify relationships between quality management practices and organisational performance, including key quality management practices contributing to the success of a TQM process (Powell, 1995; Kaynak, 2003; Lakhal et al., 2006; Tari et al., 2007), impact of TQM on organisational/business performance (Hendricks and Singhal, 1997; Kaynak, 2003; Lai and Cheng, 2005), possible interactions between quality management practices (Lakhal et al., 2006) and relationships between quality management practices and operational performance (Terziovski, 2006). These studies have the common goal of finding the relationships between quality management practices and organisational performance. Since quality management practices and performance measures are very broad and can vary across various industry sectors and organisations, depending on requirements of individual organisations, various research studies have attempted to provide guidelines and conceptual frameworks of quality management capabilities and outcomes as the basis for evaluating the impact of quality management in practice.\nLiterature review: Various studies are reported in recent times on a broader range of areas of quality management, including TQM, quality awards, quality/BE and relationships among key elements/attributes (Krishnan, 2016). 
Most of these studies have emphasised the importance of quality management, in terms of its vital role in delivering quality products and services to customers and very importantly providing competitive advantage with product/service differentiation under increasing competitive pressures at current times (Meftah Abusa and Gibson, 2013). While most quality management concepts and elements are interrelated in many ways, there are some distinctions, in particular when they are being investigated from theoretical and practical perspectives. Therefore, an overview of various quality management concepts is presented as part of the broader literature review, with a view to making a link to the current investigation on quality management capabilities in a selected region of operations.\nResearch questions and hypotheses: The empirical study was designed to assess differences in the level of TQM practices (QMAF categories) by size of the organisations in the Greater Western Sydney region. In particular, the intention is to address the following questions:\nResearch methodology: This study adopts an empirical research approach based on data collection through a survey method, followed by detailed data analysis using descriptive statistics, MANOVA, ANOVA and Hsu's MCB. A survey questionnaire was designed with a range of questions related to quality management capabilities cited in the NQAs and best practices highlighted in literature. Data collection was carried out through a postal survey with a target sample of 300 Australian firms in the Greater Western Sydney Region. 
The firms selected in the region were a mix of large, medium and small organisations.\nResults and analysis: Results of this research are presented using descriptive statistics of quality management capabilities (identified by QMAF categories), categorised by size of organisation (small, medium and large), type of industry sector (four groups of ANZSIC codes) and overall capabilities, in the Greater Western Sydney region. Comparative analysis of quality management capabilities with respect to size of organisation and industry type is carried out using ANOVA, MANOVA and Hsu's MCB, as the basis for identifying significant differences of quality management capabilities in the selected region with respect to size of the organisation and type of industry sector. Comparative analysis is presented as the basis for influencing the government and quality associations when making policies and strategies for promoting quality improvements in organisations that show a lack of quality management capabilities at current times.\nDiscussion: Medium- and small-size organisations have a lot to gain from reapplying leadership, quality culture, improvement methods, partnering focus, human resource management, strategy, business processes, information, knowledge and communication and business outcomes management practices used by the large organisations that were surveyed. The research indicates that significant differences were noted between large organisations and medium and small organisations, for each QMAF category tested. This has highlighted the need for reapplying quality management practices of large organisations with higher means to those with the significantly lower means, namely medium and small organisations.\nConclusions and research implications: This paper has presented comparative analysis of quality management capabilities of small, medium and large organisations in the Greater Western Sydney Region, based on data collection through a survey questionnaire. 
Specifically, it compared the levels of quality management capabilities of organisations identified through nine categories of the QMAF model, by size and groups of ANZSIC codes using descriptive statistics, MANOVA, ANOVA and Hsu's MCB.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to, using a quality management assessment framework (QMAF), provide a comparative analysis of quality management capabilities of organisations categorised by size and groups, based on the Australian and New Zealand Standard Industrial Classification (ANZSIC) code.\nDesign/methodology/approach: A questionnaire-based survey was used for data collection. Statistical data analysis, including descriptive statistics, multivariate and univariate analysis of variance and Hsu's multiple comparisons with/to the best post-hoc test results, was carried out to identify significant differences and similarities in total quality management capabilities between organisations based on the QMAF model.\nFindings: Significant differences in quality management capabilities were identified between large organisations and SMEs while no significant differences were found to exist between medium- and small-size organisations. Most of the QMAF-based capabilities do not seem to show significant differences between the four groups of ANZSIC code of firms, except for partnering focus and business outcomes.\nResearch limitations/implications: The research was limited by the number of returned responses of survey questionnaire from manufacturing organisations in the Greater Western Sydney region. This research provides practitioners with practical guidelines for improving quality management capabilities and can become the basis for comparative analysis in other regions of Australia, and globally. In addition, the research findings can be used by government and quality associations to develop appropriate strategies and policies for supporting the development of quality enhancing programmes in SMEs.\nOriginality/value: The comparative analysis of quality management capabilities by organisational size and industry type advances the previous work on optimum pathways of achieving business outcomes using the QMAF model. 
In addition, the research has mapped out differences in quality management capabilities, based on a combined scope of size and industry type, especially in the Greater Western Sydney region where a large number of SMEs are located.\n",
"role": "assistant"
}
] |
Is "Do it right the first time" necessarily right? The importance of error management culture in the hospitality industry
|
[
"Creativity",
"Organizational commitment",
"Trust",
"Error management culture",
"Group efficacy"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Resulting from physiological and psychological limitations of humans (Helmreich, 2000), making errors is ubiquitous and it occurs in every industry (Van Dyck et al., 2005). Errors often cause negative consequences in various areas. Research by the National Aeronautics and Space Administration found that 70 per cent of aviation accidents involve human error (Helmreich and Foushee, 1993). The Institute of Medicine of the USA estimates that each year between 44,000 and 98,000 people die as a result of medical errors (Helmreich, 2000). Likewise, error occurs inevitably in the hospitality industry and exerts important impacts on both customers and employees, given the nature of the hospitality sector (Karatepe, 2012). Senders and Moray (1991) indicated that errors may be domain specific, suggesting that research of human errors needs to take into consideration the context of error occurrence. Therefore, it is critical to understand the nature of the hospitality industry to realize the special significance of studying errors in the hospitality context.\nLiterature review and hypotheses development: Error management culture\nMethodology: Sample\nResults: Sample profile\nConclusions: Although the error management culture has been empirically demonstrated to positively impact organizational performance since the creation of this concept (Van Dyck et al., 2005), the ways the error management culture functions and affects organizational performance remain largely unknown and have yet to be empirically studied in the hospitality industry. The current research addressed this issue by empirically examining the underlying mechanism of organizational error management culture's influences on employee outcome variables that are critical for organizational psychology research in the hospitality industry: employee creativity and organizational commitment. 
First, the findings suggest that error management culture is a significant predictor of group efficacy and manager trust, which in turn are positively related to employee creativity and organizational commitment. Second, this study suggests a new antecedent to reinforce hotel employees' creativity and organizational commitment by fostering the error management culture within the organization. Third, this study tested the underlying mechanisms linking the error management culture with employees' creativity and organizational commitment. Previous studies showed that trust is a key determinant of organizational commitment (Ouchi, 1981) and trust fosters collaboration, which enhances the creativity of the team (Barczak et al., 2010). Building on prior studies, this research tested the mediating effect of manager trust among error management culture, employees' creativity and organizational commitment. Finally, scholars have demonstrated that perception of work group efficacy has a significant direct effect on organizational commitment (McIntyre et al., 2002). Group efficacy has also been found to mediate the relationship between cooperative group norms, group positive affect and team creativity (Kim and Shin, 2015). Building on previous literature, this study indicated the mediating role of group efficacy among error management culture, organizational commitment and employee creativity.\nPractical implications: The current paper provides practical implications to help organizations optimize their management and create a favorable organizational culture. Previous research suggested that individuals could learn from errors and error management culture could bring about positive outcomes in the organization (Van Dyck et al., 2005; Guchait et al., 2014; Sosna et al., 2010). 
Our results extend these earlier studies by demonstrating that error management culture positively relates to manager trust and group efficacy, which in turn lead to higher levels of organizational commitment and employee creativity. These results have particular utility and significance in the hospitality sector for three reasons. First, hotel practitioners, especially managers, need to realize the error-prone nature of the hospitality industry, which signifies the importance for hotel managers to develop an error management strategy to better deal with errors, for the purpose of successful service delivery, customer satisfaction and employee well-being. Second, this study provides a new approach to improve hotel employees' organizational commitment by fostering the error management culture in the organization. The higher level of organizational commitment will not only lead to a higher level of service delivery, but also decrease the prominent hotel employee turnover (Behery et al., 2016). Third, this study provides another benefit for hotels to create the error management culture by relating the error management culture to employee creativity. Wong and Pang (2003) indicated that it is important to improve employees' performance by stimulating the motivators to creativity if a hotel wants to be competitive. Wang et al. (2014) also emphasized the importance for hotel front-line employees to be creative.\nLimitations and suggestions for future study: Despite the current study's implications for theory and practice, this study has a few limitations. First, as the same individuals measured all the variables using the same survey, the relationships among the variables might have been influenced by common method variance (Podsakoff et al., 2003). Future studies should collect data from multiple sources to avoid common method bias. 
Second, this study targeted manager trust and group efficacy as two mediators and the results showed a partial mediation relationship between error management culture and the two outcome variables. Future studies may consider exploring other mediators, such as psychological safety, perceived fairness and locus of control. Third, as we collected the data in three hotels in Turkey, it might raise the issue of generalizability to other cultures. Future studies should be conducted in other cultural contexts to take cultural differences into consideration.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is threefold: first, to investigate the extent to which organizational error management culture impacts manager trust and group efficacy; second, to examine whether manager trust and group efficacy mediate the impact of error management culture on employee creativity; and third, to test whether manager trust and group efficacy mediate the impact of error management culture on employees' organizational commitment.\nDesign/methodology/approach: Using a survey methodology, 345 front-line hotel employees in Turkey provided survey data. Amos 22.0 was used for data analysis.\nFindings: Three major findings emerge. First, error management culture was found to have a significant positive influence on manager trust and group efficacy. Second, manager trust and group efficacy mediated the relationship between error management culture and employee creativity. Third, manager trust and group efficacy were found to mediate the relationship between error management culture and employees' organizational commitment.\nPractical implications: First, to promote employee creativity and their commitment to the organization, hotels need to cultivate an error management culture. Second, error management culture should be applied in hotels to build employee trust in their manager and boost their collective belief about group competency.\nOriginality/value: This is the first study that identified employee creativity and organizational commitment as outcomes of organizational error management culture. This is also the first study that examined the mediating effects of manager trust and group efficacy which helps in understanding the underlying mechanisms linking error management culture and employee attitudes. The current study provides significant contributions to understanding error management.\n",
"role": "assistant"
}
] |
Building a resilient local council: evidence from flood disasters in Italy
|
[
"Resilience",
"Local authorities",
"Local council",
"Extreme weather event",
"Public sector accounting"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In recent years, the management of disasters following extreme weather events (EWE) has become a key topic of concern. Increasing interest among managers, academics, professional bodies and communities has focused on whether more attention should be paid to how organizations can better equip themselves to guard against future disasters. Natural disasters can create abrupt changes to the organizational operating environment, significant damage to both the physical and natural landscape and pose a major threat to the survival of an organization. Recent examples of these types of EWE include the recent floods in the UK (2014); Typhoon Haiyan in the Philippines (2013); the earthquake and subsequent Tsunami in Japan (2011); wildfires in Russia (2010); monsoons and subsequent floods in Pakistan (2010); Bushfires in Australia (2009) and Hurricane Katrina in the USA (2005).\n2. Literature review: 2.1. What is resilience?\n3. Background: This study is context driven, investigating resilience to an EWE, which in this case is a flood event. A general overview of the political and governmental landscape of Italy is provided below as a frame of reference for identifying any distinguishing features that may be pertinent to the study of resilience.\n4. Research method: In view of the expansive area that resilience can cover, a pragmatic decision was taken to investigate resilience only in the context of flood events occurring in Italy. Undertaking an analysis of all aspects of resilience such as from a psychological or ecological viewpoint, although equally valid, was beyond the scope of the current investigation. Hence, we focused on discrete management themes as canvassed in the literature and further discussed in the findings section. In this way the research project was discrete and manageable.\n5. 
Findings and discussion: As discussed in the research method section, an analysis of the interview transcripts was undertaken to develop themes. Some of the themes extrapolated from the interviews pertained to issues raised in the literature review, whilst other new themes were raised which enhanced overall understanding.\n6. Conclusions and areas for further research: This paper investigates resilience to EWE in a sample of Italian LCs. An examination of whether resilience is used as a basis for decision and policy making, as well as the factors that promote or inhibit resilience to EWE, was undertaken. There were a number of motivations for this study, not least the paucity of empirical work on resilience undertaken in the context of the public sector, vis-a-vis, local government. In addition, with the current narrative on climate change suggesting that more frequent, severe EWE will occur in the future, it is imperative to gain an understanding of the extent to which the concept of resilience can support policy making and operational decision making at the local level.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate resilience to extreme weather events (EWE) in a sample of Italian local councils (LCs), impacted by flood disasters. Whether resilience as a concept is adopted by the affected councils and factors that promote or inhibit LC resilience are explored.\nDesign/methodology/approach: Using semi-structured interviews, the authors investigate seven Italian LCs that were severely impacted by the flood event. An interview protocol was developed and background information collected. A number of themes were drawn from the interview transcripts and relationships with the relevant literature were examined.\nFindings: The findings highlight that the adoption of the concept of resilience is at an early stage in the LCs decision and policy making. The authors find that the financial resources and the external relations management with other public entities, NGOs and local communities, promote the LCs resilience during and after an EWE. By contrast, bureaucratic constraints and poor urban planning restrain resilience. The findings suggest that LCs resilience needs to be distinguished from local community resilience.\nOriginality/value: The paper contributes to the literature on public sector management and investigates the under-researched area of resilience within the context of the public sector, vis-a-vis, local government. In particular the realization that EWE are not the realm only of emergency personnel, but that local government managers have an integral role placed upon them during and especially after the EWE.\n",
"role": "assistant"
}
] |
Effects of strategic information systems on competitive strategy and performance
|
[
"Balanced scorecard",
"Strategic orientation",
"Corporate performance",
"IS strategy",
"IT/IS business value",
"Strategic information systems"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Several studies have demonstrated that creating business strategy value requires effective use of strategic information systems (SISs) (Chan and Huff, 1992; Chen et al., 2010; Marabelli and Galliers, 2017; Newkirk et al., 2003; Philip, 2007; Teo and King, 1997; Wang and Byrd, 2017). Recognizing that SIS is widely utilized, Chen et al. (2010) and Merali et al. (2012) suggest that conceptions emerging from the SIS literature both differ from and complement each other regarding contextual elements of the SIS process, content, desired impact and alignment.\n2. Theoretical framework: This section describes the factors investigated with regard to SIS, strategic orientation and corporate performance.\n3. Research model, hypothesis development and control variables: Based on the literature review, it was postulated that SIS-embedded strategic planning (Whittington, 2014) enables the competitive strategy described by Miles et al. (1978) and influences corporate performance (Kaplan and Norton, 1992). The model is presented in Figure 1 with the hypotheses.\n4. Methodology: This section describes the sample, data and analytic methods.\n5. Results: This section describes the analysis results for the measurement and structural models as well as the reflective constructs. This section also presents the results of the hypothesis tests.\n6. Discussion and conclusion: This study investigated the relationship between SIS and strategic orientation, between SIS and corporate performance and between strategic orientation and corporate performance. Furthermore, this study explored how the effective use of SIS to support business strategy affects these outcomes. Specifically, this study explored the effects of SIS on four distinct strategic orientations - analyzer, defender, prospector and reactor - and the direct effects of strategic orientation types on corporate performance.\n7. 
Implications and future research: This study contributes to Whittington's (2014) theory and research questions, which focus on the necessity for SIS and strategy researchers to have a joint agenda. This is because the IS field has a longstanding interest in the strategic role of SIS, while strategy researchers are increasingly recognizing the significance of material technology in strategy work.\n8. Study limitations: Although this study provides substantial insights into how SIS promotes business value through competitive strategy and corporate performance, it has limitations. First, as mentioned earlier (Section 4), the cross-sectional design does not allow a researcher to fully establish the causality between independent variables and dependent variables, and a carefully designed longitudinal study could address this question more successfully.\n",
"role": "user"
},
{
"content": "Purpose: This study argues that strategic information systems (SISs) are necessary for organizations' survival and corporate performance in turbulent economic environments. Applying Miles and Snow's strategy typology, the purpose of this paper is to explore how SIS supports business strategy and corporate performance.\nDesign/methodology/approach: This study uses quantitative survey data from 389 Brazilian companies during economic crises and analyzes them using structural equation modeling.\nFindings: There is strong evidence that SIS promotes capacity and flexibility to create competitive strategies in response to environmental changes. SIS significantly and positively predicts firms' use of prospector strategies, reducing the need to sacrifice efficiency for innovation. SIS can predict corporate performance more strongly than firms' strategic orientations can.\nPractical implications: The results provide organizations insights on how SIS enables strategic planning processes to create competitive strategy and improve performance during economic turbulence.\nOriginality/value: This research demonstrates SIS's positive effects during economic turbulence on competitive strategy and performance, revealing that corporate performance is influenced more by SIS (strategic process) than strategic orientation (content). Hence, this study fills a research gap in the information systems strategy literature by contributing new insights about SIS.\n",
"role": "assistant"
}
] |
DOC questionnaire: measuring how GPs and medical specialists rate collaboration
|
[
"Health services",
"Quality assurance",
"General practitioners",
"The Netherlands"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: A healthcare system in which the general practitioner (GP) is the first contact and gatekeeper for limited and relatively expensive specialist care is greatly influenced by the manner in which GP and specialist work together. For this reason, collaboration between these two medical disciplines has received attention over the last decade. Research from several countries indicates that patients value a solid working relationship between GPs and specialists and that this relationship needs improvement (Berendsen et al., 2009; Schoen et al., 2007). So far though, relatively little research has focused on this collaboration and how it may be improved. Cooperation between GPs and specialists was researched qualitatively nine years ago by Marshall and later on quantitatively by Marshall (1998a, b, 1999), Marshall and Phillips (1999). In his research, Marshall focused on the relationship between GP and specialist regarding image formation; organisation and communication, on the other hand, were not extensively addressed. The study resulted in 12 validated questions, which did not offer recommendations for improvement, however. Other research among GPs and specialists was aimed at:\n* how communication may be optimised (Salerno et al., 2007);\n* collaboration aspects such as referral (Bowling and Redfern, 2000); and\n* correspondence (Garasen and Johnsen, 2007; Glintborg et al., 2007).\nSpecifics like the following are also relevant:\n* pharmaceutical agreements (Kasje et al., 2004);\n* referrals among specialists (Bridger and Cairns, 1996);\n* hospital at home (Berendsen et al., 2002; Hood et al., 1999);\n* specialist groups such as gastroenterologists (Cardin et al., 2004);\n* physician attitudes towards medical guidelines (Dijkstra et al., 2000); and\n* GP gatekeeper role (Pena-Dolhun et al., 2001).\nThe literature clearly shows the lack of robust instruments to measure GP and specialist collaboration. 
Consequently, we developed an instrument based on our earlier, qualitative studies (Berendsen et al., 2006, 2007). Its goal was to assess how GPs and specialists rate mutual collaboration regarding organisation; communication; professional expertise; image and the value of knowing each other. In this article we describe the doctors' opinions on collaboration (DOC) instrument. In the future, DOC may be used to evaluate innovative care projects across primary/secondary interfaces between GPs and specialists. As far as we know, this is the first time a questionnaire addressing collaboration was completed by both GPs and specialists. Our main research question, therefore, is \"what is the validity of a questionnaire aimed at assessing how GPs and specialists rate the quality of mutual collaboration?\"\nMethods: Questionnaire validity was assessed in a cross-sectional study among GPs and specialists. Our earlier qualitative, explorative research defined six domains essential to the questionnaire: general; organisation; communication; professional expertise; image; and knowing each other (Berendsen et al., 2006, 2007). Based on this layout, questions were formed in the shape of positive and negative statements rated on a five-point scale (1=completely agree to 5=completely disagree). The statements focused on evaluating collaboration. To improve face and content validity, the question list was presented to a number of key figures in The Netherlands (GPs and specialists). Questionnaire applicability - understanding, syntax, and time span - was also tested by 12 doctors. The initial questionnaire included 33 items divided into the six domains mentioned above. In a pilot study, the instrument was further tested with a random sample including 148 GPs and specialists in The Netherlands. Consequently, factor analysis, item analysis and reliability tests removed seven items from the instrument, leaving 26 statements. 
This final list was used in the present study. Statements were the same for GPs and specialists. The \"general\" domain included items on collaboration. \"Organisation\" incorporated statements on delays, waiting lists and patient care after hospital discharge. \"Communication\" consisted of input and colleague attitude statements when consulting on the phone. \"Professional expertise\" concerned issues like willingness to educate or learn from each other and the possibility of improving referral between GP and specialist. \"Image\" was used to assess how secure physicians feel when dealing with each other and, finally, the importance of \"knowing each other\" was addressed. Besides these issues, the list also contained questions concerning respondent characteristics such as: age; gender; medical specialty; office setting; experience; whether a trainer; and employment type. To ease analysis, specialties were reduced to three broad groups: physicians; surgeons; and supporting specialists. This 26-item questionnaire was presented to a sample of 550 GPs and 533 specialists randomly selected from The Netherlands Medical Address Book. A total of 47 percent of the GPs (n=259) and 44 percent of the specialists (n=232) replied. Specialists who rarely contact a GP, like nuclear physicians and anaesthesiologists, were not invited to participate. Before the questionnaire was posted, each addressee received an alert about the study. Non-respondents were reminded by letter. This whole procedure was repeated a month later for non-respondents.\nAnalysis: General practitioner and specialist answer sheets were combined. Positively posed statement ratings were inverted so that high scores yielded positive judgements. First, answer distributions were compared. During factor analysis, factors were rotated according to the Varimax criterion (based on the scree plot). Factor loadings >0.4 were considered important. 
Explained factor variance was calculated and subscales were constructed based on factor loadings. Subscale internal consistency was checked by correlating each item score with the total score minus the item score (item-rest correlation). We also calculated Cronbach's alpha coefficients; reliability coefficients larger than 0.6 were considered acceptable (Nunnally, 1967; Streiner and Norman, 2003). Discriminative ability was tested by calculating differences between respondent groups. This testing was conducted through analysis of variance (ANOVA) and the t-test. To assess construct validity, we tested hypotheses generated from our qualitative research (Berendsen et al., 2006, 2007):\nH1. Specialists are willing to educate GPs who want to learn from specialists.\nH2. Specialists are less interested in learning from GPs though GPs would like to educate them as well.\nH3. A difference in status is still felt, while older GPs seem to feel status less.\nH4. Both disciplines find it vital to know each other.\nFindings: We conducted our study between March and September 2006. The sample GPs' mean age was 50 years (SD 6.7) and the specialists' 51 years (SD 7.6). Gender, practical experience, office setting, employment type and specialities are listed in Tables I and II.\nFactor analysis and item analysis: Six factors produced the best results, explaining 55 percent of the total variance. The domains general; organisation; communication; professional expertise; image; and knowing each other were tested. The latter four were clearly distinguished (factor loading >0.4). The \"general\" and \"organisation\" domains loaded on two factors. Therefore, it was decided to join them into one domain (organisation). Based upon a low item-rest correlation, the new domain \"organisation\" was reduced from eight to seven items (Cronbach's alpha 0.73); \"communication\" from four to three items (Cronbach's alpha 0.66), and \"professional expertise\" from five to four items (Cronbach's alpha 0.64). 
The \"image\" domain was unchanged and included three items (Cronbach's alpha 0.78). \"Knowing each other\" comprised six items (Cronbach's alpha 0.9). Owing to redundancy and a wish to keep the list concise, three items were deleted from this domain (Cronbach's alpha 0.83), after which the definitive questionnaire comprised 20 items. Item correlations are shown in Table III. The correlation coefficients between the domains are all <0.4 (Table IV). Comparing GP and specialist answers per item in the \"organisation\" domain, GPs rate collaboration among specialists more poorly than specialists do. Specialists rate their care after hospital discharge higher than GPs do. General practitioners are more pessimistic about delays and waiting lists. Regarding \"professional expertise\", about 50 percent of GPs and specialists wish to improve referral quality between GPs and specialists.\nValidation: Table V shows the differences between GPs and specialists.\nOrganisation\nDiscussion: The 44 and 47 percent response rates were low. However, concerning age, gender, experience, work setting and employment type, our results reflect the Netherlands GP and specialist distribution (Capaciteitsorgaan, 2005; Muysken et al., 2006). This is also true for speciality distribution. Our study was conducted in The Netherlands, where GPs function as gatekeepers between patient and specialist. Our results made it possible to highlight several domains with low inter-correlations. The internal consistency is sufficient to compare respondent groups. Further analysis shows that \"organisation\", \"professional expertise\", \"image\" and \"knowing each other\" clearly produce distinguishing scores for groups with different characteristics (such as GPs vs specialists, trainers vs non trainers). Though the differences are highly significant, one should keep in mind that the \"professional expertise\" effect size is moderate. 
Communication's discriminative ability was poor, which will need further study.The more positive view specialists have on \"organisation\" could be explained by the many questionnaire statements that deal with issues relating to specialist care access. As outsiders, GPs possibly have a more negative outlook on access and wish to see it improved. General practitioners are also more pessimistic about collaboration among specialists than specialists themselves. An American study showed that limited communication between specialists can indeed cause patient care problems (Arora et al., 2005). When implementing improvements, difference in collaboration perceptions will have to be taken into account. If an incident is not viewed as problematic then motivation to change is low.General practitioners had higher \"professional expertise\" scores. This confirms our qualitative research findings (Berendsen et al., 2006, 2007). Specialists were less interested to learn from GPs, whereas GPs were eager to learn from them. The difference in \"image\" perception in our study was also demonstrated in our qualitative research. Specialists do not consider GPs their equals. Older GPs appear to feel this less, though. Our qualitative research indicated that both GPs and specialists feel it is important to know each other (Berendsen et al., 2006, 2007). Other studies show there is no substitute for direct personal contact between consultant and primary physician (Salerno et al., 2007). This quantitative research, however, showed that university hospital specialists believe that knowing each other is less important. In larger organisations, physicians probably rely more on other referring medical specialists. Also, university hospital complex structures and processes demand more attention from specialists.\nConclusions: Face and content validity were successfully achieved. Construct validity was supported by confirming our four hypotheses. 
The validation process should be completed with data about test-retest reliability, responsiveness and other aspects of construct validity (convergent and divergent). This study shows that the DOC questionnaire is valid and that it may have the potential to give feedback to both medical professionals and policy makers. Such feedback creates an opportunity to improve collaboration. It is important to keep the differences between groups in mind so that it can be assessed whether they are properly motivated to implement certain changes. Improvements can facilitate collaboration, enhance job satisfaction and strengthen patient care.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to assess the validity of a questionnaire aimed at assessing how general practitioners (GPs) and specialists rate collaboration.\nDesign/methodology/approach: - Primary data were collected in The Netherlands during March to September 2006. A cross-sectional study was conducted among 259 GPs and 232 specialists. Participants were randomly selected from The Netherlands Medical Address Book. Specialists rarely contacting a GP were not invited to participate.\nFindings: - Exploratory factor analysis indicated that the questionnaire, consisting of 20 items, measured five domains: organisation; communication; professional expertise; image; and knowing each other. Cronbach's alpha coefficients ranged from 0.64 to 0.83 indicating sufficient internal consistency. Correlation coefficients between domains were all <0.4. All but \"communication\" clearly produced distinguishing scores for different respondent groups.\nResearch limitations/implications: - This study shows that the doctors' opinions on collaboration (DOC) questionnaire is valid and that it may have the potential to give feedback to both medical professionals and policy makers. Such feedback creates an opportunity to improve collaboration.\nOriginality/value: - The DOC questionnaire is a useful instrument for assessing collaboration among GPs and specialists. It can provide feedback to both medical professionals and policy makers. Such feedback creates an opportunity to improve collaboration.\n",
"role": "assistant"
}
] |
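The reliability statistics reported in the questionnaire-validation record above (Cronbach's alpha and the item-rest correlation) can be sketched in a few lines. This is a generic illustration, not the authors' code; the function names are mine:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

def item_rest_correlation(items, j):
    """Correlation of item j with the total score minus that item,
    the criterion used above to decide whether an item stays in a domain."""
    items = np.asarray(items, dtype=float)
    rest = items.sum(axis=1) - items[:, j]
    return float(np.corrcoef(items[:, j], rest)[0, 1])
```

In the workflow described in the paper, items with a low item-rest correlation would be dropped and alpha recomputed; alpha above 0.6 was treated as acceptable internal consistency.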
New evidence for underwriting cycles in US property-liability insurance
|
[
"United States of America",
"Insurance",
"Time series analysis",
"Insurance premiums",
"Underwriting cycle",
"Time domain",
"Spectral analysis"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nI. Introduction and motivation: Property-liability insurance premiums, underwriting profits, and the availability of coverage have the tendency to fluctuate over time in fairly regular patterns. The underwriting cycle, as a sequence of expansions and contractions, is usually related to company underwriting strategies. When underwriting profits are increasing, competition for business tends to encourage underwriters to relax underwriting standards, accept marginal risks, and reduce premiums; these lead to higher underwriting losses and lower profits. In the second stage of the cycle, insurers attempt to increase profits by increasing premiums and restricting underwriting standards.The analysis of underwriting cycles is often approached using an autoregressive integrated moving average model. Venezian (1985) conducts an empirical analysis on US property-liability insurance data, and concludes that the underwriting profit follows a second order autoregressive process AR(2), with a cycle length of about six years. The author attributed this autoregressive behavior to the extrapolative ratemaking practice; insurers extrapolate past trends in losses to forecast future losses. Such an extrapolative ratemaking process, largely observed in the US property-liability insurance market, would create a quasi-cyclical pattern of underwriting profits.Cummins and Outreville (1987) studied the ratio of premiums to losses for a large sample of countries, and found empirical support for cyclical behaviors of this ratio, as predicted. A time trend was included in the AR(2) model to account for the downward trend in expenses. Cummins and Outreville (1987) explained that the intervention of external factors, as institutional, regulatory, and accounting reporting rules, can create a second-order autocorrelation among underwriting profits, even in a context of rational expectations theory. 
The causes of underwriting cycles have been the subject of many studies. Meier and Outreville (2006) summarized these causes as following:* disequilibrium between supply and demand (including the capacity constraint theory);* external shocks; and* general business influences.Since Venezian (1985) and Cummins and Outreville (1987), most empirical studies use an autoregressive process of second order to study the cyclical behavior of insurance results. We point out the papers of Lamm-Tennant and Weiss (1997), Chen et al. (1999), Leng and Meier (2006), Meier (2006), Venezian and Leng (2006), among others.Spectral analysis provides alternative tools useful to study the cyclical fluctuation in insurance data. The frequency-domain models are well suited to exploratory investigations, even when short-time series are available. Venezian (2006) provides a comprehensive analysis on the spectral analysis applied in underwriting cycle research.Most empirical and theoretical studies on this topic are developed on loss ratio, combined loss ratio, unit price or underwriting profit ratio. However, as Weiss (2007) points out, the studies disagree about the nature of nonstationarity detected in the underwriting margins. This study is conducted on the cyclical dynamics of growth rates of US property-liability insurance real premiums. In a perfect market framework, premiums contain information about expected losses, claim settlement costs, discount rates, economic environment, and other information available at the time of pricing. Fung et al. (1998) also consider premiums as the main source of the underwriting cycles. In a similar approach, the economic business cycle is often measured using the growth rates of real gross domestic product.This study highlights some time series methods helpful to analyse the existence and the length of the underwriting cycle. In a temporal framework, we propose to test for unit root cycles in insurance results, by applying the Robinson's test. 
In a spectral analysis approach, the Fisher g-test for a significant spectral peak and a nonparametric version of this, proposed by Ahdesmaki et al. (2005), are highlighted. These testing procedures are described in Section II. An empirical analysis, conducted on the growth rates of US property-liability insurance real premiums, is done in Section III. Annual time series for net premiums earned is available from Best's Aggregates and Average, and covers the period 1951-2007. Premiums are measured in real terms, using the consumer price index (CPI). The first difference of the real premiums in logs measures the annual rates of change[1]. The final section concludes.\nII. Time series methods used to analyse the existence of the underwriting cycles: The AR(2) process: Equation 1 where the error term et is white noise, is often used to study the existence of underwriting cycles and for measuring their length. Such a process produces a cyclical pattern when the autoregressive polynomial φ(L) = 1 - a1L - a2L^2, where L is the lag operator, has a pair of complex reciprocal roots. A necessary condition is that the coefficients verify Δ = a1^2 + 4a2 < 0 (thus a2 has to be negative). The length (or the period) of the cycle is then given by LC = 2π/arccos(a1/(2√(-a2))). Stationary cycles, which disappear slowly over time, are generated by an AR(2) process when the roots of the autoregressive polynomial φ(L) have modulus greater than one. Stationarity of the AR(2) process implies some additional conditions on the coefficients: a1 + a2 < 1, a2 - a1 < 1 and a2 > -1. Autoregressive processes with roots on the complex unit circle can also be useful to study cyclical patterns. When |a1| < 2 and a2 = -1, an AR(2) process has complex roots on the unit circle. Consider the AR(2) model: Equation 2 where the frequency ω ∈ (0, π). The autoregressive polynomial φ(L) = 1 - 2cos(ω)L + L^2 has a pair of complex reciprocal roots, e^{iω} and e^{-iω}, respectively. 
Such a process generates a smooth cyclical pattern with length LC = 2π/ω time periods. For ω ∈ (0, π) the cycle length exceeds two periods. As Bierens (2001) points out, this process seems to be more compatible with a persistent cyclical behavior than an AR model with a real unit root and/or roots outside the unit circle. The presence of a pair of complex conjugate unit roots generates nonstationary cycles, which persist and evolve over time (Gil-Alana, 2006). Asymptotic properties of such models and tests for the presence of a pair of complex conjugate unit roots are studied by Ahtola and Tiao (1987) and Gregoir (1999), among others. Bierens (2001) proposes a nonparametric test for the complex unit roots hypothesis, and examines whether business cycles are due to complex unit roots. Robinson (1994) introduces a general class of seasonal-cyclical long-memory models, in a fractional integration framework. This is a class of flexible models used to nest many cyclical or seasonal fractionally integrated models. The tests can be used for various forms of nonstationarity, in particular for unit root cycles. Some recent papers by Gil-Alana (2006), DePenya and Gil-Alana (2006) and Gil-Alana (2007) examine the cyclical behaviour of macroeconomic and financial time series using the testing procedure proposed by Robinson. In this paper, we apply Robinson's testing procedure to detect possible unit root cycles in insurance data. Consider a stochastic process {Yt, t=1,2,...} satisfying: Equation 3 Equation 4 where d is a real number and et is a covariance stationary I(0) sequence with zero mean. The line of testing procedure is that proposed by Gil-Alana (2006). Unit root cycles are tested for at each of the frequencies[2]: ωk = 2π/k, where k = 2, ... , [T/2]; therefore, the length of the cycles LC = k varies from 2 to [T/2]. 
For a given frequency ωk, the unit root cycle hypothesis corresponds to the null H0: d = 1, which is tested against the alternative H1: d ≠ 1. Robinson (1994) constructs a locally most powerful LM test with asymptotic critical values given by the chi-squared χ²(1) distribution, under mild regularity conditions. The null hypothesis is rejected for large values of the test statistic. Spectral analysis is an alternative tool for studying cyclical fluctuations in insurance data. The frequency-domain models are well suited to exploratory investigations, even when relatively short time series are available. The periodogram, as an estimate of the spectral density function: Equation 5 is often used to detect significant cycles in insurance time series. If there is a significant cycle at frequency ωk, the plot of the periodogram I(ωk) against the discrete Fourier frequencies ωk = 2πk/T, k = 0, 1, 2, ... , [T/2], exhibits a peak at ωk. There are few papers using spectral analysis to study the underwriting cycles. Usually, the graph of the spectrum is visually inspected, and certain peaks are identified at specific frequencies; we point out here the paper of Meier (2006) among others. Only the recent papers by Venezian (2006) and Venezian and Leng (2006) highlight the necessity of testing the significance of spectral peaks in insurance data. Such tests help to avoid spurious cycles. An early test proposed by Fisher (1929) can be used to detect a significant cycle at an unspecified frequency. Fisher's g-test statistic is defined as: Equation 6 where n = [(T-1)/2]. Large values of g lead to the rejection of the null hypothesis H0 of a Gaussian white noise process; therefore, the highest peak (maximum periodogram ordinate) is significant. Under the null hypothesis H0, the p-value of the test is given by the exact distribution of g: Equation 7 where a is the largest integer less than 1/x. 
Large values of the g-test indicate the presence of a significant periodic component in the time series.A nonparametric (rank based) version of the Fisher g-test is proposed by Ahdesmaki et al. (2005), and is used to detect periodicities in biological time series data. The test performs well even in the presence of the outliers or nonlinearities, or for non-Gaussian short series. The proposed testing procedure is based on a robust spectral estimator. It is considered a rank-based autocorrelation estimator for the problem of spectrum estimation; this estimator is a moving-window extension of the Spearman rank correlation coefficient. A detailed description of the test is provided by Ahdesmaki et al. (2005).\nIII. Empirical evidence for underwriting cycles: The empirical study analyses the US underwriting cycle using the following three approaches:1. estimating an AR(2) model and deriving the corresponding period of the cycle (traditional approach);2. testing for unit root cycles and detecting the length of the cycles using the Gil-Alana approach, based on the Robinson test; and3. testing for a significant spectral peak by means of the Fisher's g-test and a robust nonparametric version of it proposed by Ahdesmaki et al. (2005)[3].The cyclical behavior of the US property-liability annual growth rates of real premiums Y is analyzed. Annual data cover the period from 1952 to 2007. Table I provides a summary of empirical results, obtained using the above three approaches.The traditional unit roots tests (ADF, KPSS, ERS point optimal test) do not detect some real unit roots (equal to one) in growth rates of real premiums time series[4]. 
The growth rates of real premiums exhibit a slightly decreasing trend; a linear time trend is included in the AR(2) equation, its coefficient being significant at the 10 percent significance level (Table I). The estimated AR(2) model: Equation 8 indicates a negative and strongly significant coefficient a2, suggesting a cyclical dynamics of the growth rates of real premiums. The correlogram does not detect any correlation patterns in the residuals. The fitted model gives an estimated cycle length of about five years[5]. The Robinson test for unit root cycles also suggests a significant cycle in our data (Table I). The null hypothesis of a unit root cycle in our time series is not rejected at a five-year cycle length. We mention that the length of our time series is still moderate for an asymptotic statistical test; the scarcity of data in insurance, mainly the short length of time series, limits the strength of the conclusions in a time domain approach. The analysis in the frequency domain seems to be more appropriate for short time series. The periodogram for our time series is estimated using a fast Fourier transform and the result is smoothed with a series of modified Daniell smoothers (spans=3). Prior to spectrum estimation, the time trend was removed from the growth rates of real premiums[6]. Figure 1 draws the smoothed periodogram for the growth rates of real premiums. The highest peak corresponds to a cycle length of about eight years; the Fisher test can be used to test whether this peak is significant or not. A significant peak was detected, according to the Fisher test, at the 5 percent significance level. The robust nonparametric g-test also indicates a significant cycle in our data. The estimated length of the dominant periodic component is around eight years, higher than the one suggested by the previous two methods. 
We mention here that the length of the dominant cycles, corresponding to the highest peaks in the periodogram, should be interpreted as suggested by Venezian and Leng (2006). The spectral density for a specified period receives contributions from a range of frequencies (Venezian, 2006). Therefore, all approaches proposed in our study find empirical evidence for a cyclical dynamics of the growth rates of real premiums. The conclusions on the length of the dominant cycle still diverge. The temporal approaches suggest a dominant cycle of about five years, while the spectral analysis detects a dominant periodic component with a length of about eight years.\nIV. Concluding remarks: In this paper, we analyze the existence and length of underwriting cycles in US property-liability insurance, taking the growth rates of real premiums as the insurance result. Compared to the existing literature, the present study innovates in that it highlights additional testing procedures helpful in studying underwriting cycles. In addition to the classical AR(2) model, two new approaches are suggested: testing for unit root cycles and testing for a significant peak in the periodogram of the variable. The AR(2) model and the Robinson test for unit root cycles suggest a significant cycle of about five years in the US insurance market data. The smoothed periodogram for the growth rates of real premiums exhibits a significant peak, according to the Fisher g-test and also to the robust nonparametric g-test; in this framework a dominant periodic component with a length of about eight years is detected. Therefore, all three approaches confirm the cyclical behaviour of the growth rates of insurance real premiums. The results on the length of the dominant cycle still diverge, according to the approach (time/frequency domain).\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to highlight some testing procedures, both in time/frequency framework, useful to test for significant cycles in insurance data. The US underwriting cycle is measured using the growth rates of real premiums.\nDesign/methodology/approach: - In addition to the traditional AR(2) model, two new approaches are suggested: testing for a significant peak in the periodogram using Fisher g test and a nonparametric version of it, and testing for unit root cycles in insurance data.\nFindings: - All approaches find empirical evidence for a cyclical behaviour of the growth rates of property-liability real premiums. Results on the length of dominant cycle still diverge, according to the approach (time/frequency domain).\nOriginality/value: - Compared to the existing literature, the present study innovates in that it highlights additional testing procedures, helpful to detect significant cycles in insurance time series. The underwriting cycle is analysed through the growth rates of real premiums.\n",
"role": "assistant"
}
] |
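Two of the procedures described in the underwriting-cycle record above can be illustrated concretely: fitting the AR(2) model and reading off the implied cycle length LC = 2π/arccos(a1/(2√(-a2))), and Fisher's g-test on the periodogram. The sketch below is generic (not the authors' code) and uses only NumPy and the standard library:

```python
import math
import numpy as np

def ar2_cycle_length(y):
    """OLS fit of y_t = c + a1*y_{t-1} + a2*y_{t-2} + e_t.
    Returns (a1, a2, LC); LC exists only when the autoregressive roots are
    complex, i.e. when the discriminant a1^2 + 4*a2 < 0."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 2), y[1:-1], y[:-2]])
    c, a1, a2 = np.linalg.lstsq(X, y[2:], rcond=None)[0]
    if a1 * a1 + 4.0 * a2 >= 0.0:
        return a1, a2, None
    return a1, a2, 2.0 * math.pi / math.acos(a1 / (2.0 * math.sqrt(-a2)))

def fisher_g_test(y):
    """Fisher's g: the largest periodogram ordinate over the sum of ordinates
    at Fourier frequencies k = 1..[(T-1)/2], with the exact p-value under the
    Gaussian white-noise null."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    T = len(y)
    n = (T - 1) // 2
    I = np.abs(np.fft.fft(y)[1:n + 1]) ** 2 / T   # periodogram ordinates
    g = float(I.max() / I.sum())
    a = int(1.0 / g)                              # largest integer below 1/g
    p = sum((-1) ** (k - 1) * math.comb(n, k) * (1.0 - k * g) ** (n - 1)
            for k in range(1, a + 1))
    return g, max(0.0, min(1.0, p))
```

Applied to a growth-rate series like the 1952-2007 premiums data, this is the kind of computation behind the roughly five-year (time domain) versus eight-year (frequency domain) cycle lengths reported above.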
Rediscovering "Anekta mein ekta" or "kasrat mein wahdat": South Asian management through unity in diversity
|
[
"Equality",
"Diversity",
"Gender",
"Inclusion",
"Women",
"National cultures",
"Religion",
"South Asia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Welcome to the special issue (SI) of Equality, Diversity and Inclusion (EDI) journal on \"gender, and diversity in organizations in South Asia\". The papers selected in this SI offer a range of national, ideological and practical perspectives on various dimensions of equality, diversity and inclusion related to South Asia and South Asian Diaspora. South Asia's key asset is its people, and over the next two decades it will add at least one million people to its labour force every month (Nayar et al., 2012). The region is plagued by conflict, political instability, corruption and power outages. In a digital, networked and information technology age, Afghanistan, Bangladesh and Nepal have some of the highest reported outages of electricity supply in the world. Furthermore the majority of workers in South Asia are still engaged in agriculture. In fact nearly a third of workers in India and a fifth of workers in Bangladesh and Pakistan are casual labourers with regular wage or salaried workers representing a fifth or less of total employment (World Bank, 2012). Nevertheless South Asia is poised to play a major role in the multisectoral global story of managing equality, diversity and inclusion in the current Asia century.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to offer an editorial introduction to the special issue on \"Gender and diversity in organizations in South Asia\".\nDesign/methodology/approach: - The paper reviews the state of extant research on equality, diversity and inclusion in South Asia. It also offers an overview of the papers included in the special issue and the unique contributions they make to this field of knowledge.\nFindings: - Findings vary for each paper; however, overall, the special issue brings to the fore theoretical ideas, alternative organizing and organisations and challenges involved in diversity management in South Asia.\nResearch limitations/implications: - The papers offer fresh insights in theorizing and managing diversity and equality which depart from the dominant Anglo-centric theorizations in this field.\nPractical implications: - Educators, policy makers and managers in organisations may take into account various findings and recommendations offered in the papers leading to greater awareness of the challenges and opportunities to formulate context-sensitive policies and practices of diversity in South Asia.\nOriginality/value: - Barring a few exceptions, the topic of gender and diversity management in South Asia, as for example the centrality of national culture and values, remains generally ignored in work and organization studies. Papers that are published appear in a wide variety of journals rarely achieving a critical mass. The current (special) issue of Equality, Diversity and Inclusion addresses these gaps and seeks to strengthen understanding of equality, diversity and inclusion pertaining to the region classified as South Asia.\n",
"role": "assistant"
}
] |
Workplace staff canteen is associated with lower blood pressure among industry workers
|
[
"Workplace",
"Workforce",
"Blood pressure",
"Hypertension",
"Diets"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: High blood pressure (BP) has assumed a dominant role in the causation of cardiovascular disease, aligning its high risk (Lewington et al., 2002) to the high worldwide prevalence (Zhou et al., 2017). The scenario is not different in Brazil (Picon et al., 2012). Increasing BP values have been associated with high cardiovascular risk (Kurioka et al., 2014) and elevated population-attributable risk for cardiovascular disease (Tajeu et al., 2017), besides the increase in medical expenditures (Mozaffarian et al., 2015; Picon et al., 2017).\nMaterial and methods: Study population\nResults: In total, 4,818 individuals were interviewed; most workers were men (76.5 percent), the mean age was 35.4 (SD=10.7) years old, and they had attended school for 8.7 (SD=4.1) years; 12 percent were obese and 37.9 percent had overweight. Most workers had a high level of physical activity, and 15.8 percent were smokers. A total of 68.3 percent of companies had a staff canteen, and approximately 70 percent of large companies had a staff canteen. The prevalence of hypertension was 26.2 percent; it was higher among men (30.4 percent vs 12.4 percent, p<0.001) and among workers of companies with no staff canteen (30.6 percent vs 24.7 percent, p<0.001). Workers' characteristics according to the presence of a staff canteen in the industries are presented in Table I.\nDiscussion: This study was the first conducted in a representative sample of workers from Brazilian industries investigating risks for high BP. It showed that the presence of a staff canteen in the workplace was associated with higher intake of milk and vegetable servings and lower systolic and diastolic BP. Moreover, there were interactions of systolic and diastolic BP with having a staff canteen in the workplace and intake of different types of meat, milk, fruits, vegetables, and green leafy vegetables. 
Although a staff canteen in the workplace provides food, workers had lower systolic and diastolic BP, even with high intake of red and sausage meats, compared with those who did work in non-canteen industries. Workers are a distinct group of the general population, since to be admitted to a job and in order to maintain it, it is necessary to be healthy. Despite of this, Asay et al. (2016) reported high prevalence of risk factors for cardiovascular disease in workers, and the presence of hypertension may result in absenteeism at high costs.\n",
"role": "user"
},
{
"content": "Purpose: A staff canteen in the workplace can offer a healthier diet, which may lower blood pressure (BP). The purpose of this paper is to evaluate whether the presence of a staff canteen in the workplace is associated with consumption of healthy food and lower systolic and diastolic BP.\nDesign/methodology/approach: A cross-sectional study was conducted, randomly selecting workers through multistage sampling, stratified by company size in Brazil. Demographic, socioeconomic, and lifestyle characteristics were evaluated, and weight, height, and BP were measured. Statistical analysis used generalized linear models, controlling for design effect and confounding factors, to assess the association between BP and staff canteen and the intake of food items.\nFindings: In total, 4,818 workers, aged 35.4±10.7 years, 76.5 percent men, with 8.7±4.1 years of formal education, were enrolled. Prevalence of hypertension was 24.7 percent among workers from industries with a staff canteen vs 30.6 percent among those with no staff canteen (p<0.001). Workers of industries with a staff canteen consumed a higher proportion of fruits, green leafy vegetables, and milk than workers of industries without a canteen, and had lower systolic and diastolic BP, independently of the frequency of intake.\nPractical implications: Workers of industries with a staff canteen consumed a healthier diet, had lower systolic and diastolic BP, and had a lower prevalence of hypertension than workers from workplaces without a staff canteen.\nOriginality/value: This study was the first carried out among workers of industries reporting that the presence of a staff canteen in the workplace is associated with lower systolic and diastolic BP and prevalence of hypertension.\n",
"role": "assistant"
}
] |
Assessment of critical failure factors (CFFs) of Lean Six Sigma in real life scenario: Evidence from manufacturing and service industries
|
[
"Project",
"Lean Six Sigma (LSS)",
"Failure factors",
"TOPSIS SIMOS approach"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Today, Lean Six Sigma (LSS) enables industries in achieving quality and enhancing their business with better performance (Antony and Laureani, 2012; Weinstein, 2012; Ahmad and Elhuni, 2014; Thomas et al., 2016). Therefore, synergizing the two approaches helps in attaining better performance, which is quicker than the deployment of each method in isolation (Antony and Laureani, 2012). Hence, many industries have adopted LSS to get CI. Nevertheless, there are obvious restrictions in the areas of LSS (Chakravorty and Shah, 2012; Laureani et al., 2010; Sreedharan et al., 2017). In addition, very few discuss the limitation of LSS. Consequently, this study includes articles on LSS to consider the failure factors for LSS. Critical failure factors (CFFs) are defined as \"the key parts (areas) where things are incorrect\" (Opoku and Abor, 2009). Even, some academic papers have targeted CFFs in information system (Yeo, 2002). Later, Belassi and Tukel (1996) conducted a study on CFFs in project management. However, they only enumerated some factors that lead the plan to success or failure, but there is no definition of CFFs. Later, Albliwi et al. (2014) published an article discussing the CFFs of LSS. It focused on identifying the failure factors through literature review. Very few articles have investigated the impact of CFFs on an LSS project using a quantitative approach. Hence, this study emphasizes on identifying the CFFs of LSS and ranking them by means of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) SIMOS approach.\n2. Literature review: 2.1 Failure factors of Lean Six Sigma\n3. Methodology: The primary commitment of the study is to identify the vital failure factors from the 44 CFFs, enabling the management in preventing project failures. In this study, the quality managers have taken a significant part in the LSS initiatives of the industry. 
The study was conducted in an SME for the manufacturing industry (Jabbour et al., 2013; Marodin and Saurin, 2015) and Bank (Balfour et al., 2015; Chitnis and Vaidya, 2016) for the service industry. The researchers identified five DMs functioning in different fields of the industry. The DMs have been named as A, B, C, D and E, where A and B are from the quality department, C and D are from the production department, and E is from the planning department. Based on the literature review and industrial needs, 44 CFFs were identified. The DMs were asked to assess the factors on a six-point scale, where 0 represents no knowledge about the CFFs, 5 represents good knowledge about the CFFs. The factors were assessed by the DMs, and their ratings are shown in Tables AI and AII. The DMs went through each factor and rated each factor based on their understanding and awareness about the LSS practice. Once the DMs' assessment was over, the researchers obtained the weights for each factor using the SIMOS approach. And using the TOPSIS approach, we calculated the closeness coefficient value to rank the vital failure factors of LSS.\n4. Results and discussion: Based on the DMs' rating, we ranked the CFFs of LSS using the TOPSIS SIMOS approach. Out of the 44 CFFs, the vital failure factors had to be found. From that, the management could focus on preventing those factors from occurring. Using the cut-off value as 0.5, we obtained 18 CFFs which were found to be vital failure factors in SME (refer Table II) and 24 CFFs found to be vital failure factors for the bank (see Table III). The main contribution of the study was using the TOPSIS SIMOS in both the manufacturing and service industries, which is the first of its kind. Moreover, the study identified vital failure factors of LSS in both the SME and bank, enabling the practitioners to understand the common hurdles in an LSS project. Also, it aids them in taking the necessary measures to make the project successful. 
Several studies by authors like Albliwi et al. (2014) focused on reviewing the literature for CFFs of LSS. However, an evaluation model is much needed for the practitioners in the business environment. Balaji and Senthil Kumar (2016) applied TOPSIS SIMOS for identifying the Lean Resource person, but failed to evaluate the human factors and rank the vital lean factors an employee should possess. Moreover, very few studies have suggested a cut-off value for the factor ranking. Therefore, in this study, the researchers have proposed a cut-off value aiding in classifying the vital failure factors, which facilitates the management in taking the necessary steps to resolve the issues in the project.\n5. Managerial implications: The LSS project consists of several parameters and measures. To complete a project successfully, all the parameters need to be understood in a better manner. In such a condition, a practitioner needs to know whether the project deviates or fails due to the CFFs. To prevent LSS project failures, the CFFs need to be analyzed. The CFFs are ranked through expert opinions. Before starting an LSS project, the vital failure factors that affect the project can be removed with a threshold value so that the management can focus on the factors that were above the threshold value. CFFs less than 0.5 are analyzed and removed by the administration at a later stage of the project. This approach can be used until all the CFFs are withdrawn from the project.\n6. Conclusion: In this study, an integrated TOPSIS SIMOS approach was used to rank the CFFs of LSS. Compared with other methods like AHP, PROMETHEE, DEA and their fuzzy versions, the TOPSIS SIMOS approach is simple and fruitful. The specialty of SIMOS is that it can be used in situations where factor weights cannot be obtained. The proposed model, the TOPSIS SIMOS approach, can be easily incorporated into a computer-based system for application and computation. 
Four main contributions have been achieved for the proposed TOPSIS SIMOS approach. First, the proposed approach reduces the complexity and uncertainty of DMs' scores required for weighing the factors in the MCDM methods. The TOPSIS SIMOS approach aids in attaining a specific weight for each factor without ambiguity. Second, in some previous studies, TOPSIS was combined with fuzzy and complicated statistics, which may narrow its use in real-world applications. The proposed TOPSIS SIMOS approach reduces the mathematical complexity. Third, in this study, we focus on the rank evaluation phenomenon in the MCDM methods, and we present a new TOPSIS SIMOS-based methodology to rank the CFFs. The final contribution is the introduction of the SIMOS approach for incorporating weights from the experts in a decision-making environment. The case study performed in this study is in the initial stage of exploration, so further research can be done using the TOPSIS SIMOS approach. Using the vital failure factors, an organization can remove the causes of LSS project failure. Furthermore, this procedure can be applied to health care, accounting and finances, logistics and public utilities, where more projects are taken up with day-to-day application.\n",
"role": "user"
},
{
"content": "Purpose: Many projects focus on the reliable operation of the activities in the project. Any failure in the process activities leads to major problems resulting in waste, defects, equipment damage, which has a direct impact on the consumers. In addition, Lean Six Sigma (LSS) is not new to this issue. LSS projects have faced an interruption in the process flow and unforeseen defects. Therefore, the purpose of this paper is to identify the vital failure factors of LSS projects.\nDesign/methodology/approach: Through extant literature review, the researchers found 44 critical failure factors (CFFs) of LSS. Using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) SIMOS approach, the decision makers' (DMs) rating and weight for each factor were collected. Moreover, the study was conducted in both the manufacturing and service industries to identify the impact of CFFs in LSS projects.\nFindings: CFFs and their evaluation have received little attention in the literature. Most of the previous studies deal only with the identification of the success factors in general. Therefore, the study identified 44 CFFs and ranked them through DMs. In addition, the TOPSIS SIMOS approach ranked the vital failure factors enabling the management to avert the LSS project from failures.\nResearch limitations/implications: The study focused on project failures due to CFFs of LSS. Nevertheless, it did not consider other failure factors of project and knowledge management. Further, the sample used to test the approach was considerably small. Therefore, the study can be repeated with significant samples and the vital failure factors compared.\nPractical implications: In real-life application, all the parameters in the LSS project need to be understood in a better manner. 
In such a condition, the practitioner needs to know that the project never fails due to the CFFs, and the TOPSIS SIMOS approach can prevent LSS project failures.\nOriginality/value: The study applied the TOPSIS SIMOS approach to rank the CFFs in an LSS project, which is the first of its kind and aids practitioners in making the right decisions in the business environment.\n",
"role": "assistant"
}
] |
The conceptual arguments concerning accounting for public heritage assets: a note
|
[
"Heritage",
"Asset valuation",
"Museums"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The controversy concerning the appropriate method of accounting for public heritage assets continues on unabated. Accounting standard setters and governments in both Australia and New Zealand require that they be accounted for under generally accepted accounting principles (GAAP). In Australia, these principally comprise SAC 4 (AASB, 1992), AAS 27 (AASB, 1996a), AAS 29 (AASB, 1996b), and the Department of Finance (1995); and in New Zealand, the Financial Reporting Act (1993) and FRS 3 (2002). However, many academic commentators and institutions managing public heritage assets continue to oppose this form of accounting. (Mautz, 1988; Pallot, 1990; Glazer and Jaenicke, 1991; Carnegie and Wolnizer, 1995, 1996; Rentschler and Potter, 1996; Stanton and Stanton, 1997; Barton, 2000; Newberry, 2001). The contemporary review of the situation in New Zealand (Hooper, 2005) highlights the controversy currently occurring there.\n1 Public heritage assets - their importance and nature: Public heritage assets (PHA) play an important role in every nation in protecting and enhancing its history, culture, natural environment and recreational facilities for the enjoyment of citizens. These assets are of such unique importance that governments choose to accord them special treatment by declaring them to be public facilities, open without discrimination to all members of the public, at little or no charge. They are provided for social purposes rather than for the generation of revenue, and governments choose to fund their provision wholly or largely from taxation revenues. Their use is tightly governed by legislation to ensure that the assets are protected from improper use or sale and are to be conserved and maintained in good condition so that they will have long, indefinite lives and can continue to provide their important social benefits into the indefinite future. 
The decision by a government (or its custodial agency) to put selected heritage assets into the public domain to promote and preserve the nation's culture has two important implications for accounting and financial reporting. First, the items become public goods assets whose market characteristics fundamentally differ from those in private markets; and second, the assets should be accounted for and reported on separately from the normal commercial assets of the custodial entity as assets held in trust for the nation. Because of the conditions under which the benefits of PHA are provided, they become public goods in contrast to private goods provided to buyers in normal commercial markets. Public goods are characterized by non-rival and non-excludable consumption attributes (Stiglitz, 1988, Chapters 1-4; Barton, 1999a, b). Thus, when persons visit museums, art galleries and so on, they do not reduce the volume of services available to other potential users (except where congestion problems arise), i.e. the services are non-rival as they are shared with all users. This contrasts with private goods, such that when a person purchases a good, this reduces the supply available for other potential purchasers; they are not shared goods. Supply must be increased to satisfy additional purchasers' needs. Second, consumption of public goods is non-excludable as all citizens have equal rights of access to enjoy the benefits from the assets and no one citizen can deny access to other members of the public. In contrast, purchasers of private goods obtain ownership of them and they can exclude others from using them. They can use them as they choose to do so, including reselling them. Hence with public goods provided on a shared basis to users, individual users obtain no private rights of possession (property rights); they can only use the good in the manner prescribed in its provision, e.g. view it; and they cannot dispose of the good. 
Furthermore, their use of the good does not harm the good in any way or shorten its lifespan. These conditions are in complete contrast to the conditions pertaining to private goods. Private firms cannot supply goods under these conditions because they cannot charge a price to recover the full cost of service provision from all users. Such a price is likely to be prohibitively high and exclude interested citizens from using the facilities. This defeats the social objective of having them. Second, because most of the costs incurred are fixed or sunk costs, and an optimum pricing policy involves charging a price equal to marginal cost, the price would need to be zero or negligible. But the firm cannot recover its full costs under these conditions. Hence the only solution is for such assets and their services to be provided by government to citizens on a shared basis and to fund them collectively from taxation. Governments can provide such goods to citizens on a more efficient and effective basis than can private firms (Stiglitz, 1988, Chapter 3). Governments make policy decisions to provide the services of specified heritage assets on this basis because they regard the benefits as being sufficiently important to do so, i.e. they consider that the social benefits exceed the financial cost. In this way, they enhance the social welfare of the nation. However, they will not necessarily provide all heritage assets in the nation on this basis - some may continue in private ownership, and as such they will not be open to public use. Rather, governments must choose the most preferred types of heritage items from among the options available to them. 
The decision as to which ones are to be chosen for public use is a political one made by governments on behalf of citizens, after taking into consideration public requests.\n2 How should public heritage assets be accounted for?: The case advanced for applying GAAP to public heritage assets is based on the sector neutrality argument (McGregor, 1999, p. 3). It is contended that because many of the same types of assets as public heritage assets exist in the private sector, they should be accounted for under the same set of accounting standards. This will then facilitate comparisons of information between the sectors for decision making and performance measurement purposes. Thus Rowles (1992), for example, points out that there are many valuable works of art, ancient artifacts, notable historic buildings and so on in private ownership. This is undoubtedly true and is not in dispute. But the physical attributes of the assets are not the relevant criteria for determining the method of accounting for these special assets chosen by government to be in the public domain. The decision removes the assets from private markets and quarantines them in the public sector under protective legal covenants. This is in accord with the statement in Hooper (2005, p. 8) that \"...English law removes ... heritage assets from the market place\", when quoting Carman. The markets in which the assets and their services are provided are the relevant determinant of the appropriate method of accounting for them, and the markets provide the basis for their valuation. Land provides a good example of this principle. The value of the same plot of land varies according to its use purpose clause - whether it is to be used for community open space, agriculture, housing, retail shops and so on. The sector neutral approach is invalid because it ignores this fundamental determinant of value. 
The approach is embodied in the new public management (NPM) philosophy, which contends that business market principles should be applied to the public sector (Self, 1993). NPM is largely based on the political ideology espoused in public choice theory (Self, 1993; Corbett, 1996, p. 69; Barton, 2000). Because of the social nature of the benefits, the non-rival and non-excludable mode of provision of the services, and the severe restrictions on their resale, there are no active markets for most PHAs. It is thus not normally possible to obtain a \"fair market value\", i.e. the price at which knowledgeable buyers and sellers are willing to trade the asset. Moreover, where private markets for some items do exist, such as for popular works of art, the prices paid will not encapsulate the social benefits because they are determined only with respect to the private benefits expected to be enjoyed by the buyer (Barton, 2000, p. 228). A socially optimal result requires that social benefits be included in the price. But for most PHAs, there are no private markets at all for the items. Most items in public museum collections, archives, art galleries, community and national parks and so on cannot be sold because the managing entities are prohibited from doing so under their charters. As a consequence of the above factors, PHAs do not satisfy the accounting definition of an asset, viz. (SAC 4, 1992, para 14) \"Assets are future economic benefits controlled by the entity as a result of past transactions or other past events.\" Future economic benefits arise from sales of services to customers, and they must be measurable in cash terms. These benefits form the basis for the financial valuation of the asset. But the services of PHAs are provided freely to citizens and so they are non-revenue generating. Furthermore, because the asset cannot normally be sold, its economic benefits cannot be realized through the sale of the item. 
The definition also requires that the entity has control of the asset such that it can receive its benefits. Otherwise, for private assets, the owner may receive little or nothing in return for the investment in the asset. However, in the case of PHAs, the benefits accrue to the public at large and not to the government as owner. Thus, as public goods, heritage assets do not satisfy either the cash generating or control aspects of the definition, and they are not normally capable of being financially valued on a reliable, fair market value basis. Because they cannot provide cash for the repayment of the entity's liabilities, it is not appropriate to match them against the liabilities to determine its financial position. Furthermore, substitute valuations for fair market valuations are not normally available, unlike for private assets. Many museum items are discovered in archaeological digs and are handed down over the centuries. They, and historic works of art, cannot be replicated. A fake painting or museum artifact is worthless. Replication costs can be estimated for historic buildings, statues and so on, but these are not necessarily appropriate for financial accounting purposes, though the information may be useful for disaster insurance purposes. 
Historic costs may be available for some purchased assets, but of what relevance are they for good asset management if they relate back to a distant, past period? Detailed analyses of whether PHAs are assets in terms of the accounting definition, and whether they can be valued on a reliable basis, are covered in Mautz (1988), Pallot (1990, 1992), Carnegie and Wolnizer (1995, 1996, 1997, 1999), Glazer and Jaenicke (1991) and Barton (2000). In conclusion, it is evident that PHAs do not satisfy the accounting concept of an asset because of the above factors and hence they should not be included in a statement of financial position.\n3 Public heritage assets as assets held in trust: Invariably in the charters of the entities that manage PHAs, there are requirements that the items must be protected, conserved and maintained for the benefit of current and future generations, and that they cannot be sold except in rare circumstances. Furthermore, the assets must be available for use by the public at large so that they can receive the social benefits intended. The benefits do not accrue to the government as owner of the assets or to the museum, etc. as the custodial entity. Clearly, these assets should be treated separately from the government's, and the entity's, normal operating assets. Their separate classification has been argued for in the past by Mautz (as facilities, 1988), Pallot (as community assets, 1990), FASAB (as stewardship assets, 1996a, b) and Barton (as trust assets, 2000). I believe it is appropriate to treat PHAs as items held in trust for the nation by the government through its custodial agency. The assets belong to the nation, i.e. the citizens, they are managed as public goods by government on behalf of its citizens, and are not available for sale (unless there is a change in policy). This notion of trust management is embodied in much current legislation. For example, the National Trust of Australia's mission is \"... 
to ensure the conservation of Australia's heritage ... \" (Department of Finance, 1995, p. 370). Governing bodies of public art galleries, etc. are normally constituted as boards of trustees, and the rebellious New Zealand institutions stress the importance of their custodial role in their opposition to the imposition of FRS 3. The correct method of accounting for assets managed on behalf of others is that they be treated as assets held in trust by the custodial entity. Trustees are required to manage the assets for the benefit of the specified beneficiaries, here, the public at large, and to conserve them for posterity. This is the appropriate method of accounting for PHAs, and it provides a further reason for not including them in the balance sheet of the custodial entity. It is legally wrong to mix together normal operating assets and trust assets because the trust assets are not legally accessible by creditors. The mixed information is grossly misleading and lacks representational faithfulness of the financial position of the entity.\n4 Conclusion: The case for accounting for PHAs as normal commercial assets must be rejected. The justification provided in terms of sector neutrality is theoretically unsound because it ignores the source of value for assets, i.e. the markets in which they are used. Similarly, the physical characteristics of assets used to justify the sector neutral approach are irrelevant for their accounting treatment. Because of their importance for the culture, etc. of the nation, governments make collective decisions in response to popular demand to earmark some treasured items as public heritage assets, and to fund their provision on a collective basis through taxation. It is one of the roles of government to do such things as the services can be provided more efficiently and effectively to users in this manner than through private markets. 
The designation of such heritage treasures as public heritage assets removes them from normal commercial markets and hence they cannot be valued at \"fair market prices\" on a reliable basis. This, together with the requirement that the custodial entities are not normally allowed to sell the items, indicates that PHAs should not be included in a statement of financial position of the entity. Rather, they are more appropriately reported in non-financial terms in a separate statement of assets held in trust. The title of the article by Hooper (2005) drawn from Oscar Wilde, \"Knowing the price of everything and the value of nothing\", so aptly summarises the futility of ascribing irrelevant financial valuations to PHAs. It is important that the managers of PHAs collect information which is useful for the performance of their functions in providing services to the public in an efficient and effective manner and for the conservation of the assets. Useful information must be relevant to their functions and be reliable. The types of information required are examined in various references such as Glazer and Jaenicke (1991), Carnegie and Wolnizer (1995, 1996), Rentschler and Potter (1996), FASAB (1996a, b) and Barton (2000). This information does not include the financial valuation of the heritage assets on a commercial basis and its inclusion in a statement of financial position because it is both irrelevant and unreliable. The provision of irrelevant and unreliable information frustrates good management of the assets rather than enhances it.\n",
"role": "user"
},
{
"content": "Purpose: - This paper seeks to reply/comment on the paper by Keith Hooper et al., titled \"'Knowing the price of everything and the value of nothing': accounting for heritage assets\".\nDesign/methodology/approach: - This note uses the economic theory of public goods to complement the explanation and analysis of Hooper et al's article on accounting for public heritage assets by New Zealand museums.\nFindings: - Using public goods theory from economics, the paper explains why the \"sector neutral\" approach, which underlies the application of commercial accounting standards to public heritage assets, is unsound; why \"fair market prices\" cannot be obtained for public heritage assets because of their being public goods; and why they should be accounted for outside the statement of financial position as assets held in trust by the custodial entities.\nOriginality/value: - The opposition of some of the museums to the accounting standards being imposed on them is soundly based in the economic theory of public goods.\n",
"role": "assistant"
}
] |
New insights into underreporting of time: the audit partner context
|
[
"Audit partners",
"Audit quality",
"Ethical acceptability",
"Organizational ethical culture",
"Pressures",
"Underreporting of time"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Audit firms are essentially profit-making entities. Their survival depends on the relationship between audit fees and the cost of performing an audit engagement (International Auditing and Assurance Standards Board (IAASB), 2014). In the pursuit of profitability, audit firms attempt to minimise the costs of their services. However, the need to serve the public interest forces them to provide high-quality service, which increases the costs of audit engagement. These opposing objectives lead to conflict between cost and quality (McNair, 1991).\n2. Literature review and hypothesis formulation: 2.1. URT in the audit partner context\n3. Research design: 3.1. Methodology\n4. Results: 4.1. Measurement of variables and descriptive results\n5. Discussion: This study provides insights into the cost/quality conflict from the perspective of audit partners in Spanish audit firms. It provides empirical data regarding self-reported URT, offering new insight into the variables that may influence URT among high-ranking auditors. In particular, it analyses the effect of the pressures perceived by auditors with respect to audit budget constraints on URT. It also tests the relationship between URT and other variables, such as a firm's ethical culture, auditors' ethical acceptability of URT, and the influences of peers and superiors on ethical conflict resolution.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to provide evidence on underreporting of time (URT) by audit partners and the factors that drive URT at their level. In particular the study tests the relationship between URT and the following variables: pressures perceived by auditors related to audit budgets, the ethical acceptability of URT, the influence of peers and superiors on the resolution of ethical conflicts, and organisational ethical culture. A deeper analysis of URT practices is necessary, as failing to correctly report the total hours worked by audit partners poses a threat to audit quality, and can have a detrimental effect on individual auditors, audit firms, and even the auditing profession itself.\nDesign/methodology/approach: A multiple regression data analysis was conducted, based on 84 responses from Spanish audit partners working in small- and medium-sized audit firms.\nFindings: The results reveal that URT is affected by the pressures perceived by auditors related to audit budgets, the ethical acceptability of URT, and the influence of peers and superiors on the resolution of ethical conflicts.\nOriginality/value: This study contributes to international literature on the organisational context of audit firms by analysing URT from the perspective of audit partners. It also has practical implications, as it focuses on understanding the beliefs and behavioural patterns of audit partners, which is critical to proposing mechanisms for avoiding dysfunctional behaviour at all levels of audit firms.\n",
"role": "assistant"
}
] |
Longitudinal models for dynamic segmentation in financial markets
|
[
"Dynamic segmentation",
"Financial products",
"Latent class Markov models",
"Latent growth models"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In this paper, two alternative approaches to clustering categorical data are compared: the latent class Markov (LCM) approach and the latent class growth (LCG) approach. Markov type models describe change between consecutive time points while growth models study the development of behaviours over time. Both approaches have been shown to be very interesting for dynamic market segmentation (Paas et al., 2007; Bassi, 2012). In this paper the focus is on understanding which approach performs better in terms of fit and of usefulness of the obtained results. The LCM model represents dynamics over time across latent states with conditional probabilities following a Markov chain. Covariates may affect both the initial state of the latent chain and transition probabilities. In the context of market analysis, latent states aim at representing segments of consumers. The LCG model identifies distinctive groups with different trajectories: it may be seen as an extension of the growth curve model (Nagin, 2005) in which a categorical variable is used to capture heterogeneity in developmental trajectories; moreover, the relationship between latent classes and covariates may be estimated as well as the effect of covariates on the latent trajectory. Groups identified by the mixture may be interpreted as market segments.\n2. LCM models: Let us consider the simplest formulation of LCM models (Wiggins, 1973), which assumes that true unobservable transitions follow a first-order Markov chain. As in all standard latent class model specifications, local independence among the indicators is assumed, i.e., indicators are independent conditionally on latent variables.\n3. Growth models: Growth models study the development of individuals over time capturing the dependence introducing one or more latent variables (Vermunt, 2007). 
Basically, growth models are regression models for two-level data - time points nested within individuals - in which time enters as a predictor.\n4. LCM and LCG models compared: Both approaches described in the previous paragraphs can be usefully employed for dynamic segmentation since they make it possible to identify groups of customers with similar characteristics, i.e., market segments, to follow their behaviour over time and, if necessary, to predict it. However, the two approaches rely on different assumptions.\n5. The application: the survey and the data: For the application, data on financial product ownership by Italian households collected with the SHIW conducted by the Bank of Italy is used. The survey has been running since 1965 and, with few exceptions, was conducted on a yearly basis until 1987. Thereafter it was run every two years. In 1989 a panel component was introduced so that at each wave the sample consists of two components: a panel sub-sample made up of households who participated in previous waves and a fresh cross-sectional sub-sample. Table I contains the survey design and the dimension of the two sample components from 1987 to 2010. The survey collects information on income, savings, consumption expenditure and the real wealth of Italian households, as well as on household composition, demographic characteristics and labour force participation (Giraldo et al., 2001). In the paper, the sample of 1,834 households who participated in all waves from 2002 to 2010 is considered.\n6. Results: The best fitting LCM model to our data has five latent states which represent market segments, a first-order stationary Markov chain and all considered covariates significantly affect the initial state and transitions of the latent chain. 
In this application, response variables are the 13 binary indicators y_ijt, assuming value 1 if household i holds financial product j at time t, where i varies from 1 to n (the sample dimension), j from 1 to 13 and t from 1 to 5, since five survey waves are considered, from 2002 to 2010. The LCM model with concomitant variables is estimated in one step (Vermunt, 2010) starting with one latent state per each latent variable and increasing the number of latent states till the Bayesian information criterion (BIC) increased. On this model, the assumption of time-constant transition probabilities was accepted by means of the conditional test. Response probabilities are set constant in order to make segments stable over time. Table IV lists estimation results: segment sizes and response probabilities. The largest group (50.10 per cent) is composed of households owning only a bank or a postal deposit and very few forms of other assets, showing that they rely heavily on liquid savings forms for transactional purposes; the group with dimension 12.42 per cent is that comprising the poorest households: they do not own any kind of financial asset, moreover, an important percentage does not even hold a deposit. 19.57 per cent of families own a deposit and have made some investment in financial assets with a preference for less risky ones such as postal bonds and COOP. A segment with dimension 9.23 per cent contains households that mainly possess state bonds (as well as deposits). Finally, a segment made of 8.68 per cent of families owns a deposit, state bonds and one or more risky financial assets, with a quite diversified portfolio. To help interpretation, in Table IV segments are ranked in ascending order of product penetration rates, from households owning only DEP to households owning more sophisticated financial products.\n7. 
Conclusion: Dynamic market segmentation is a very important topic in many businesses where it is interesting to gain knowledge on the reference market and on its evolution over time. Various papers in the reference literature are devoted to the topic and different statistical models are proposed.\n",
"role": "user"
},
{
"content": "Purpose: Dynamic market segmentation is a very important topic in many businesses where it is interesting to gain knowledge on the reference market and on its evolution over time. Various papers in the reference literature are devoted to the topic and different statistical models are proposed. The purpose of this paper is to compare two statistical approaches to model categorical longitudinal data to perform dynamic market segmentation.\nDesign/methodology/approach: The latent class Markov model identifies a latent variable whose states represent market segments at an initial point in time, customers can switch to one segment to another between consecutive measurement occasions and a regression structure models the effects of covariates, describing customers' characteristics, on segments belonging and transition probabilities. The latent class growth approach models individual trajectories, describing a behaviour over time. Customers' characteristics may be inserted in the model to affect trajectories that may vary across latent groups, in the author's case, market segments.\nFindings: The two approaches revealed both suitable for dynamic market segmentation. The advice to marketer analysts is to explore both solutions to dynamically segment the reference market. The best approach will be then judged in terms of fit, substantial results and assumptions on the reference market.\nOriginality/value: The proposed statistical models are new in the field of financial markets.\n",
"role": "assistant"
}
] |
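The model-selection rule described in the market-segmentation entry above (add latent states until the Bayesian information criterion, BIC, stops improving) can be sketched with a basic latent class model. The sketch below is a static mixture of independent Bernoullis fit by EM, a deliberately simplified stand-in for the latent class Markov model with concomitant variables, which is usually estimated with specialized software; the function name and the toy ownership data are invented for illustration.

```python
import numpy as np

def fit_latent_classes(X, k, n_iter=200, seed=0):
    """EM for a mixture of independent Bernoullis: a basic (static) latent
    class model with class sizes pi and response probabilities theta."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    theta = rng.uniform(0.25, 0.75, size=(k, d))
    for _ in range(n_iter):
        # E-step: posterior class memberships, computed in log-space
        log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        resp = np.exp(log_p - np.logaddexp.reduce(log_p, axis=1, keepdims=True))
        # M-step (lightly smoothed so no class collapses to zero weight)
        nk = resp.sum(axis=0)
        pi = (nk + 1e-3) / (n + k * 1e-3)
        theta = np.clip((resp.T @ X + 1e-3) / (nk[:, None] + 2e-3), 1e-6, 1 - 1e-6)
    ll = np.logaddexp.reduce(
        X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi), axis=1
    ).sum()
    n_params = (k - 1) + k * d          # free mixing weights + response probs
    return pi, theta, -2 * ll + n_params * np.log(n)   # BIC

# Toy binary ownership data with two planted segments, e.g. "deposit only"
# vs "diversified portfolio" (all numbers invented for illustration).
rng = np.random.default_rng(1)
seg1 = (rng.random((300, 6)) < [0.9, 0.1, 0.1, 0.1, 0.1, 0.1]).astype(float)
seg2 = (rng.random((300, 6)) < [0.9, 0.8, 0.7, 0.6, 0.2, 0.3]).astype(float)
X = np.vstack([seg1, seg2])

# Increase k until the BIC stops improving, as in the selection rule above.
bics = {k: fit_latent_classes(X, k)[2] for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
```

On this toy data the BIC should bottom out at k = 2: the extra (k - 1) + k*d parameters of larger models are penalized by the log(n) term faster than they improve the likelihood.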
Conceptualising the influence of the cultural orientation of Latin Americans on consumers' choice of US brands
|
[
"Latin America",
"Cultural dimensions",
"Country of origin effects",
"Consumer choice",
"Advertising",
"Branding and brand management",
"Cultural orientation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: According to Fastoso and Whitelock (2011), there is a need for more research on emerging markets. Although some research interest has recently been shown in Latin America in management areas, Latin America has been relatively neglected in both the international business and marketing fields (Burgess and Steenkamp, 2006). Specifically, the influence of US brands on Latin American consumer behaviour is an under-researched topic that is particularly important to investigate, given that the gross domestic product of this region is nearly US$ 4 trillion and the population of over 550 million is highly important to the global economy (International Monetary Fund, 2010). More critically, Latin America is a fertile ground for generating new theories and models for different contexts (Fastoso and Whitelock, 2011).\nImportance of the topic: Although culture and cultural orientation have been well researched in international business and marketing literature, and even though there has been a growing interest in theories related to buying intentions and consumers' attitudes and perceptions with reference to a product's country of origin, it is found that there is a lack of adequate research on the relationship between consumers' cultural orientation and their bias towards imports and domestic products (Yoo and Donthu, 2005). Similarly, there is a lack of research on country-of-origin effects on consumers' purchase behaviour, which includes purchase intentions towards foreign products and consumers' evaluation of such products in emerging countries (Sharma, 2011). 
In particular, research on ethnicity as a potential influence on consumer ethnocentrism has been largely ignored (Yoo and Donthu, 2005).\nLiterature review: Country of origin\nConceptual framework and propositions: The proposed conceptual framework below consists of various constructs such as consumer ethnocentrism, consumer xenocentrism and conspicuous consumption. This conceptual framework is shown in Figure 1.\nGeneral discussion: This study has reconciled disparate theories from the prior literature review to identify how Latin Americans' cultural orientation influences their preferences for foreign brands over domestic ones. First, this paper covered the country of origin construct, which it described as the \"made in\" description of a product, and which has the capability to communicate the quality and value of a product to consumers (Phau and Cheong, 2009). The influence of a product's country of origin on consumers' perception and judgements has been widely studied in the marketing, consumer behaviour and international business literature. However, there is a lack of research on the effects of country of origin on consumers' purchase behaviour, including purchase intentions. Nevertheless, through the study of consumers' cultural orientations, focusing particularly on consumer ethnocentrism and xenocentrism, it was possible to determine that consumers have different perceptions about foreign and domestic brands and products, depending on the cultural orientation consumers follow (Roth and Romeo, 1992).\nTheoretical and practical implications: This conceptual paper has provided a valuable insight into how consumers' cultural orientation affects their consumer ethnocentrism and xenocentrism; especially, it examines how a consumer's cultural orientation relates to favourable evaluations of domestic (versus foreign) products. 
The inclusion of conspicuous consumption and consumers' national characteristics as moderating variables recognises the fact that Latin American consumers are not homogeneous, but vary significantly. While there are some common cultural characteristics as seen from studies by Hofstede (1980) and Ronen and Shenkar (1985), some studies have shown the existence of significant differences (Lenartowicz and Johnson, 2003). We are aware that some countries have been more exposed than others to the US influence over the years, and that impacts on the consumers' national characteristics. However, we are unable to predict the direction and the magnitude of this influence, which requires empirical testing. This paper contributes to the growing body of research on consumer orientations and country-of-origin effects, providing a better interpretation of consumer behaviour in international and domestic markets. In the same way, this paper has contributed to the marketing literature in the field of consumer behaviour, advertising and international business (Fastoso and Whitelock, 2011). Importantly, the concept of xenocentrism has provided an alternative framework for interpreting and understanding Latin American consumers' behaviours, and this can also be applied when examining the consumer behaviour of diverse ethnicities and cultures.\nLimitations and further research: Although this paper contributes to the marketing literature in several ways, there are some limitations that are worth mentioning for further research. First, this paper has solely examined high levels of consumer ethnocentrism and xenocentrism, ignoring moderate and low levels of intensity. Low ethnocentrism refers to when a consumer is not concerned about their interaction with other cultures and may purchase foreign products; additionally, a consumer with moderate ethnocentrism diminishes contact with other cultures, but buys foreign products only in specific circumstances. 
The same definition can be related to xenocentrism in a reversed manner. Low consumer xenocentrism is when consumers prefer foreign products but perceive domestic products as more convenient and inexpensive, and consumers with moderate xenocentrism levels prefer foreign products, but still purchase domestic products such as convenience goods and other necessities. Thus, it would be interesting to examine how different levels of consumer ethnocentrism and xenocentrism among Latin American consumers and among other ethnicities and cultures influence attitudes and behaviours towards foreign and domestic brands and products.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to identify the main cultural factors that influence Latin American consumers' intentions to purchase US brands. Although culture and cultural orientation have been well researched in international business and marketing literature, there is a lack of research on the relationship between consumers' cultural orientation and their bias towards foreign and domestic products.\nDesign/methodology/approach: The paper reviews the extant literature with a particular emphasis on the key constructs of consumer ethnocentrism, consumer xenocentrism, conspicuous consumption and consumers' national characteristics. Based on this review, the authors propose a conceptual model showing the influence of cultural orientation on the selection of US brands in Latin America.\nFindings: The review of the literature shows that previous studies support the proposition of cultural orientation and preferences for foreign versus domestic products among Latin American consumers. Accordingly, in their conceptual framework, the authors posit that consumer ethnocentrism negatively influences the selection of US brands, while xenocentrism does the opposite. Conspicuous consumption is posited as moderating the influence of consumer xenocentrism on purchase intentions of the US brands. 
On the other hand, national characteristics of consumers in Latin America are posited as moderating the influence of both consumer ethnocentrism and consumer xenocentrism on the selection of US brands.\nPractical implications: The authors also present important theoretical and practical implications that contribute to the growing body of research on consumer acculturation and country of origin effects, providing a better interpretation of consumer behaviour in the context of international and domestic markets.\nOriginality/value: This study fills a significant gap in the understanding of the impact of cultural orientation and conspicuous consumption on selection of US brands in Latin America. Its conceptual framework can provide the basis for future empirical studies and also improve understanding of emerging markets.\n",
"role": "assistant"
}
] |
Economic evaluation of cooperation scenarios in supply chains
|
[
"Business planning",
"Information systems",
"Distribution",
"Procurement",
"Supply chain management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nSummary: In this paper we explore the potentials of inter-organizational cooperation in the field of Supply Chain Management. On the basis of the framework of Wyner and Malone we examine different coordination mechanisms. Using an analytical approach as well as simulation runs we show that the centralized coordination scenarios usually lead to superior results compared to different decentralized approaches in the field of procurement planning and distribution planning. This quantified value of cooperation can consequently serve as a basis for a profitability analysis on the usage of information and communication technology on an inter-organizational level.\n1. Introduction: Supply Chain Management aims at integrating all key business processes across the entire supply chain (Handfield and Nicols, 1999). Thereby, an integrated data processing is a precondition for the design of interorganizational business processes in supply chains (Gunter et al., 2006). This requires a comprehensive integration of both intra- and inter-organizational information and communication technology. However, this integration poses a challenge to many companies, because in most cases supply chain partners are using incompatible systems (Themistocleous et al., 2002, pp. 1087-8; Jharkharia and Shankar, 2005). In addition, specific Supply Chain Management software which enables cooperative planning of processes is required.In order to decide upon investments in Supply Chain Management software, companies have to evaluate the benefits of cooperation. On the one hand, it is widely accepted that through cooperation in supply chains companies can achieve positive economic effects by combining resources, such as production plants, employees, or information (Handfield and Nicols, 1999; Gebauer and Buxmann, 2000; Udomleartprasert and Jungthirapanich, 2006). 
On the other hand, less is known about how to quantify the value of cooperation. Against this background this paper estimates the value of cooperation in the fields of procurement as well as distribution planning. In section 2 we commence by describing different cooperation scenarios in supply chains and opportunities to support these by means of information technology. Section 3 introduces an analytical approach to evaluate different cooperation scenarios in the field of procurement planning. Thereby the reduction of the bullwhip effect which is achieved by different cooperation scenarios acts as an indicator to evaluate cooperation benefits. In section 4 we use simulations on the basis of our prototype SCOptimizer to quantify the benefits of cooperative procurement and distribution planning. A quantified value of cooperation in the field of procurement as well as distribution planning can provide a basis for a profitability analysis of the usage of information and communication technology on an inter-organizational level. Thus, the overall aim of both approaches is to increase transparency of benefits achieved by cooperation in supply chains. The paper ends with conclusions.\n2. Cooperation scenarios in supply chains: The main idea of cooperation in supply chains is that integrated, cooperative planning among the actors, e.g. suppliers, manufacturers, distributors as well as logistics service providers, leads to better results than isolated planning (Handfield and Nicols, 1999; Udomleartprasert and Jungthirapanich, 2006). In the following, we will differentiate between three cooperation scenarios (Wyner and Malone, 1996) (see Figure 1). In the case \"no cooperation\", there is neither information sharing nor any other kind of coordination of the planning processes between the supply chain members. 
\"Decentralized cooperation\" implies that the supply chain members first plan independently, but in the next step exchange information about their individual plans that may be relevant for the planning processes of other members. In the scenario \"centralized cooperation\" a central instance conducts the planning for all involved members of the supply chain.The different kinds of cooperation can be supported by information and communication technology such as EDI and XML/EDI. These technologies for instance can facilitate the exchange of information in a scenario like \"decentralized cooperation\". However, \"centralized cooperation\" requires a deeper integration, in which all relevant planning data (e.g. demand or production capacities) is available to a central planning instance. Based on this information, the central instance for example determines the production or sales plans for all involved supply chain members. Meanwhile, a variety of vendors offer Supply Chain Management Software products that support these functionalities (Knolmayer et al., 2002). In order to achieve the objectives of cooperation and to improve the efficiency and flexibility of supply chains, it is imperative to integrate the different IT-applications on company as well as cross-company level (Themistocleous and Irani, 2003; Themistocleous et al., 2004).However, the results of an empirical survey among manufacturers, suppliers, distributors, and logistic service providers from 25 countries in the European Automotive Industry show the main reason for not implementing Supply Chain Management Software was that companies were unable to quantify the benefit for their organization (Buxmann et al., 2004). This result accentuates a well known and hard-to-solve problem: Usually, it is more difficult to evaluate the potential benefits than it is to estimate the costs of information technology (also see Wieder et al., 2006). 
However, the decision to implement Supply Chain Management Software depends on the assessment of the benefits obtained in supply chain cooperation. This is the focus of the following sections.\n3. The bullwhip effect from a cooperative perspective: It has been recognized in many diverse markets that demand variability increases as one is moving up a supply chain from the customers through the suppliers. The so-called bullwhip effect has considerable cost implications. Above all, it leads to increasing inventory and material costs as well as increasing manufacturing expenses due to excess capacity, inefficient utilization of capacity, and overtime (Simchi-Levi et al., 2000, pp. 84-6). Lee et al. (1997) differentiate five fundamental causes of the bullwhip effect:\n1. non-zero lead-times;\n2. demand signal processing;\n3. price variations;\n4. rationing and gaming; and\n5. order batching.\nThe effects of these factors appear independently but enhance each other causing the bullwhip effect to increase. Recently, different authors have analyzed the complex relationships in this context. Steckel et al. (2004) emphasize the influence of lead times. Disney and Towill (2003) propose a control theoretic approach to measuring and avoiding the bullwhip effect. Li et al. (2005) discuss some novel phenomena, including the existence of an \"anti-bullwhip effect\". Zhou and Disney (2006) analyze the bullwhip effect in a closed loop supply chain. In the following, we will discuss the bullwhip effect with regard to the different cooperation scenarios displayed in section 2. In case there is no cooperation, every member of the supply chain conducts its own demand forecast and decides upon the order quantity. In order to quantify the bullwhip effect, Chen et al. (2000a) assume that every stage of the supply chain follows a simple order-up-to inventory policy, i.e. at the end of every period the inventory is accumulated to the order-up-to-point. 
Furthermore, they assume that all members of the supply chain use the simple moving average to estimate the demand and to schedule the order-up-to-point (for a discussion of the impact of different forecast models on the bullwhip effect see Chen et al., 2000b; Zhao et al., 2002; Zhang, 2004). In addition, safety stocks are not considered. The bullwhip effect is expressed by the variance of the orders placed by the different stages of the supply chain relative to one another - see equation (1) (for the derivation see Chen et al., 2000a): Var(qk)/Var(D) >= ∏i=1..k (1 + 2Li/p + 2Li^2/p^2) (1) where Li=lead time between stage i and i+1 of the supply chain; p=number of observations used in the moving average. This ratio describes the increase in variance of the orders placed by stage k, compared to the variance of orders placed by the customer. This situation is characterized by a multiplicative increase in the variability at each stage. A strategy for reducing the bullwhip effect is to make customer demand information available to every member of the supply chain (Lee et al., 1997). In this scenario the bullwhip effect can be quantified by equation (2) (see for the derivation once more Chen et al., 2000a): Var(qk)/Var(D) >= 1 + 2(∑i=1..k Li)/p + 2(∑i=1..k Li)^2/p^2 (2). This situation represents the scenario \"decentralized cooperation\". In this case, the increase in variability of the orders at each stage is an additive function of the lead time and the lead time squared. Hence, compared to the non-cooperative scenario the bullwhip effect is reduced, but not eliminated (also see Fiala, 2005, p. 420). In the case \"centralized cooperation\" one member of the supply chain decides on all orders on every stage of the supply chain. 
To the extent that in doing so demand forecasts on the different stages are no longer needed, the bullwhip effect can be eliminated. Decreasing the bullwhip effect improves profitability remarkably: Based on simulations, Metters shows that eliminating the bullwhip effect caused by the forecast error can increase profits by 5-10 percent (Metters, 1997, p. 91). As Croson and Donohue (2005, p. 249) point out, from a behavioral perspective sharing information can also provide a means to affect behavior and increase trust throughout the supply chain. Generally, the cost reductions resulting from information sharing in supply chains (also see Cachon and Fisher, 2000; Krol et al., 2005) represent the value of cooperation which has to be compared to the costs, e.g. of implementing Supply Chain Management Software.\n4. SCOptimizer - a prototype to evaluate the benefits of cooperation in supply chains: This section presents our prototype SCOptimizer. The purpose of this software is to support the evaluation of cooperation in the fields of procurement and distribution planning. First, we describe the open architecture; second, we show how to quantify the benefits of the different cooperation scenarios in the fields of procurement and distribution planning.\n4.1 The open architecture\n5. Conclusions: Inter-organizational cooperation is an integral part of Supply Chain Management. However, the question often arises of how to quantify the benefits of cooperation. Hence, the goal of our paper is to examine economic approaches to evaluate different cooperation scenarios in the context of Supply Chain Management. On the basis of the framework of Wyner and Malone we examine alternative cooperation mechanisms. Using an analytical approach as well as simulation runs we are able to quantify the advantages of cooperative planning in the fields of procurement planning as well as distribution planning. 
Thus, the overall aim of both approaches is to increase transparency of benefits achieved by Supply Chain Management. The results can be considered as an indicator to measure the economic value of cooperation. In order to realize these cooperative advantages in supply chains, appropriate software solutions are required. However, inter-organizational cooperative planning processes are currently only rudimentarily supported by Supply Chain Management Software, although we can affirm - and we have shown it in the simulation model with the SCOptimizer for the fields of procurement and distribution planning - that these planning areas involve a high benefit potential if performed cooperatively. The latent possibilities present a set of challenging opportunities for both software vendors and implementing companies in the near future.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this research is to examine economic approaches to evaluate different cooperation scenarios in the context of Supply Chain Management.\nDesign/methodology/approach: - The value of different cooperative scenarios is analyzed in the fields of procurement and distribution planning using an analytical approach as well as simulation runs.\nFindings: - On the basis of the framework of Wyner and Malone coordination mechanism in the field of Supply Chain Management was examined. Using an analytical approach as well as simulation runs it can be shown that the centralized coordination scenarios usually lead to superior results compared with different decentralized approaches in the field of procurement planning and distribution planning.\nOriginality/value: - Empirical data show that the difficulty in quantifying cooperation benefits is the main reason for not to implementing inter-organizational software solutions such as Supply Chain Management software. Therefore, a prototype is developed to quantify the value of cooperation in the fields of procurement planning and distribution planning.\n",
"role": "assistant"
}
] |
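The bullwhip comparison in the supply-chain entry above (equations (1) and (2), after Chen et al.) can be illustrated with a short simulation of an order-up-to policy with moving-average forecasts. This is a minimal sketch under invented parameters: i.i.d. normal end-customer demand, safety stocks ignored, negative orders allowed, and "decentralized cooperation" modelled by letting every stage forecast from the end-customer demand series.

```python
import numpy as np

def stage_orders(incoming, signal, p, L):
    """One supply-chain stage following an order-up-to policy.
    Forecast: p-period moving average of `signal`; the order placed at t is
    the last observed demand plus the lead-time-scaled forecast revision
    (safety stock ignored, as in the Chen et al. setup cited above)."""
    q = incoming.astype(float)
    for t in range(p + 1, len(incoming)):
        mu_now = signal[t - p:t].mean()           # forecast made at t
        mu_prev = signal[t - p - 1:t - 1].mean()  # forecast made at t-1
        q[t] = incoming[t - 1] + L * (mu_now - mu_prev)
    return q

rng = np.random.default_rng(0)
demand = rng.normal(100.0, 10.0, 4000)  # i.i.d. end-customer demand
p, L, stages = 5, 2, 3

# "No cooperation": each stage forecasts from the orders it receives.
s = demand
for _ in range(stages):
    s = stage_orders(s, s, p, L)
var_nocoop = s[200:].var()

# "Decentralized cooperation": every stage sees end-customer demand.
s = demand
for _ in range(stages):
    s = stage_orders(s, demand, p, L)
var_shared = s[200:].var()

var_demand = demand[200:].var()
# Information sharing dampens, but does not eliminate, the bullwhip effect.
assert var_demand < var_shared < var_nocoop
```

Across three stages the simulated order variance compounds stage by stage without information sharing (the multiplicative case), while sharing the customer demand keeps the growth additive in the lead times: reduced, but not eliminated, exactly as the entry states.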
An integrated model for performance management of manufacturing units
|
[
"Analytical hierarchy process",
"House of quality",
"Quality function deployment",
"Performance management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Globalisation and liberalisation create a need for having a manufacturing management system to estimate the performance of the manufacturing industry. Amaratunga and Baldry (2002) define performance management (PM) as the use of performance measurement information to effect a positive change in the organizational culture, systems and processes, by helping to set agreed-upon performance goals, allocating and prioritizing resources, informing managers to either confirm or change current policy or programs direction to meet these goals and sharing the results of performance in pursuing these goals. PM both precedes and follows performance measurement. The effective conduct of PM is generally divided into two stages:1. performance measurement; and2. performance improvement.Performance measurement is recognized as an important part of the manufacturing strategy literature. It is a process of quantifying actions, where measurement is the process of quantification and action correlates with performance. The definitions of three important terms in the context of performance measurement as given by Neely et al. (1995) are the process of quantifying the efficiency and effectiveness of action. A performance measurement system can be examined at three different levels: the individual performance measures, the performance measurement system as an entity, the relationship between the performance measurement system and the environment in which it operates. The key to the evaluation of performance measurement in our view has to be based first and foremost on identifying the function of the performance measurement system; and this, again, depends largely on the organizational context and the organizational culture management intent and strategy. 
Thus, performance measurement sets the agenda for bringing the more relevant-, integrated-, balanced-, strategic- and improvement-oriented PM (Tangen, 2004). Performance improvement is the positive change which is brought about by process re-engineering, reflecting the concern of the customer. Process improvement identifies the redundant and missing performance measures, as well as identifies potential conflicts between performance measures and targets for each performance measure. Quality function deployment (QFD) has been used to support manufacturing process improvement. This technique converts the \"voice of the customer\" into design, engineering, manufacturing and production terms to ensure the product meets the needs of the customer. Therefore, it is an effective tool to integrate marketing strategy with the product development process. The QFD procedure uses a series of matrices called house of quality (HOQ), to express the linkages between the inputs and outputs of the different phases of development (Hauser and Clausing, 1988). The HOQ typically contains information on the \"what to do\" (customer requirements), \"how to do\" (engineering characteristics), relation measures between customer requirements and engineering characteristics, as well as the correlation measures among the engineering characteristics and benchmarking data compared to competitors (Tan et al., 2004; Tang et al., 2002). In the redesign process, the HOQ needs to be developed and the optimal set of service requirements needs to be determined. Thus, the objective of this paper is to demonstrate the need for the manufacturing PM model, integrating performance measurement and performance improvement. Moreover, it explores the relationship between the measuring variables considering the objective and service quality factors, which are based not only on customer focus but also on the technical parameters based on the operational system. The paper is organized as follows. 
A brief theoretical background is presented on the performance measurement framework, and performance measuring tools are discussed followed by the problem identification and research methodology. A case study of two manufacturing units to illustrate the applicability of the model follows.\n2 Theoretical background: The following reviews serve as the background within which measurement of manufacturing performance is undertaken and the subsequent development of the integrated model is proposed. Next, the framework and dimensions on PM are presented.\n2.1 Review on performance measurement framework\n3 Problem identification: Various authors have through their papers explored the application of AHP-QFD combined tools in a variety of fields that range from education to sports. Koksal and Egitman (1998) applied the combined AHP-QFD approach to improve the education quality for the Middle East Technical University. Lam and Zhao (1998) used the combined AHP-QFD approach to identify appropriate teaching techniques. The AHP was used to evaluate the relative importance weightings of the students' requirements with respect to three criteria: skills development, interest and knowledge, and examination and job. Wang et al. (1998) suggested that the HOQ can be represented as a hierarchy if AHP was used together with QFD. The customer requirements and technical/design requirements in QFD can therefore be regarded as the criteria and alternatives in the AHP, respectively. Partovi (1999) applied the combined AHP-QFD approach to aid in project selection. Partovi and Epperly (1999) used the combined AHP-QFD approach to determine the composition of the US peacekeeping force deployed in Bosnia. Zakarian and Kusiak (1999) evaluated and selected the multi-functional teams using the combined AHP-QFD approach. Badri (1999) and Chuang (2001) applied the combined AHP-QFD approach in the facility location problem. 
Kwong and Bai (2003) used the combined AHP-QFD approach to aid in new product development. Lu et al. (1994) applied the combined AHP-QFD approach to evaluate and select the functional characteristics of environmentally friendly products. Partovi and Corredoira (2002) used the combined AHP-QFD approach to prioritize and design rule changes for the game of soccer. The objective was to increase the attractiveness to soccer enthusiasts. Myint (2003) proposed the combined AHP-QFD approach to aid in product design. Bhattacharya et al. (2005) applied the combined AHP-QFD approach to aid in robot selection. Partovi (2006) used the combined AHP-QFD approach to evaluate and select facility location for a company producing digital mass measurement weighted products for industrial use. Hanumaiah et al. (2006) presented the combined AHP-QFD approach to deal with the rapid tooling process selection. Our literature review shows that there is a research gap in the application of the integrated AHP-QFD tool in the PM of the manufacturing sector. With this background, this paper intends to exploit this unexplored area by using this tool to create an integrated closed loop model for PM in the manufacturing sector. The research seeks to devise an integrated methodology to evaluate and analyse the PM of manufacturing organizations through the process of performance measurement aided by multi-criterion decision-making tools such as AHP and EBG and the subsequent improvement of the performance measures using QFD by constructing a suitable HOQ matrix. The EBG model has been used to quantify the performance considering the objective and service quality factors. These factors are further evaluated by AHP required for the EBG model. Whenever required, QFD has been used to redesign the existing processes. 
The developed model is further exemplified using a case study on two similar manufacturing units.\n4 Research method: The proposed model for PM in manufacturing organizations consists of two phases; the fundamental idea is to first measure the performance and, if required, set a strategy for performance improvement. The detailed discussion on the methodology is listed in a subsequent section.\nPhase 1. This phase is related to the manufacturing performance measurement. This will involve identifying the parameters and classifying them into objective and quality factors. The objective factors include the cost and time dimensions which can be further classified into effective or ineffective. A structured survey will then be conducted at the organizations included in the study. An OFM and quality measure will then be calculated. The EBG model is used to quantify the manufacturing performance measure. AHP is used to evaluate the manufacturing quality factor measure (QFM) required for the EBG model.\nPhase 2. This phase is related to the manufacturing performance improvement. In this phase, QFD is used to improve manufacturing performance. The basic steps for manufacturing performance improvement include:\n1. Identifying the customer requirements.\n2. Identifying the manufacturing design requirements (MDRs).\n3. Relating customer requirements to MDRs.\n4. Conducting an evaluation of competing manufacturers.\n5. Evaluating the MDRs and development of targets.\nThe processes that have been redesigned need to be implemented in the organization and need to be periodically reviewed so that the overall improvement of the performance quality of the organization can take place. 
The flowchart provided in Figure 1 illustrates the process. 4.1 Manufacturing performance measurement\n5 Model validation by case study: To demonstrate the applicability of the model in manufacturing organizations, we rely on a case study approach on two identical valve manufacturing companies, namely, Unit A and Unit B, located at the Tiruchirappalli Regional Engineering College Science and Technology Entrepreneurs Park in Tiruchirappalli, India. The details of the study are provided below. 5.1 Manufacturing performance measurement\n6 Conclusion: This paper demonstrates the need for a manufacturing PM model. A literature survey has been carried out on the various frameworks and tools used. An integrated closed-loop model to enhance PM has been proposed. The model also provides a means of identifying the current performance of an organization and a methodology for improving it further. An important contribution of this model is that it combines both qualitative and quantitative dimensions of manufacturing performance measurement. The proposed model provides an opportunity to operationalise the relationship among the cost, time and service quality dimensions. Both objective and manufacturing quality factors have been converted into consistent dimensionless indices to measure the system performance. The case study presented in this paper has demonstrated the applicability of the model to support a manufacturing unit. It has shown how performance measures are identified and how they can be calculated for two different units using time, cost and service quality dimensions. The case study proves the usability of the EBG model for the PM process. 
From the SPM, the performance of the manufacturing organizations is analysed and the manufacturing parameters are redesigned using QFD wherever necessary. Improving performance is a never-ending process, and organizations should strive to achieve it in order to attain the optimal level of cost and profit, as well as to increase customer satisfaction and goodwill and gain potential future business. Hence, the process of measuring and redesigning the manufacturing performance measures needs to be monitored and the implementation plans reviewed often. Finally, this model for PM can be extended to the service sector.\n",
"role": "user"
},
{
"content": "Purpose: - The objective of this paper is to develop an integrated model for performance management (PM) of manufacturing industries.\nDesign/methodology/approach: - The proposed integrated model consists of performance measurement by the extended Brown Gibson model, considering the objective and the service quality factors. The quality factor measure has been evaluated using the analytic hierarchy process. On the non-compliance of the performance measures with the satisfactory levels, quality function deployment is used to redesign the existing manufacturing process.\nFindings: - This study provides a way to identify the current performance of an organization and a methodology for further improvement. An important contribution of this model is that it combines both the qualitative and quantitative dimensions of manufacturing performance measurement. Both the objective and manufacturing quality factors have been converted into consistent dimensionless indices to measure system performance.\nPractical implications: - This study has demonstrated the applicability of the model to support a manufacturing unit. It has shown how performance measures have been identified and how they can be calculated for two different manufacturing units using time, cost and service quality dimensions. Improving performance is a never-ending process and organizations should strive to achieve it to attain the optimal level of cost and profit, as well as increase customer satisfaction and goodwill, and gain potential future business. 
Hence, the process of measuring and redesigning manufacturing performance measures needs to be monitored and the implementation plans reviewed often, which is successfully done by this integrated model.\nOriginality/value: - We contend that the integrated model for PM, illustrated with a practical case in this paper makes a contribution to the never-ending process of performance enhancement for both theory and application, and assists in expanding the boundaries of theory and practicality in this area, thus highlighting the novelty of our approach.\n",
"role": "assistant"
}
] |
Recognition of prior learning: opportunities and challenges for higher education
|
[
"Higher education",
"Recognition of prior learning",
"Australian qualifications framework",
"Areas of learning"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Recognition of prior learning (RPL) is a process by which formal learning for recognised awards, informal learning from experience and non-formal learning for uncertificated but planned learning (e.g. workshops/seminars that are uncertificated) are given academic recognition (Garnett et al., 2004). Learning from experience includes knowledge and skills acquired in an unplanned way through life and often as a result of the learning demands of work (both paid and unpaid). The description of such learning as \"prior\" means that it has happened in the past and is often taken to mean prior to entry to a course of study towards a qualification. This paper refers to international developments in RPL and then focuses upon the extensive and innovative use of RPL (especially learning from experience) by Middlesex University and the developing RPL work at the Australian Institute of Business (AIB) in order to highlight some of the key opportunities and challenges of RPL.\nInternational developments in RPL: In Australia, RPL was introduced in 1992 as part of the national framework for the recognition of training. RPL was designed for vocational education and training at polytechnic and adult education colleges. Since then RPL has slowly become accepted for the purpose of any accredited education and training, including higher education.\nRPL: the case of Middlesex University in the UK: Middlesex University is one of the largest higher education institutions in the UK and maintains a strong focus on professional education, having developed from Middlesex Polytechnic. Middlesex University recognises that learning occurs throughout life, especially through active participation in training courses, continuing professional development (CPD) activities and in the workplace. Learning also occurs in non-employment situations such as the home and voluntary work. 
The outcome of this learning may be acquisition of skills and/or knowledge that is comparable to learning gained from taught courses offered by Middlesex University and other institutions of higher education. Such learning has been recognised by (some) professional bodies and, implicitly, in terms of career progression by industrial, commercial, public sector and voluntary organisations.\nRPL: the case of the AIB: Could the lessons of Middlesex University be transferred to a higher education institution in Australia? AIB is one of the most established private higher education institutions in Australia. AIB offers the full suite of business degrees from Bachelor level to doctorate (Master of Management, Doctor of Business Administration and PhD). The mission of AIB reflects the philosophy of its owners: to be a practical business school offering work-applied degrees. The work-applied learning approach is embedded into all courses offered at AIB. The reason for the work-applied approach is its impact on the performance of managers working with real problems and seeking to produce real solutions for their organisation. With a large proportion of practising managers among AIB's student cohort, the impact of the work-applied learning approach for the students is noticeable and immediate (Abraham, 2012).\nApplying RPL for organisational as well as individual development: In this age of the \"knowledge economy\", the belief is widely expressed that organisations are only as good as their people. The concept of intellectual capital (Stewart, 1997) singles out knowledge and skills as an organisation's primary assets. Individual knowledge forms the basis for communication of information to others who will then make sense of it in the light of their own personal knowledge. For individual knowledge to become organisational knowledge and thus fully contribute to the intellectual capital of the organisation, it must be shared and accepted by others. 
This may be problematic as individual knowledge is often unrecognised not only by the organisation but often by the individual holding the knowledge. In these cases, the knowledge is \"tacit\" and its use within the organisation is limited. It follows that a key concern for organisations must be facilitation of the recognition of knowledge, that is, making tacit knowledge explicit so that it can be shared within the organisation. In other words, does the organisation know how good its people are and take full advantage of their knowledge and skills? And can the organisation make its people even better?\nConclusion: In summary, although RPL has been practiced in higher and vocational education in the UK for 30 years, it may still be under-used. The growing significance of workforce development and the development of work based and work-applied learning programmes has demonstrated how to extend the use of RPL. The Middlesex approach uses RPL as the starting point for developing programmes customised to the needs of the individual and their organisation thus shifting the criteria for recognition of learning achievement from matching a prescribed existing university course to learning of value to the individual and their organisation. Some higher education providers are not in a position to offer highly customised programmes like those offered by Middlesex. However, AIB and other higher education providers can learn much from the Middlesex approach about the processes and procedures to assess individuals for admission or credit based on RPL. Australia has the opportunity to build on substantial overseas experience of RPL and to extend current practice to the level of flexibility achieved by higher education providers such as Middlesex University. 
If Australian institutions learn from the overseas experience and overcome the factors that have contributed to the under-use of RPL then Australian RPL developments have the potential to help position Australia as a significant player in the education industry worldwide.\n",
"role": "user"
},
{
"content": "Purpose: Recognition of prior learning (RPL) is a process by which formal learning for recognised awards, informal learning from experience and non-formal learning for uncertificated but planned learning are given academic recognition. The paper aims to discuss these issues.\nDesign/methodology/approach: This paper refers to international developments in RPL and then focuses upon the extensive and innovative use of RPL by Middlesex University and the developing RPL work at the Australian Institute of Business.\nFindings: The Middlesex experience of recognition of learning from experience as part of the development of customised work-based learning programmes demonstrates the potential of RPL for business and management programmes.\nOriginality/value: The use of RPL for admission and/or credit in standard programmes enables individuals to have their work-based knowledge acknowledged as relevant, worthwhile and equivalent to learning obtained in the higher education classroom.\n",
"role": "assistant"
}
] |
Evaluating the effectiveness of a community-based dietary intervention in Nottingham
|
[
"Evaluation",
"Health education",
"Nutrition",
"Cooking skills",
"Healthy eating",
"Dietary intervention"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Dietary intake and eating behaviours in England are related to socioeconomic position. People from lower socioeconomic groups tend to have diets that are less healthy than people from higher socioeconomic groups (Public Health England, 2013). Those living in areas of high deprivation are more likely to lack nutritional knowledge and cooking skills (Caraher et al., 1999; Darmon and Drewnowski, 2008). A lack of nutritional knowledge and cooking skills is associated with an unhealthy diet, including reduced consumption of fruit and vegetables (Dickson-Spillman and Siegrist, 2011), and can influence increasing levels of overweight and obesity (Butland et al., 2007).\nMethod: Over the course of the study duration there were ten programmes delivered with a total of 69 participants. This evaluation used a mixed methods design. Data were collected at different points in time using varied methods such as self-reported questionnaires, semi-structured interviews and focus groups. The quantitative component reflects the most usual evaluation method of community-based dietary intervention as undertaken by practitioners delivering such services (Garcia et al., 2013; Barton et al., 2011). A self-reported questionnaire was used at pre-intervention and post-intervention which measured fruit, vegetable, fat and sugar consumption. The fruit and vegetable questions used in the pre- and post-questionnaire were posted to all participants at the six months' follow-up stage.\nEthical considerations: As an evaluation of a community-based dietary intervention, the proposal did not require formal ethical approval from an NHS Research Ethics Committee. The primary research in this paper was undertaken as part of a postgraduate course at Sheffield Hallam University and the research proposal was considered and approved via the university's ethical procedures. 
Informed consent to take part in the study was sought from past participants of the EWL programme. Participants were invited to take part in the evaluation and were free to withdraw from the evaluation process at any time without giving a reason. Collection and storage of data throughout the evaluation process followed information governance standards, and data were anonymised to ensure confidentiality throughout the study.\nResults: Quantitative data\nDiscussion: The evaluation of the EWL programme provides evidence of improvements in healthy eating both immediately on completion of the course and at the 3, 6 and 12 months' follow-up stages. Data also point to evidence of positive and sustained changes in participants' cooking and eating behaviours. This evaluation contributes to the fairly limited literature on the longer term impact of these types of community dietary interventions.\nConclusion: Overall the findings suggest that EWL has a positive role in influencing change in dietary habits and behaviour and maintaining these at post-intervention. Although some of the changes have been small, they have nonetheless made a significant and wider impact; this potentially provides evidence for the usefulness of these interventions in any public health strategy to promote and improve nutritional knowledge, cooking skills and therefore dietary behaviour. Community-based dietary and cooking skills interventions can provide one approach to improve dietary behaviours over the longer term and contribute to public health approaches to obesity prevention and reducing health inequalities.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to evaluate the effectiveness of the Eatwell for Life (EWL) programme, with a particular focus on longer term effectiveness in terms of dietary behaviour and the wider impact. EWL is a six-week community-based dietary intervention which aims to increase nutritional knowledge, cooking confidence and provide the necessary skills to support behavioural change in relation to eating a balanced diet. There have been many evaluations of community-based dietary interventions, but most focus on brief measures and changes examined at the end of each course.\nDesign/methodology/approach: A mixed method evaluation was conducted using a self-reported questionnaire, focus groups and semi-structured telephone interviews. A follow-up evaluation was conducted at 3, 6 and 12 months with a purposive sample of EWL participants.\nFindings: A total of 66 participants completed both pre- and post-intervention questionnaires. A total of 22 participants took part in the qualitative follow-up evaluation. The mixed method evaluation demonstrates improvements in participants' fruit and vegetable consumption and a reduction in participants' sugar consumption. Qualitative data highlight key themes such as \"cooking from basic ingredients\", \"knowledge of key healthy eating messages\", \"changes in eating, cooking and shopping habits\" and \"wider influences on family and friends' diets\".\nOriginality/value: This paper is useful to public health nutritionists and other practitioners delivering community-based dietary and cooking skills programmes and those commissioning such provision. It contributes to existing evidence of sustained change over time targeting those in areas of high deprivation.\n",
"role": "assistant"
}
] |
Quality of patient information in emergency department
|
[
"Accuracy",
"Accessibility",
"Data quality",
"Emergency department",
"Timeliness",
"Hospital information system"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Data are one of the key resources currently used in organizations (Wongsim and Gao, 2011). High-quality data are those that are appropriate for use by the customer (Leitheiser, 2001; Almutiry et al., 2013; Xu et al., 2003). The quality of data is a key factor in determining the level of healthcare in a hospital, and its improvement entails an improved quality of health and treatment and ultimately increases patient satisfaction. The quality of health services depends on the quality of hospital information, and the quality of information is a competitive element in the development of organizations (Ratnaningtyas and Surendro, 2013). There is evidence of substantial errors in the data stored in healthcare information systems (Lin and Chan, 2000). Good and timely data from health information systems form the basis of the entire health system (Nutley and Reynolds, 2013).\nMethod: The present cross-sectional study was conducted in 2016. The study population consisted of the health records of the emergency patients and the information system of the 640-bed Shahid Beheshti Hospital in Kashan. The study sample for the emergency patients consisted of 385 randomly selected health records. The researcher and an experienced nurse with more than five years of experience in emergency departments visited this department's admission and discharge unit immediately after the patients' discharge and billing in morning, afternoon and night shifts on work days as well as holidays, and assessed ten records per work shift over a period of three months and collected the data. This study assessed five dimensions of quality, including accuracy, accessibility, timeliness, completeness and definition. Data were collected using a researcher-made checklist. 
The information elements of the initial checklist were determined and confirmed based on the views expressed by three experts in health information management, a number of scientific articles, reviews of literature (Innes et al., 2001; Pollock et al., 1998; Department of Health and Human Services, 2017) and the assessment of data elements in the hospital information system in question. The checklist data were divided into five groups, including admission (26 items), laboratory (1 item), nursing (14 items), medical secretary (15 items) and financial affairs (25 items). Of the five dimensions of data quality, three were placed in one checklist (namely, accuracy, completeness and timeliness) and two in another (data accessibility and definition). Each dimension of data quality was assessed as follows:\nFindings: According to the results, the overall quality of emergency patients' information in the information system of Shahid Beheshti Hospital in Kashan was 86 percent, and of the five dimensions of data quality, accuracy had the highest (87.7 percent) and accessibility the lowest (62.1 percent) frequencies (Table I).\nDiscussion: According to the results, the quality of emergency patients' information in the hospital information system was good (86 percent). Of the five dimensions of data quality, accuracy (87.7 percent) was good and accessibility (62.1 percent) was moderate. In a study by Stewart et al. (2001), data quality was reported as favorable. In a similar study by Manya and Nielsen (2016), the quality of a health information system was reported as moderate for accuracy, completeness and timeliness. The quality of information was also at a moderate level in a study by Saghaeiannejad Isfahani et al. (2011). Other studies reported an inappropriate quality of data in hospital information systems (Glele Ahanhanzo et al., 2014; Vafaee et al., 2010; Garrib et al., 2008; Jebraeily et al., 2015; Porter et al., 2006; Ammenwerth et al., 2007; Kihuba et al., 2014). 
The disparity of findings between the present study and similar studies appears to be due to the present study's use of health information technology experts as the hospital information system users in the admission and emergency departments of the selected hospital and the ongoing assessment of the records in the hospital information system by the hospital deduction team, including medical record experts, nursing professionals and health IT experts. By supporting clinical decisions, good data quality leads to an improved quality of care (Kihuba et al., 2014), and diagnostic data can provide the basis for many health activities, such as quality care and reimbursement (Peabody et al., 2004). Human factors are one of the main factors in the successful implementation of health data systems (Barzekar and Karami, 2014). To improve the quality of hospital information systems, it is recommended that graduates in health IT be employed for data entry, for working with the system and on the supervising hospital deduction team.\nConclusion: According to the results, data quality was good, and the accuracy, completeness, timeliness and definition dimensions were in a favorable state, while accessibility was in a moderate state. In terms of the quality of data, the laboratory produced poorer data with regard to accuracy and timely recording compared to other departments studied. Completeness and definition of data in the admission department were inferior compared to other departments. Physicians had poorer access to information compared to other groups. Laboratory technicians appear not to take adequate care in properly recording data, and they lack sufficient knowledge, time or motivation for early recording of data. In the admission department, congestion of patients or their escorts in the emergency department of the study hospital (the only central hospital of the region) can be one of the reasons for failing to complete all admission information items. 
Moreover, poorly defined admission information slows down the process and causes congestion of patients and their escorts in the emergency department. Regarding physicians' access to information, per-case payment to physicians makes them prefer to visit and treat a greater number of patients. Furthermore, a lack of modern information technology at patients' bedside can be another reason for insufficient motivation to use the hospital information system, and for physicians' preference for paper medical records.\n",
"role": "user"
},
{
"content": "Purpose: Data comprise one of the key resources currently used in organizations. High-quality data are those that are appropriate for use by the customer. The quality of data is a key factor in determining the level of healthcare in hospitals, and its improvement leads to an improved quality of health and treatment and ultimately increases patient satisfaction. The purpose of this paper is to assess the quality of emergency patients' information in a hospital information system.\nDesign/methodology/approach: This cross-sectional study was conducted on 385 randomly selected records of patients admitted to the emergency department of Shahid Beheshti Hospital in Kashan, Iran, in 2016. Data on five dimensions of quality, including accuracy, accessibility, timeliness, completeness and definition, were collected using a researcher-made checklist and were then analyzed in SPSS. The results are presented using descriptive statistics, such as frequency distribution and percentage.\nFindings: The overall quality of emergency patients' information in the hospital information system was 86 percent, and the dimensions of quality scored 87.7 percent for accuracy, 86.8 percent for completeness, 83.9 percent for timeliness, 79 percent for definition and 62.1 percent for accessibility.\nOriginality/value: Increasing the quality of patient information at emergency departments can lead to improvements in the timely diagnosis and management of diseases and patient and personnel satisfaction, and reduce hospital costs.\n",
"role": "assistant"
}
] |
Managing religious diversity in secular organizations in France
|
[
"Diversity management",
"Religious diversity",
"Secularism",
"France"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Religion is an important but understudied component of diversity management (Day, 2005) and researchers have called for more research on this topic (e.g. King et al., 2009). European countries are increasingly confronted with religiosity in the workplace, leading to questions and tensions regarding human rights and organizational arrangements (Bader et al., 2013). Organizations have to deal with the delicate act of finding the balance between complex and sometimes conflictual issues such as freedom of expression, workplace accommodations and anti-discrimination legislation, while avoiding feelings of injustice amongst their workers. Indeed, religious expression in the form of dress, veils or crosses in the workplace is an aspect of diversity that needs to be managed if one wants to ensure that the aforementioned tensions do not escalate into conflicts that could lead to negative outcomes for organizations and potential losses in employee performance and motivation.\nLiterature review: Diversity management\nMethodology: An inductive, interview-based methodology has been used to examine the understudied phenomenon of how managers deal with the notion of secularism on the one hand and the reality of religious diversity at work on the other hand. We developed this methodology using the guidelines for qualitative methodology outlined by Miles and Huberman (1994) and Strauss and Corbin (1990).\nFindings: This paper examined the way in which managers deal with religious diversity in secular organizations with a specific focus on France. The findings reveal three distinct strategies. First, we identified a \"flexibility within the rules\" strategy in which managers try to accommodate religious practices by making allowances and creating mutual understanding and trust. Second, a separation strategy emerged in which managers keep work and religion clearly separated. 
Those managers expressed a strong adherence to rules and perceived the implementation of allowances to be difficult not only for their own organization but also in light of third parties with whom they worked. Third, the findings reveal a \"common-ground\" strategy in which managers stressed the communalities between individual workers, downplayed their differences and created a strong corporate culture to which all employees could relate. A strong vision and clear corporate values that everyone had in common helped managers achieve this goal. The different aspects are explained in more detail below. Table II provides additional illustrative quotes for each key theme.\nDiscussion: As religious diversity is increasing in the workplace (Eurofound, 2017), it is important to better understand how managers deal with this in the context of secular organizations. Managers who have to deal with diversity issues often encounter \"clashes\" in which they have to handle contradictory aspects of managing diversity (Hennekam and Tahssain-Gay, 2015; Hennekam et al., 2017) and managing secularism and religion is one of such clashes. The present study provides new insights by the identification of different strategies.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine how managers deal with religious diversity in secular organizations in France.\nDesign/methodology/approach: In total, 28 semi-structured in-depth interviews with managers in France were conducted, transcribed and analyzed.\nFindings: The findings reveal three distinct strategies. First, the authors identified a \"flexibility within the rules\" strategy in which managers try to accommodate religious practices by making allowances and creating mutual understanding and trust. Second, a \"separation strategy\" emerged in which managers keep work and religion clearly separated. Those managers expressed a strong adherence to rules and perceived the implementation of allowances to be difficult not only for their own organization but also in light of third parties with whom they worked. Third, the findings reveal a \"common-ground\" strategy in which managers stressed the communalities between individual workers, downplayed their differences and sought to create a strong corporate culture to which all employees could relate.\nPractical implications: The expression of religious beliefs in the workplace is increasing. However, little is known about how managers deal with the perceived clash of secularism and the presence of different religions in the workplace. Implications for managers, such as taking into account perceptions of justice and practical issues as well as the importance of communication and education, are discussed.\nOriginality/value: Religion is a deep-level and understudied aspect of diversity management that deserves more attention given the increase in religious diversity in the workplace.\n",
"role": "assistant"
}
] |
Unredeemed gift cards and the problem of not providing customers with value
|
[
"Retailing",
"Money",
"Consumer behaviour",
"Brand equity"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Attempting to quiet my non-academic friends and to maintain a tenacious grip on the \"real world,\" I recently attended a practitioner's conference on the rapidly growing business in gift cards and other prepaid services. The other attendees were decidedly upbeat as consumers in various market segments seem to like giving up their money now in order to receive benefits at a later date. The economist and financier within recoils from this notion, because money has a time value and is worth more today than tomorrow. Yet, it was predicted that over $70 billion in prepaid gift cards alone would be sold during 2006 (American Express, 2006), with some seeing the figure approaching $100 billion in 2007. People are apparently abandoning their finance lessons, handing retailers, phone companies and banks their money for the promise of future goods and services.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to demonstrate how the \"money for nothing\" attitude about gift card non-redemption, that pervades retailing, directly conflicts with a customer focus. Further, it aims to show how short-term financial benefits are completely offset by long-term increases in indirect costs and damage to brand equity.\nDesign/methodology/approach: - The paper uses examples of behavior to demonstrate common attitudes of practitioners revolving around gift card issuing and redemption. Additionally, the use of simple case study brings into question the common assumptions about the short- and long-term effect on organizational performance of non-consumer-centric policies.\nFindings: - The paper finds that claims that retailers get \"money for nothing\" on unredeemed gift cards are illusionary. The actual costs of non-redemption are quite high because the retailer misses the opportunity to bring both existing and, importantly, new customers into the retail environment. Further, brand equity may suffer if consumers feel that retailers have taken advantage of them. Brand equity damage can grow exponentially if media and governmental interests step into the fray.\nOriginality/value: - The real issue is whether retailers pay lip-service to their gift card-holding public or really embrace the potential value that a gift card-holding customer generates. Long-term benefits accrue to organizations that reject the \"money for nothing\" mentality and actively try to attract gift card shoppers.\n",
"role": "assistant"
}
] |
Investigating factors of intellectual capital to enhance achievement of strategic goals in a university hospital setting
|
[
"Health care",
"Intellectual capital",
"Management",
"Accounting",
"Reports"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The expectations of the community for health services delivery by health providers have been progressively increasing during the last decades, strengthening the demand for quality and outcomes (Schwartz and Pogge, 2000). This strengthening demand requires extra financial and organisational efforts from health systems (Bordignon and Turati, 2009). However, the budget constraints of countries' health systems keep the tension between the cost of health services provision and the ability to deliver high quality, specialised and personalised treatments high. In this context, countries have often redesigned the organisational model of their health system and revised the role and goals of the various actors (Shortell et al., 1994; Ruef and Scott, 1998; Lukas et al., 2007). Managers of health systems search for more effective ways to deliver care (Ferlie and Shortell, 2001). In the past, the focus was on constraining growth in the cost of care, whereas emphasis is now being given to improving the quality and outcomes of care. Ferlie and Shortell (2001) report that the essential factors required for quality improvement are: leadership at all levels, a culture supporting learning throughout the care process, effective teams' development and greater use of information technologies. Thus, to meet the challenge of quality improvement, health care organisations will have to rely heavily on developing their ability to generate and manage knowledge.\n2. Literature review: The accounting literature has strongly advocated IC visualisation for management control and strategic management (Mouritsen et al., 2001a; Roos et al., 2001; Choo and Bontis, 2002). In recent years, the relevance of IC accounting research has been confirmed with regard to the progressive focus on the management control and strategy area (Guthrie et al., 2012). 
In reference to the setting of knowledge-based organisations, the criticalities associated to the implementation of management control and strategic control systems have been pointed out (e.g. Widener, 2004). One basic issue is related to the fact that the knowledge which is necessary to build systems in such organisations often resides in individuals. Thus, extracting knowledge and defining a representation of its dimensions to enhance the strategic and managerial process of the organisation is a challenging pathway.\n3. Design of the study: The paper aims first at developing and applying an IC framework to enhance the visualisation of strategic IC characterising the management of university hospitals. Second, it aims at enhancing the use of IC visualisation for strategic management. The study also allows testing of an IC framework for investigating the factors that are the foundation of the integration between hospital and university.\n4. The problem to be solved: Following national and regional policies, since 2012 the local province has been engaged in a revision of the localisation, volume and quality of health services delivery. In consideration of innovations in technology and medical science, the new policies stem from the idea that increased efficiency would be achieved through the concentration of services and economies of scale. Thus, the number of hospital beds had to be decreased, the number of local hospitals reduced, the supply process had to be managed jointly with other health care organisations in the region to improve efficiency and investments had to concentrate on recognised excellences (i.e. on those services/processes that expressed the ability to innovate and to deliver highly specialised care).\n5. The IC-based report definition: An IC-based report was discussed in regard to the relevance of the strategic gap and the intangible nature of the value drivers that emerged from the data collection. 
The IC-based report emerged as the solution to the identified problem. So far, the research process had enabled us to identify several drivers, such as the governance structure; the professionals and the career paths; the relations with scientific societies and professional organisations; and the relation with the local health unit, with the volunteer association, and with other academics beyond those affiliated to the UH. Furthermore, relevance towards strategy formulation and value creation was assigned to the research outputs, the process supporting knowledge creation, the teaching outputs and the impact of scientific research on health care activity.\n6. Discussion: The study enabled some primary results; a critical issue was unfolded, the problem diagnosed, the solution identified, adopted and evaluated. To that end, IC accounting knowledge has been relevant: it contributes to the development of organisational practice.\n7. Conclusions: This paper is based on an action research project focusing on the development of IC practice for strategic purposes in the real-life setting. The studied university hospital investigated its organisation's process to identify its value drivers and relocate its strategy management, using the IC approach. The project allowed observation of some changes in the organisation's practice, in terms of management accounting, strategic management process and communication with institutional stakeholders.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to report on an action research project carried out in an Italian university hospital that was facing a strategic challenge. The role of intellectual capital (IC) for university hospital strategic management is discussed after developing and applying an IC framework to enhance the visualisation of strategic IC elements.\nDesign/methodology/approach: An action research process has been applied in the studied setting based on Susman and Evered's (1978) definition of the engaged research cycle.\nFindings: The action research process allowed a gap between theory and practice to be bridged; the strategic control process resulted supported by new measures; a different approach to strategy management was launched, and other organisations perceived the relevance of the IC representation and wished to import it.\nResearch limitations/implications: Research limitations are related to those recognised for the interventionist research approach.\nPractical implications: The paper contributes to the improvement of managerial and accounting technologies for practitioners managing university hospitals and discusses a university hospital's strategic goals.\nOriginality/value: The paper represents a methodological contribution related to the interventionist research stream of literature, and enriches the limited studies focused on IC in health care organisations. Furthermore, the paper enables appreciation of the role of academics in the convergence of theory and practice.\n",
"role": "assistant"
}
] |
A comparative data analytic approach to construct a risk trade-off for cardiac patients' re-admissions
|
[
"Clustering",
"Decision support systems",
"Healthcare management",
"Data mining",
"Business analytics"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: 1.1 Motivation\n2. Proposed methodology: The methodology adopted in this study can be split into four sequential phases, as can be seen in Figure 1. Phase 1 consists of three steps, of which data acquisition represents the first step. In the second step, the data set is cleaned (missing and/or invalid values, outliers, etc., are dropped), and re-coded. Potentially important variables are selected (for deployment in the classification process) in the third step by applying statistical methods. Phase 2 represents the model training phase, which consists of three sequential steps: cross-validation, where the entire data set is split into five mutually exclusive folds in order to decrease the bias and increase the robustness and the reliability of the models; application of resampling techniques on the training sets; and training of the prediction models. In phase 3, the prediction algorithms are tested and the best model among the ones employed is selected based on certain measurement criteria. After the selection of the best model, patient-specific risk scores are developed and finally patients are classified/grouped into five risk groups in Phase 4 via applying a clustering-based data mining method. Additional information on each of these phases and steps are provided in the associated sub-sections of Section 2.\n3. Results and discussion: 3.1 Feature selection results\n4. Conclusions and future research: Andel et al. (2012) claim \"quality care is less expensive care. It is better, more efficient, and by definition, less wasteful. It is the right care, at the right time, every time. It should mean that far fewer patients are harmed or injured.\" Capitalizing on the large amounts of data that hospitals generate, methods such as predictive data mining can play an important role in advancing undertakings in this direction that may have been impossible just a decade ago. 
Such methods have shown positive abilities in modeling highly complex data-rich phenomenon, such as in the field of medical informatics. In our study, we have formulated a data mining methodology that enables hospitals and medical decision makers to provide patients better care by utilizing available resources more efficiently and, at the same time, decrease the costs to and penalties from Medicare. Our study has contributed to existing work done in the field, by applying machine learning models for predicting readmissions, yielding the highest predictive precision to the problem to our knowledge; obtaining a patient-specific risk score; and grouping/stratifying the patients into various risk groups (from very low to very high risk) to finally facilitate decision makers in interpreting the predictions and finding a trade-off between risks, costs and resources. Therefore, the real contribution of the current study is not the classification methods that have been employed, but it is rather the end-product provided to the medical decision makers. Given the probabilistic score that is obtained, the medical experts will be able to confidently rely on the score provided, while there will be a small percentage of the outcomes that the medical experts should use their own intuition/incentive to make the final decision. To summarize, in this study, we started out with the following questions:\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to formulate a framework to construct a patient-specific risk score and therefore to classify these patients into various risk groups that can be used as a decision support mechanism by the medical decision makers to augment their decision-making process, allowing them to optimally use the limited resources available.\nDesign/methodology/approach: A conventional statistical model (logistic regression) and two machine learning-based (i.e. artificial neural networks (ANNs) and support vector machines) data mining models were employed by also using five-fold cross-validation in the classification phase. In order to overcome the data imbalance problem, random undersampling technique was utilized. After constructing the patient-specific risk score, k-means clustering algorithm was employed to group these patients into risk groups.\nFindings: Results showed that the ANN model achieved the best results with an area under the curve score of 0.867, while the sensitivity and specificity were 0.715 and 0.892, respectively. Also, the construction of patient-specific risk scores offer useful insights to the medical experts, by helping them find a trade-off between risks, costs and resources.\nOriginality/value: The study contributes to the existing body of knowledge by constructing a framework that can be utilized to determine the risk level of the targeted patient, by employing data mining-based predictive approach.\n",
"role": "assistant"
}
] |
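The readmission row above walks through a concrete pipeline: five-fold cross-validation, random undersampling to fix class imbalance, model training, patient-specific risk scores, and k-means grouping of those scores into five risk bands. The two less standard steps can be sketched in plain Python; this is a minimal illustration, not the paper's implementation, and the `readmitted` label key and function names are assumptions for the example:

```python
import random

def undersample(rows, label_key="readmitted", seed=0):
    """Random undersampling: keep all minority-class rows, sample the majority down."""
    pos = [r for r in rows if r[label_key] == 1]
    neg = [r for r in rows if r[label_key] == 0]
    minor, major = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    rng = random.Random(seed)
    return minor + rng.sample(major, len(minor))

def kmeans_1d(scores, k=5, iters=50):
    """Plain 1-D k-means over risk scores; returns k sorted cluster centers."""
    srt = sorted(scores)
    step = max(1, len(srt) // k)
    centers = srt[::step][:k]
    while len(centers) < k:          # pad if there are too few starting points
        centers.append(centers[-1])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in scores:
            nearest = min(range(k), key=lambda j: abs(s - centers[j]))
            groups[nearest].append(s)
        # recompute each center as its group's mean; keep old center if empty
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

def risk_group(score, centers):
    """Index of the nearest center: 0 = lowest risk band, k-1 = highest."""
    return min(range(len(centers)), key=lambda j: abs(score - centers[j]))
```

With predicted probabilities in hand, `risk_group(p, centers)` stratifies each patient from very low to very high risk, mirroring the five groups the abstract describes; a production version would use a library clusterer, but the mechanics are the same.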
Measures of marketing performance: a comparative study from Spain
|
[
"Tourism management",
"Hospitality management",
"Manufacturing industries",
"Performance measures",
"Marketing strategy",
"Spain"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Despite the importance of measuring business performance (see, for example, Meyer, 1998), there is little research on the measures used to evaluate marketing effectiveness[1]. Due to this, The Marketing Science Institute made this question one of its research priorities since the year 2000. Reasons for the lack of research marketing metrics include the complexity of unravelling short- from long-term effects (Dekimpe and Hanssens, 1995), the difficulties of measuring brand equity (Marketing Leadership Council, 2001) and the perhaps excessive importance business management ascribes to financial measures (Eccles, 1991; Kokkinaki and Ambler, 1999).Given that a firm's survival depends on its capacity to create value, and value is defined by customers (Day, 1990), marketing makes a fundamental contribution to long-term business success. Therefore evaluating marketing performance is a key task for management. However, businesses that concern themselves with rigorous evaluation of marketing results are in the minority (Ambler, 2000).The present research examines further the differences in marketing metrics between the Tourism and Hospitality and Industrial Spanish firms. Second, we analyse business orientation as antecedents in the marketing performance measurement.The paper is structured in four sections. The first lays out the theoretical framework and formulates hypotheses. The following section sets out the methodology, then the principal results are described. The last section draws together conclusions and implications of the present research.\n2. Theoretical framework: 2.1. Measurement of marketing effectiveness\n3. Methodology: For this research we conducted a postal survey addressed to the chief executive officer of 500 firms in Spain with an income higher than EUR15 million (150 Tourism and Hospitality companies and 350 Industrial companies). 
Travel agencies, hotels and others constitute the Tourism and Hospitality category, while the Industrial category is made up of companies from Textile and Clothing, Electric Machinery, Telecommunication and Computer Machinery.Table I shows the details of the data collection process. A sample of 174 CEOs completed the questionnaire that included items to be answered in a seven-point Likert scale for customer and competitor orientation scales and a five-point Likert scale for the measurement of business performance. Of the final sample, 42.8 per cent is made up of businesses with incomes lower than EUR60 million. The rest are businesses with incomes greater than EUR60 million.Customer orientation was determined with a six-item scale and competitor orientation with a five-item scale adapted from the one developed by Narver and Slater (1990). A Cronbach reliability test was conducted on the survey data. The results in Table II show that both scales present coefficients near to the level recommended, that is, 0.7.Marketing success measures are captured by the six categories established by Kokkinaki and Ambler (1999):1. financial measures (as turnover, contribution margin and profit);2. competitive market measures (as market share, advertising and promotional share);3. consumer behaviour measures (as consumer penetration, loyalty and customer gained);4. consumer intermediate measures (as brand recognition, satisfaction and purchase intention);5. direct costumer measures (as distribution level, profitability of intermediaries and service quality); and6. innovativeness measures (as products launched and their revenue).\n4. Results: In this section, we show the results obtained regarding marketing metrics and, later, we test the hypotheses on the relationship between business orientation and the importance of the different categories of metrics.4.1. 
Descriptive analysis of the measurement of marketing performance for tourism and hospitality firms and for industrial firms\n5. Conclusions and implications: In the empirical study we have found notable differences between Tourism and Hospitality and Industrial managers in the way their firms assess marketing effectiveness. In particular, the results indicate:* There are some disparities in the system for evaluating marketing performance. First, T&H managers seem to be more satisfied with the measures used of marketing effectiveness than industrial managers. Second, Tourism and Hospitality managers ascribe more importance to consumer behaviour and intermediate metrics than industrial managers. The leading role of the financial measures is shared with the consumer and intermediate measures as penetration level, loyalty level, customer satisfaction or new customers gained for the T&H managers, while this leadership of financial measures is maintained for the industrial managers. In this sense, Tourism and Hospitality managers appear to have internalised the marketing concept more coherently than their Industrial counterparts. As Levitt (1983) indicates, profits are not the main goal of marketing, the main goal of marketing is to satisfy customers. Hence to measure marketing success businesses should base themselves above all on criteria relating to customers (Bhargava et al., 1994).* There are differences in the type of benchmark used for measuring marketing performance. Among industrial businesses, the main benchmark is the previous year, while for the Tourism and Hospitality firms, the marketing plan is used more often, especially in the case of consumer behaviour and consumer intermediate measures. Both sectors place less emphasis on comparison of results with the competition than, e.g. Eccles (1991) suggests.* There are few differences in the frequency of financial measurement of marketing assets. In both type of business they are usually evaluated annually. 
As Barreda (1999) indicates, quantifying increases and decreases in these assets is an essential part of measuring marketing effectiveness.* In both types of companies customer and competitor orientation coefficients have a positive effect on the different categories of measures analysed. Specifically there is a close relationship between customer orientation and the categories of measures focused on the consumer. As was foreseeable, we have also confirmed the positive relationship between competitor orientation and evaluation of marketing success from measures based on competitors. Therefore, in general we found a close association between business orientation and the way it assesses its marketing performance.In light of these identified shortcomings, we indicate some implications for T&H managers in their task of assessing marketing effectiveness. First, managers have to be aware of the importance of an accurate measurement of marketing performance. The aim of this measurement is to provide knowledge and understanding of the reason for and consequences of any particular marketing decision. So the measurement of marketing performance has to be considered as a regular and systematic process for managers. The main results of this research highlight that the source of this process must be in a call to customer and competitor orientation of the firms.In the systematic process indicated, accuracy is in a high frequency in the measurement process and a broad financial and non-financial metrics. Moreover the characteristics of T&H business imply that it is necessary to pay special attention to the consumer metrics,that is, regularly measuring the customer satisfaction level, consumer loyalty, new customer gained, customer retained or customer lost. T&H managers have to give these consumer metrics priority in the way it assesses its marketing performance.This work has various limitations. 
This research is exploratory and relies on subjective rather than objective measurement of business performance, although this is not uncommon in the literature and is justified by reference to, e.g. Dess and Robinson (1984). For future research we suggest refining categories to individual metrics and considering elements of the marketing mix as well as marketing as a whole. Finally, we believe it would be very useful to investigate these issues in the context of new technologies.\n",
"role": "user"
},
{
"content": "Purpose: - Despite the importance of assessing business performance, there is limited research on the ways of measuring marketing effectiveness. Aims to redress this issue.\nDesign/methodology/approach: - This paper focuses on six categories of marketing effectiveness measures to compare the ways of measuring marketing performance in two groups of Spanish firms (Tourism and Hospitality firms and Industrial firms) in an exploratory study; and also studies the effect of business orientation (customer and competitor orientation) on the measures used to evaluate marketing performance.\nFindings: - It was found that consumer-based measures have a leading role in the evaluation of marketing effectiveness in Tourism and Hospitality companies and customer orientation affects the importance of these consumer-based measures.\nResearch limitations/implications: - Research is exploratory and subjective measures of business performance were used.\nPractical implications: - The main implications are addressed to the Tourism and Hospitality managers for improving the ways of measuring marketing effectiveness.\nOriginality/value: - This is a study of the measurement of marketing effectiveness in Tourism and Hospitality companies in comparison with the industrial counterparts.\n",
"role": "assistant"
}
] |
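The marketing-metrics row above reports Cronbach reliability coefficients "near to the level recommended, that is, 0.7" for its customer- and competitor-orientation Likert scales. As a reminder of what that statistic computes, here is a minimal stdlib-only sketch (the sample ratings below are invented for illustration, not the study's data):

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of respondent ratings per scale item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))
```

Identical items yield an alpha of exactly 1, while unrelated items drive it toward 0 (or below), which is why roughly 0.7 is the conventional floor for treating a multi-item scale as internally consistent.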
Board structure and ownership in Malaysia: the case of distressed listed companies
|
[
"Boards of Directors",
"Chief executives",
"Corporate ownership",
"Financial performance",
"Corporate governance",
"Malaysia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In recent years, criticisms have been targeted at corporate governance especially calling for reforms of the board of directors (Geneen, 1984; Kesner et al., 1986, Lorsch, 1989; Levitt, 1998). The argument is that the board of directors has not discharged its fiduciary roles effectively. In Asia, in the aftermath of the 1997 financial crisis, most Asian countries have sought to strengthen the corporate governance, transparency and disclosure levels (Ho and Wong, 2001).This paper seeks to examine whether board structure and ownership structure are associated with financial distress in Malaysia. The primary contribution of this study is the setting in which it is conducted, in which it examines the role of corporate governance on financial distress status within an emerging economy. Unlike the previous studies, which were done in North America or the UK, in this emerging economy, it is predicted that the market for corporate control is weak (Mak and Li, 2001). Significant efforts to improve corporate governance in Malaysia were only seen during the 1997-1998 crisis with the publication of the Report on Corporate Governance in 1999. Furthermore, concentrated shareholdings by individuals and families together with significant equity holdings by the government are two features that distinguish the ownership patterns of Malaysian companies from companies in the USA or in the UK, which could add complication to the corporate governance. This ownership pattern presents corporate governance issues, particularly ones that involve minority shareholders, that may not be similar to the ones peculiar to companies in those developed countries. In addition, the fact that Chinese and Malays dominate the economics and politics in Malaysia respectively provides a unique research setting that is not found elsewhere. 
The separation results in the regulators being predominantly Malays and the businesses being owned by the Chinese. Therefore, findings from this study should add to the understanding of institutional differences, which directly affect the operations of the boards of directors. Shleifer and Vishny (1997, p. 740) state that \"Most of the available empirical evidence [on corporate governance] ... comes from the United States ... Unfortunately, except for the countries just mentioned, there has been extremely little research done on corporate governance around the world ...\" Another contribution is that the definition of financial distress adopted by this study is different than the one used in previous studies (e.g. Elloumi and Gueyie, 2001) where this study adopts the definition of financial distress as used by the Bursa Malaysia. Companies facing financial distress are expected to face a number of corporate governance issues, particularly on the issue of directors' effectiveness in discharging their monitoring roles.This paper proceeds as follows. In the next section, the theoretical development will be covered, followed by the methodology section. The empirical results will be presented in the subsequent section, followed by the discussion section. Finally, the summary and conclusion section will follow.\nTheoretical development: Financial distress and corporate governance\nMethodology: In this study, financial distressed companies are defined as those that are classified in the Bursa Malaysia PN4 sector. Following the 1997 crisis, the Bursa Malaysia issued the Practice Note 4 in 2001, which outlines the criteria used by the Bursa Malaysia to identify companies that are required to regularize their financial conditions. Failure to regularize their financial conditions within the stipulated time results the company being de-listed. The deadline given to these companies was 31 December, 2002. 
Four criteria have been outlined by the Bursa Malaysia, and the fulfillment of at least one criteria results in the company being referred as \"an affected listed issuer\":1. deficit in the adjusted shareholders' equity;2. appointment of receivers and/or managers over the property of the listed issuers;3. adverse opinions or disclaimers in respect of the going concern form the auditor; or4. appointment of special administrators pursuant to the provisions of the Pengurusan Danaharta Nasional Berhad Act 1998.The Bursa Malaysia implemented PN4 classification in 2002. The decision on categorizing a firm into PN4 is made by Bursa Malaysia if the listed firm fulfills one of the above criteria. Trading of shares of these affected companies is either suspended or restricted.Vanhorne (1971) describes two situations of corporate failure: technical insolvency and insolvency in bankruptcy. A company is said to face technical insolvency when it is unable to meet its current obligations and insolvency in bankruptcy occurs when a company's net worth is negative. Thus, based on the definition, Bursa Malaysia PN4 companies fall under insolvency in bankruptcy. According to Wruck (1990), a company enters into financial distress following economic distress, decline of performance or poor management. As at 31 December, 2002, a total of 86 companies were classified under the Bursa Malaysia PN4 sector, after excluding finance companies. A matched company for each PN4 company was determined on three bases: Bursa Malaysia sectorial classification, similar total assets and positive EPS throughout the period of study. The financial years that this study was interested in were the periods prior to the PN4 classification introduced by the Bursa Malaysia but after the crisis, i.e. 
1999-2001.To test the hypotheses, the pooled logistic regression analysis was carried out, which is as follows: Equation 1 where:i: firm 1 through 86;\nResults: Out of 86 non-finance companies that were in the Bursa Malaysia PN4 sector as at 31 December, 2002, four companies were excluded due to change in financial year ends and data unavailability. As a result, a total of 504 firm years were available for the pooled analysis (1999: 170, 2000: 170, 2001: 164). Results of the descriptive statistics and the t-tests are shown in Table I.Results in Table I support all the selection criteria used in choosing the matched companies. Results in Table I also show that distressed firms have a lower liquidity ratio as opposed to non-distressed firms and the latter's liquidity ratio is five times higher than the former's. Thus, the gearing and liquidity ratios are associated with distressed firms, supporting the evidence among bankrupt and non-bankrupt companies (e.g. Altman, 1968; Platt and Platt, 2002). The earnings per share (EPS) of the distressed sub-sample is significantly negative compared to the EPS of non-distressed firms, which is significantly positive. An analysis also indicates that the EPS of distressed sub-sample has been negative since the 1998 financial year (1998: -RM1.54, 1999: -RM1.4, 2000: -RM0.82, 2001: -RM0.76). This evidence thus suggests that these firms have started to experience financial difficulty immediately after the 1997 crisis.Table II shows the results from the pooled logistic regression analysis. Results in Table II show that the influence of years (i.e. 1999, 2000 and 2001) is not significant. As for hypothesis testing, results in Table III suggest that H1 is not supported. In Model 2 of Table II, board independence was operationalized by combining independent directors and non-independent non-executive directors, which were divided by the board size, the finding of which shows that board independence remained insignificant. 
Results with respect to CEO duality are mixed. In Models 1, 2 and 3 where CEO duality is defined as the CEO-cum-board chairman, the relation is found to be negative but it is not significant. Thus, H2 is not supported. However, in Model 4, when CEO duality is defined as executive chairman, the influence becomes negative, as predicted and significant. Thus, H2 is supported.Model 3 tests also whether the effects of management ownership on financial distressed status is curvilinear by including MOWN2 in Model 3. Results suggest that the relation of management interests with financial distressed status is not linear. At the lower levels, interests by management lead to an alignment of interests. At the higher levels, the evidence supports the management entrenchment theory. This is generally consistent with Morck et al. (1988) and Short and Keasy (1999). Findings in Table II, for all models, provide support for H3, H4 and H5. Table III presents results from additional analyses to test further the influence of ownership and board size.In Model 1, outside blockholders are broken down into two: percentage of government-linked outside blockholdings and percentage of non-government related outside blockholdings. Government-linked entities are government-linked institutional investors (both the Federal and the State-linked funds and companies) and government-supervised funds. Findings of Model 1 indicate that H1 is not supported, while H2, H3, H4 and H5 are supported. Model 2 in Table III tests the association of inside blockholders on financial distressed status. Results are similar to those found earlier. In Model 3 of Table III, the percentage of government-linked outside blockholders was treated as a dummy variable. Results show that both non-government outside blockholders and government-linked variables are not significant in determining the firm status.Finally, in Model 4, a variable board size is included. 
Jensen (1993) and Lipton and Lorsch (1992) argue that large boards are not effective compared to smaller boards. Yermack (1996) provides empirical evidence supporting this contention. Nonetheless, having more directors on a board could enable the firm to utilize the expertise and services of the directors, particularly when the additional directors are outsiders, as argued by resource dependency theory (Tricker, 1984, p. 171); such directors could serve as a window to the outside world and are better able to form environmental links (Goodstein et al., 1994). Thus, larger boards should lead to a lower probability of a firm becoming distressed. Findings by Chaganti et al. (1985) show that non-failed firms have larger boards than failed firms. However, an oversized board could lead to problems of coordination (Forbes and Milliken, 1999), which potentially lead to the board becoming less involved in strategic decision making (Judge and Zeithaml, 1992). Thus, Jensen (1993) argues that the maximum size of a board is seven or eight. Based on the extant evidence, Hermalin and Weisbach (2003, p. 13) conclude that \"The data therefore appear to reveal a fairly clear picture: board size and firm value are negatively correlated.\" Nevertheless, neither too small a board nor too large a board may be effective. Rather, increasing the size of the board up to the optimum number (seven to eight, as argued by Jensen (1993)) should lead to effective monitoring, while increasing it beyond that optimum should result in the board becoming ineffective. Malaysian boards were found to have about eight directors (Abdullah, 2001). Thus, it is predicted that there is a negative association between board size and the distressed status. Results in Model 4 of Table III are as predicted and consistent with Jensen's (1993) argument and the findings by Chaganti et al. (1985). 
Further, the board size in Malaysia in the present study is about seven, which is similar to Mak and Li's (2001) finding that Singaporean board size is about eight.\nConclusions: Findings of the present study do not support the contention that board independence is significant in explaining a firm's distressed status. This evidence contradicts that of Elloumi and Gueyie (2001) but is consistent with the findings by Chaganti et al. (1985). The passive roles of independent directors could be attributed to the nature of their appointments and their conflict of roles (Vicknair et al., 1993; Perry, 1995; Bhagat and Black, 1997). An analysis was carried out to determine if the composition of the board had been changed during the period 1999-2001. The percentages of independent directors for both distressed and non-distressed sub-samples changed marginally during the period, suggesting some board restructuring in 2001, the year prior to the implementation of the PN4 classification in 2002. Independent directors may not be aware of their responsibilities. Thus, requiring them to undergo training may help them to improve their monitoring skills. Future research, perhaps, as argued by Daily et al. (2003), could apply such theories as resource dependence and stewardship as well as collecting process-oriented data that could offer alternative explanations of the board's roles. Evidence of the roles of CEO duality in determining the firm's financial distress is mixed. This evidence is consistent with previous studies in Malaysia (Abdullah, 2004; Abdullah and Mohd-Nasir, 2004) and in Canada (Elloumi and Gueyie, 2001). The insignificance of CEO duality could perhaps be due to the lack of consensus on the recommendation for the separation, as argued by Elloumi and Gueyie (2001). In fact, the existing corporate governance codes, e.g. 
the Cadbury Code, 1992 (Cadbury Committee, 1992); the Hampel Report (1998); and the Malaysian Code on Corporate Governance (Finance Committee on Corporate Governance, 2001), do not prohibit the practice of combining the board chairman and CEO roles, provided that companies disclose in their annual reports the reason for the departure. Furthermore, previous literature and evidence on the significance of CEO duality are mixed (Berg and Smith, 1978; Chaganti et al., 1985; Daily and Dalton, 1994a) and some even question the benefits of the separation (Goodwin and Seow, 2000). Combining the top two roles enables urgent strategic decisions to be made faster, especially during declining performance. The findings reveal the significant associations of ownership structure with the distressed status, lending support for agency theory (Jensen and Meckling, 1976; Jensen, 1993). These findings are also consistent with those of Elloumi and Gueyie (2001), who found that interests by non-executive directors and outside blockholders are negatively associated with financial distress. The evidence also confirms the curvilinear effects of management interests on firm value found by Morck et al. (1988) and Short and Keasy (1999). This evidence reflects the importance of the ownership pattern in corporate governance among Malaysian companies.\n",
"role": "user"
},
{
"content": "Purpose: - This study seeks to examine the influence of board independence, CEO duality and ownership structure on the firm financial distressed status using a sample of distressed companies and a matched-pair sample of non-distressed companies listed on the Bursa Malaysia.\nDesign/methodology/approach: - This study utilized publicly available data from annual reports of a sample of 86 non-finance distressed firms listed on the Bursa Malaysia and a sample of matched 86 non-distressed firms for a period covering the 1999-2001 financial years.\nFindings: - Board independence and CEO duality are not associated with financial distressed status. Management and non-executive directors' interests are associated negatively with financial distress. A negative association is also documented for outside blockholders. The evidence also supports the contention that ownership by non-executive directors and outside blockholders effectively increases their incentives to monitor management in ensuring their wealth in the firms is intact.\nResearch limitations/implications: - One limitation of this research is that it relies on publicly available data and agency theory. Future research could apply other theories, such as resource dependency and stewardship. Use of process-oriented data could also improve the findings.\nPractical implications: - Independent directors need to undergo training to help them improve and be aware of their responsibilities.\nOriginality/value: - This paper offers evidence on the extent to which distress is associated with corporate governance from a developing country. The paper should be of interest to the regulatory bodies and practitioners.\n",
"role": "assistant"
}
] |
Finding the right management approach in independent hotels
|
[
"CVF",
"Effectiveness",
"Leadership",
"Resource acquisition",
"Rewards mechanism",
"Independent Hotels"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Independent hotels are a significant part of the hotel industries of many countries, including the USA, UK, Italy and The Netherlands (Horwath HTL, 2017). In The Netherlands, though the chains are an important part of the industry, independents are responsible for 40 per cent of general revenue. In 2018, out of the 3,503 hotels in the Netherlands 2,840 were independent (Chappell, 2019, p. 92). Thus, the position of independent hotels in The Netherlands is typical, which is why it was chosen as the source of data for this study which examines small and medium-sized independent hotels.\nTheory and hypotheses development: This section first describes the two main constructs used in this study: leadership and organisational effectiveness. Then, what is known about the relationship between them is reviewed and the overlap between effectiveness and performance is pointed out. This leads to our first two hypotheses concerning the relationships between these constructs. Next, the selection of Reward Mechanisms as a factor in the relationship between leadership and effectiveness is justified and the construct is described. The next three hypotheses are proposed concerning its intermediating role between the two main constructs. Finally, the selection of Ability to Acquire Resources as a factor in the relationship between leadership and effectiveness is justified and this construct also described. Three hypotheses are proposed concerning its intermediating role between the two main constructs.\nEmpirical investigation: Method\nDiscussion and conclusion: Conclusions\n",
"role": "user"
},
{
"content": "Purpose: This study aims to identify factors that managers can influence to improve effectiveness by examining the relationship between leadership and effectiveness in small and medium-sized independent hotels.\nDesign/methodology/approach: In total, 272 valid questionnaires were collected from managers of 83 small and medium-sized independent hotels in the Netherlands by using selective and snowball sampling techniques. A two-stage structural equation modelling approach was used to test the hypotheses.\nFindings: Surprisingly, in contradiction to previous studies, neither transactional nor transformational leadership had a direct effect on organisational effectiveness. Instead, the effect was through the intervening variables of ability to acquire resources and reward mechanisms.\nResearch limitations/implications: The contradiction between this and previous studies may be due to the fact that constructs used in this study do not have the universal applicability that is customarily assumed and further investigation is needed.\nPractical implications: This study suggests that managers in this type of hotel, first, should be adaptable in their leadership style and not rely on transactional leadership alone; second, managers must pay attention to resource acquisition; and third, managers must retain staff by using the right reward mechanism.\nOriginality/value: This study contributes to the literature of leadership and effectiveness in small and medium-sized independent hotels by being the first to focus on the impact of ability to acquire resources and reward mechanism. It also makes an important contribution to the growing body of hospitality literature that probes the supposed universal validity of organisational studies' constructs.\n",
"role": "assistant"
}
] |
The interplay of products from the same product line: the role of brand reputation
|
[
"Store brands",
"Line extensions",
"Experimental design",
"Inferences"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Brands are increasingly vertically segmented, offering products at different levels of quality and price in the same category. For example, Norelco electric shavers range in price from $19 to $155 in Amazon.com - a difference in price of over 700 per cent. In the same Web site, one can buy a simple \"ear bud\" Sony headphone for <$10, as well as a sophisticated professional Sony headphone for $700. The importance of vertical line extensions has been recognized by marketing academics, who are increasingly turning their attention to this line of research (Heath et al., 2011; Kim and Lavack, 1996; Lei et al., 2008).\nConceptual background: Research on brand extensions has mostly examined situations in which a brand extends into different product categories (Aaker and Keller, 1990; Ahluwalia and Gurhan-Canli, 2000; Balachander and Ghose, 2003; Broniarczyk and Alba, 1994; Dacin and Smith, 1994; DelVecchio and Smith, 2005; John et al., 1998; Keller and Aaker, 1992; Reddy et al., 1994; Sood and Keller, 2012; Volckner and Sattler, 2006). A key finding in this literature is that extension favourability is a function of the perceived fit between the parent brand and its extension (Aaker and Keller, 1990; Park et al., 1991; Volckner and Sattler, 2006). As fit between the parent brand and its extension increases, parent brand beliefs become more readily available, increasing consumer confidence about the new product, resulting in more positive evaluations. Although some recent research has delineated circumstances under which fit may not be important (Milberg et al., 2013; Milberg et al., 2010), perceived fit between parent brand and category extension remains one of the key determinants of extension evaluation. Interestingly, despite its importance, there seems to be no consensus on what perceived fit means. 
Some researchers have adopted a feature-based similarity perspective, that is, examining the extent to which the parent brand shares features with the extension (Aaker and Keller, 1990; Boush and Loken, 1991). Another group of researchers has conceptualized fit as shared abstract meanings or benefits (Broniarczyk and Alba, 1994; Park et al., 1991). Fit has also been defined around usage purpose (e.g. tennis rackets and tennis balls, bicycle and bicycle helmet) or common goals (Martin et al., 2005). While these definitions serve well for brand extensions across categories, they do little to illuminate evaluations of extensions within a category but at different price and quality tiers.\nStudy 1a: In this study, we contrasted the impact of a value product on a premium product from the same brand, when the brand is either a store brand or a regional brand without a well-defined image.\nStudy 1b: In Study 1a, we contrasted store brands and non-store brands. In Study 1b, we continue our investigation by examining the role of existing brand reputation on the impact of a value product on a premium product. We use retailer's image to manipulate brand reputation.\nMethodological issues: In addition to examining the role of reputation on expectations of quality and price of products from the same brand, another goal of this research is to examine methodological concerns with the measure of quality and price used in previous research (Palmeira and Thomas, 2011).\nStudy 2: In this study, participants were asked to make a choice between a national brand and a premium store brand. The premium store brand was described as in our previous studies. We manipulated the value brand that was provided in the set as being either a value store brand or a value national brand. Participants may have different perceptions of a value store brand and a value national brand. 
Therefore, to ensure the comparability of the conditions and proper analysis of the resulting data, it was important to keep the available alternatives fixed. To that end, while a value brand was always included in the set, participants were told that it was out of stock for each product category. So, in condition 1, the set was national brand, premium store brand and value store brand (out of stock), and in condition 2, it was national brand, premium store brand and value national brand (out of stock). This way, we were able to test whether indeed the presence of a value store brand signals a superior quality for a premium store brand in a choice task.\nGeneral discussion: Theoretical implications\n",
"role": "user"
},
{
"content": "Purpose: The main aim of this paper is to examine the role of brand reputation on the impact of value product on perceptions of a premium product from the same brand. As a secondary goal, it tests and extends existing findings from judgment tasks to a choice task.\nDesign/methodology/approach: Two online experiments are presented. In Study 1 (1a and 1b), participants provided quality and price judgments to products. Brand reputation was manipulated by comparing common store brands to non-store brands (Study 1a) and to upscale store brands (Study 1b). In Study 2, we examined whether findings indicating a positive effect of a value store brand on a premium store brand extends to a choice context. Participants made choices between a premium store brand and a national brand in the presence of either a value store brand or a value national brand.\nFindings: It was found that brand reputation plays an important role in the interplay of products in line extensions. While the positive impact of a value brand on a premium brand is at its strongest level for a regular store brand, it still has a moderate size for a non-store brand without a defined reputation, as well as for an upscale store brand. Second, using a choice task, we reject an important rival explanation for the impact of a value store brand on a premium store brand observed in previous research.\nResearch limitations/implications: The authors have focused on consumers' expectations of products. While research has shown that these expectations play an important role in evaluations, future research may directly examine perceptions after consumption. The findings also offer an opportunity for future research to examine the differences in perceptions between store and non-store brands at different positioning levels, as well as other factors that affect brand reputation.\nPractical implications: The findings have two practical implications. 
First, our results indicate that when a manufacturer produces two products in the same category at different levels of quality, there is some benefit in letting consumers know about this relationship. The authors consistently found no negative impact on the brands and often a positive impact on the premium brand. While effects are stronger for common store brands, they are likely to emerge for any type of brand, albeit weaker.\nOriginality/value: This paper contributes to the nascent literature on multi-tier brands and vertical extensions in several ways. First, the role of brand reputation was examined and how it interacts with positioning in line extension context. Second, we show that the effect of a value brand on a premium brand is stronger for store brands, but still existent for non-store brands. These results offer implications for practice and open opportunities for future research on multi-tier store brands.\n",
"role": "assistant"
}
] |
Socioeconomic determinants of profitability of fresh fish marketing in Ogun State, Nigeria
|
[
"Nigeria",
"Marketing",
"Profit",
"Fresh fish"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Fish is defined as an aquatic animal caught by man since early times for its food value (Spore, 1986). Fresh fish is the most important fish product for direct human consumption (Delgado et al., 2003; FAO, 1995). The maximum potential fish production from current marine fisheries is estimated to be around 80 Mt per year (FAO, 2010); two-thirds of global fisheries production is consumed directly by humans and a third is processed as feed for the aquaculture and livestock industries (Smith et al., 2011). Fish was long termed the \"poor man's protein\" as it is often the most popular diet in the world (Falodun, 2011). Fish is one of the most important sources of animal protein in developing countries such as Nigeria (Teutscher et al., 1990). According to Eyo (1992) and Olatunde (1998), fish constitutes 40 percent of the animal protein intake in Nigeria, and the figure had risen to 60 percent (FAO, 2007). Fish is the cheapest source of animal protein (Samson, 1997; Amao et al., 2006) compared to other animal protein sources such as beef, pork and chicken, and is often referred to as \"rich food for poor people.\" Fish provides essential nourishment, especially quality proteins and fats (macronutrients), vitamins and minerals (micronutrients), providing an important complement to the predominantly carbohydrate-based diet of many poor people in developing countries. It also serves as a source of income for those involved in fisheries, aquaculture and fish trade, which can be used to purchase additional food items. 
Fish and fish products contributed about 4.47 percent to the agricultural share of the nation's gross domestic product (GDP) in 2003 (Ojo and Fagbenro, 2004) and an average of 6 percent to Nigeria's GDP in 2006 (Kainga and Adeyemo, 2012).\nLiterature review: A market is defined as an area over which buyers and sellers negotiate the exchange of a well-defined commodity (Olayinka and Aminu, 2006). According to Gregory et al. (1994), marketing is the process of planning and executing the conception, pricing, promotion and distribution of ideas, goods and services to create exchanges that satisfy both individual and organizational objectives. It includes all the processes (assemblage of goods, storage, transportation, processing, grading and financing) involved from the point of production of a commodity to the point of final consumption (Crammer et al., 2001). These processes ensure that the right product is available at the right place, at the right price and at the right time to fully satisfy the consumer (Beierlein and Woolverton, 1991; Okoh et al., 2008).\nMethodology: Study area and data collection\nResults and discussion: Socioeconomic characteristics of the fresh fish marketers\nConclusion and recommendations: It can be concluded that fresh fish marketing in the study area is economically rewarding and profitable. It is capable of creating employment, augmenting income and improving the standard of living of the people, as well as alleviating the problem of malnutrition typical of developing countries, including Nigeria.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the socioeconomic determinants of profitability of fresh fish marketing in Ogun State, Nigeria.\nDesign/methodology/approach: The study was a cross-sectional survey of 120 fresh fish marketers selected randomly from four major fish markets in Ado-Odo Local Government area of Ogun State, Nigeria. Data were collected using a structured questionnaire designed to solicit information on the marketers' socioeconomic and marketing characteristics, operating costs and returns, and problems associated with fish marketing in the study area. A combination of descriptive statistics, marketing margin, budgetary and ordinary least square regression analyses was employed to analyze the study data.\nFindings: The study showed that females (85.8 percent) dominated fresh fish marketing. The percent marketing margin of fresh fish was 34.55 percent. Marketing returns of 20,906.03 and 20,453 per month and a return on investment of 1.43 were realized. The result of the regression analysis revealed that profit from fresh fish was significantly determined by education, the proportion of household members involved in fresh fish marketing, marketers' experience, capital, number of sales outlets and purchase price.\nResearch limitations/implications: The findings were based on information supplied from memory recall by the fresh fish marketers in the study area, since most of the respondents did not keep diaries of their activities before the survey.\nPractical implications: This study contributes to the existing literature on fish marketing and will provide empirical information to policy makers in the formulation of appropriate policies. 
It will also serve as a guide to practicing and prospective fresh fish marketers and to researchers who may investigate further into the subject matter.\nSocial implications: The social implications from the findings on the return on investment of 1.43 implies that for every one naira invested by fresh fish marketers, a return of 1.43 and a profit of 0.43 were obtained. The study concludes that fresh fish marketing is an economically rewarding and profitable venture in the study area. It also recommends the need to provide credit facilities to finance storage facilities of this group of marketers.\nOriginality/value: The study is original in nature and revealed the economic status of fresh fish marketing in Ogun State, Nigeria.\n",
"role": "assistant"
}
] |
Epstein on alienation: a rejoinder
|
[
"Political economy",
"Property",
"Capitalist systems",
"Product liability",
"Regulation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Inalienability is the doctrine that there are certain possessions, attributes, characteristics which should not legally be allowed to be sold, or even given away. Instances under contention[1] include blood, babies for adoption, and even liberty itself. If these are inalienable, they may not be disposed of by their owner in any way.Alienability, in contrast, is the notion that the legitimate owner of anything may bestow it on any willing recipient, whether by sale or gift. If the items mentioned above are alienable, there will be markets in human blood, for babies, and people will be allowed to sell themselves into slavery, should they find a willing buyer.Full alienability of everything is another way of describing the system of laissez faire capitalism. Full inalienability of all things characterizes complete socialism.What is Epstein's goal? This paper:... first seeks to explain why the right of alienation is a normal incident of private ownership. Thereafter it seeks to examine the principled reasons for limiting that right (Epstein, 1985).This author, in other words, is a moderate in the socialism vs capitalism debate. He occupies a middle position, advocating that while certain objects should perhaps be allowed by law to be fully alienable, e.g. nuts and bolts, ribbon, Frisbees and rubber bands, there are others, which should not legally be alienable at all. As typical of this perspective, there are still others that should be subject to partial inalienability and partial alienability.Epstein (1985, p. 970) announces that he bases his analysis on:... 
the assumption that the core function of the law is to protect all persons and their property against the force and fraud of others. The thesis of the present paper is that just law is entirely incompatible with inalienability, in any manner, shape or form; no proper legislation would limit or restrict, let alone ban, full alienability; and that if Epstein is serious about what he sees as \"the core function of the law,\" he must of necessity arrive at this same conclusion. Since he does not, I deduce that while he pays lip service to this ideal, his actual analytic framework is incompatible with it.\nI. Why permit alienation?: For an article that argues, ultimately, for restraining alienation, Epstein (1985, p. 971) starts off well enough from my perspective: As a first approximation it appears that any restraint upon the power of an owner to alienate his own property should be regarded as impermissible... To the person who thinks of rights as being acquired by first possession, the right of alienation seems to be an inescapable element of the original bundle of property rights. If alienation is not acquired by the person who has obtained ownership by taking possession of the property, then who else can claim it, and by what possible warrant? He continues on a similar wavelength: To insure that exchanges can go forward, rights of alienation must be vested somewhere, or resources will remain fixed in the hands of those who do not want them. There seems no better place in which to locate exclusive rights of alienation than with the parties already entitled to possession and use (Epstein, 1985, p. 972). But even here there are signs of weakness. For one thing, his vision is altogether too pragmatic and utilitarian, as opposed to rights based. Resources not free to flow to those who value them most will certainly reduce overall wealth. 
But this is rather a superficial and Coasean[2] reason for preferring that they do. For another, this statement implies that the law could vest property in the hands of A, and the right to alienate it in the hands of B, and that it does not do so solely as a matter of convenience. But matters are a bit more limited than that. One may go so far as to say that it is a necessary part and parcel of ownership that one can give away, sell or otherwise alienate the property. Indeed, if one is forbidden to alienate something, then, and to that extent, one does not really own it at all. If a man is permitted by law to occupy some land, but not sell it, then he is merely a legal squatter, not the owner of it[3]. It is, perhaps, possible for some - certainly not for libertarians of the sort that Epstein is widely characterized as[4] - to suppose that all that private landowners are, or can ever be, is squatters. Property rights to land, in this vision, could properly only belong to the crown, or to the state in more democratic eras[5]. But what of the individual person? It is more difficult to claim that people are really owned by the crown, in that they cannot sell themselves; that they are mere squatters of themselves, so to speak. So we arrive again at the original issue, only this time for people, not land and other property: it is not the case that ownership of the person could vest either in himself or in someone else. It must necessarily be entrenched only in himself.\nII. Controlling external harms: Prevention of the improper use of force\nA note on distribution: Epstein opposes the limitation of alienation to achieve distribution (e.g. theft, although he does not characterize it in this manner) from rich to poor. He notes the \"odd results that occur\" (Epstein, 1985, p. 988) when this takes place. 
I oppose interference with alienation rights in any case, so we do not find ourselves on opposite sides of the issue in this case.\nConclusion: Epstein (1985, p. 990) states:... the purpose of this paper was to explain why restraints upon alienation persist in a world in which a sale of assets is generally a welcome event.In my view, our author does not so much explain this phenomenon as attempt to justify it. He does so on the grounds of \"external harm and the common pool\" (Epstein, 1985, p. 990). I have found these allegations wanting. I, therefore, propose that all limitations on alienation, e.g. the doctrine of inalienable rights, be dropped, forthwith.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to counteract Epstein's views on the alienability of property. Epstein favors limitations of laissez-faire capitalism regarding such things as guns, liquor, narcotics, certain books and voting, and this paper aims to criticize them from the perspective of full, free enterprise.\nDesign/methodology/approach: - The main method is that of the reductio ad absurdum. For example, Epstein favors prior restraint on books giving information as to how an atomic bomb may be built. He does so on the grounds that such information can be extremely harmful. Marxist books are far more harmful, yet Epstein would not ban them. So his case for prior restraint is undermined.\nFindings: - Epstein's case for restrictions on alienability is unfounded.\nPractical implications: - If the message of the paper is incorporated into public policy, society will move much closer to laissez-faire capitalism than it would by implementing Epstein's recommendations.\nOriginality/value: - This paper should interest people concerned about how much government regulation of the economy is justified. What is new is that Epstein, one of the most extreme defenders of the minimal state, is not a full advocate of this position. His arguments for exceptions to the free-enterprise private property rights system are untenable.\n",
"role": "assistant"
}
] |
Promotion and shoppers' impulse purchases: the example of clothes
|
[
"Retailing",
"Promotion",
"Shopping",
"Clothes",
"Impulse purchases"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this issue.\nIntroduction: The importance of studying how promotion affects shoppers' impulse purchases is evident because although many purchases are based on retrievals of explicit memories from seeing a commercial about a product, some purchases are based on implicit memories (Shapiro and Krishnan, 2001). Impulse buying may thus be triggered by unconscious memories of advertising reaching consumers through a variety of promotional channels (e.g. Shapiro and Krishnan, 2001).\nThe impulse purchase context and the promotional channels in focus: Our view on the overall impulse purchase context draws on Stern (1962) who categorised impulse purchases according to the level of planning and exposure to marketing stimuli in the decision making process. Stern's (1962) categorisation commences with pure impulse which includes no preplanning at all; reminder impulse buying when consumers recall some marketing communication activity; suggestion impulse buying where the consumer responds to stimuli that brings attention to a latent need; and fourthly planned impulse buying where a consumer responds to promotion through to the intention to purchase but waits until the point of sale to actually make the decision.\nHypotheses: Impulse purchases induced by direct-mail marketing\nMethodology: Data collection and sample\nAnalysis and results: Descriptive analysis\nGeneral discussion: A conclusion drawn from testing our hypotheses is that shoppers with a favourable view on direct-mail marketing and TV commercials respond more positively to in-store promotion. Hence, the interactive effect of the three promotional channels increases the shoppers' general impulse purchase tendency. 
However, as shown in Table IV there is no significant individual relationship between direct-mail marketing, TV-commercials and shoppers' general impulse purchase tendency. This finding can be explained by the implicit memories that such promotion create in the shoppers' minds. Hence, shoppers may not see the link between seeing such ads and purchasing the advertised items. Our findings therefore suggest that although TV commercials and direct-mail marketing may induce some impulse purchases their primary contribution is to make shoppers respond more positively to in-store promotion. Hence, while they do not increase the impulse purchase tendency directly, there is a synergy effect that enhances the effect of in-store promotion.\n",
"role": "user"
},
{
"content": "Purpose: This paper draws attention to the fact that impulse buying is common behaviour among modern shoppers and that a consumer's impulse purchases of items such as clothes may be a result of the retailers' promotional activities. The purpose of this paper is therefore to investigate how promotion through channels such as direct-mail marketing, TV commercials and special in-store displays affects consumers' impulse purchases of clothes.\nDesign/methodology/approach: Data for the testing of five hypotheses was collected by distributing a questionnaire to randomly selected households in North Western France and Northern Sweden. The data collection rendered 493 complete questionnaires, of which 332 were from France and 161 from Sweden. Seemingly unrelated regression (SUR) is the principal estimation technique.\nFindings: The results suggest that shoppers with a positive attitude towards direct-mail marketing and TV commercials also respond positively to in-store promotion. Hence, there are interactive effects between the three promotional channels that increase shoppers' general impulse purchase tendency. Furthermore, the findings demonstrate that the frequency of exposure to in-store promotion has a significant effect on this tendency.\nOriginality/value: This paper contributes to extant knowledge on impulse buying behaviour by examining the product-specific impulse tendency, which is defined as the degree to which consumers make impulse purchases of a particular product category.\n",
"role": "assistant"
}
] |
The digitalization of retailing: an exploratory framework
|
[
"Transformation",
"Retailing",
"Framework",
"Digitalization",
"Retailer-consumer interface"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Digitalization[1] is one of the most significant on-going transformations of contemporary society and encompasses many elements of business and everyday life. This transformation is important for the retail sector, which both affects and is affected by this development. Retailers provide consumers with various digital products and services that are adapted to the use of digital technologies and are simultaneously affected by the new forms of consumption associated with these digital technologies. While digitalization has a long history in retailing (see, e.g. Salkin, 1964; Watson, 2011), the significance of the transformation is becoming increasingly visible.\nDigitalization and the retailer-consumer interface: In this paper, \"digitalization\" refers broadly to the integration of digital technologies into retailing, as in the following definition: \"Digitalization: Integration of digital technologies into everyday life by the digitization of everything that can be digitized\" (Businessdictionary.com 2014-10-22). Although this general term involves all types of digital technologies, we focus on the integration of Internet-connected digital technologies in particular and on the interface between retailers and consumers. The integration of digital technologies in retailing can mean a slight transformation of previously existing activities, processes, actors, and goods but may also introduce new types of products, services, etc. Thus, digitalization refers both to a transformation from \"analogue\" to \"digital\" (e.g. a shift from cash to electronic payments) and to the facilitation of new forms of value creation (e.g. accessibility, availability, and transparency) (Amit and Zott, 2001). Employing the notion of \"digitalization\" rather than \"digital\" implies that this transformation is on-going and has no clear beginning or end. 
Thus, we approach the notion of digitalization as something that is emerging and in the making rather than something already achieved (e.g. Hagberg, 2008). In line with Fors (2010), we conceive digitalization as an open and dynamic concept that has not been fully defined; it is an on-going process that should be approached with sensitivity and openness to what it might encompass. What Moisander and Eriksson (2006, p. 258) said about the information society is equally applicable to digitalization: it is \"not merely something that is imposed on people and organizations but something that people and organizations \"do\" and produce themselves through everyday practice and social interaction\". Digitalization is not something that takes place \"outside\" of retailing and is then transferred to retailing (which the notions of \"impact\" and \"effects\" would wrongly suggest) but is an on-going transformation of retailing to be studied from \"within\".\nConceptual framework: To capture the increasing importance of digitalization for the retailer-consumer interface, we base our conceptual framework on the analysis of consumption developed by Ritzer (2001) and Lehdonvirta (2012). Ritzer (2001) outlines four elements of consumption: processes, sites, subjects, and objects. These elements are further developed by Lehdonvirta (2012) within the context of digitalization. We use a similar breakdown as our starting point but with slight changes in terminology. Instead of processes, we use the concept of exchange to refer to the various activities taking place at the retailer-consumer interface. Exchange is considered neutral here - it does not adopt a particular actor role or view (i.e. retailer or consumer) as a starting point. We use the concept of setting instead of sites to refer to the different situations in which retailing occurs. We use the concept of actors instead of subjects to encompass not only consumers but also retailers. 
As the relationship between consumers and retailers is transformed by digitalization, using the concept of actors allows for an exploration of less clear-cut roles and identities. For example, Hagberg (2010) shows how actors in e-commerce exchanges are configured differently and that \"actors\" are not necessarily human; instead, \"actors\" are the assembly of different entities that participate in the exchange (including, e.g., digital devices in the store or at home). Finally, we use the concept of offering in place of objects to refer to both products and services and to stress that products and services may be transformed by digitalization. Drawing on studies exploring modes of exchange in retailing (Kjellberg and Helgesson, 2007; Hagberg, 2010), we propose that there are symbiotic relationships between the four different elements and that changes in one of the elements potentially contribute to changes in the other elements. Figure 1 graphically presents our framework, and each element is developed further below on the basis of the previous literature.\nLiterature employed: The literature consists of papers in retailing, consumption and marketing journals complemented by other academic work identified through this literature. We used the databases Scopus and Business Source Premiere to identify recent works using a combination of keywords, including retail*, digital*, internet*, and consumer*, followed by more specific searches based on the elements included in the framework and by scrolling the content of the journals directly. The targeted journals are as follows (with the number of articles employed in parentheses): Journal of Retailing (ten), Journal of Retailing and Consumer Services (ten), Journal of Consumer Culture (eight), International Journal of Retail\nElement I: digitalization of exchanges: With the emergence of the internet during the late 1990s, new companies arose that were focused on e-commerce. 
Soon, however, many retail chains established e-commerce sites, while many e-commerce firms simultaneously established fixed stores; thus, there was an emergence of so-called multi-channel retailing (e.g. Reynolds, 2002; Schoenbachler and Gordon, 2002). In recent years, the notion of multi-channel retailing has gradually shifted to a focus on omni-channels (e.g. Brynjolfsson et al., 2013; Pantano and Viassone, 2015; Verhoef et al., 2015). Importantly, the concept of multi-channels implies a separation between channels (although they are combined), whereas the omni-channel concept is more focused on customers and on providing them with the ability to move between \"channels\" seamlessly during one integrated purchasing process (e.g. Piotrowicz and Cuthbertson, 2014; Verhoef et al., 2015). In order to analyse the relationships between channels, exchanges can be divided into three facets: communication, transaction, and distribution (Peterson et al., 1997). Communication involves access to and the exchange of information. Transaction involves the transfer of ownership, including monetary transactions and the actual purchase. Distribution refers to the physical and tangible exchanges of products. This division is akin to Rotem-Mindali and Weltevreden's (2013) framework of information gathering, purchasing, and delivery, as well as Grewal et al.'s (2013) framework of pre-purchase, purchase, and post-purchase; nonetheless, it is more neutral because it does not assume any specific actor's perspective.\nElement II: digitalization of actors: Digitalization of actors consists of an increased intermixing of human and digital technologies on both the retailer and consumer sides. 
There is also an increased blurring of boundaries between retailers and consumers and new types of intermediaries, changing roles, and increased importance of different actors.\nElement III: digitalization of settings: The digitalization of the retail setting takes place through a change of retailing's traditional settings, i.e., the fixed store and the consumers' homes. In addition, the digitalization of settings also involves the emergence of new forms of settings, particularly those settings related to the proliferation of mobile devices in everyday life. Taken together, these developments lead to an increased intermixing of digital technologies, both in terms of time and place.\nElement IV: digitalization of offerings: The distinction between products and services was problematized long ago (e.g. Levitt, 1981; Vargo and Lusch, 2004; Araujo and Spring, 2006), and digitalization further contributes to the blurring of these boundaries. Retailers typically offer what are traditionally considered both products and services, and the term \"offering\" includes both to allow for this blurring. Digitalization contributes to changing offerings by including the digitalization of products themselves, i.e., the integration of digital technologies in what is offered. Digitalization also contributes to possible extensions to and the changing nature of the offerings as well as to how these offerings are priced and paid for.\nConclusions and implications: This paper has outlined the more all-encompassing transformations of digitalization of retailing by discussing how it influences four elements of the retailer-consumer interface. 
On the basis of the previous literature, it has analysed how exchanges are transformed through changes in communication, transactions and distribution; how actors are transformed through the intermixing of humans and digital technologies, the increased blurring of boundaries, and new actors, roles and relationships; how settings are transformed to include traditional and new settings as well as their intermixing; and how offerings are transformed through changes of products and services, extensions of offerings and new forms of pricing. As shown above, the transformation of retailing due to digitalization is extensive along the retailer-consumer interface and is likely to have significant implications for research and practice. Thus, the framework contributes to moving the retailing literature beyond discussions of digitalization as e-commerce and studies of specific, narrow aspects of digitalization. Drawing on this analysis, it is possible to discuss the theoretical and practical implications.\nLimitations and future work: This paper has outlined an overall framework for understanding how digitalization influences the retailer-consumer interface. Inevitably, this ambition restricts the possibility of providing an in-depth analysis of each of the four elements and the sub-categories of each element. Space limitations have also forced us to omit some of the practical examples that might have clarified certain arguments. Although there is a great need for research on specific, concrete and recent practical examples, there is also a danger in being too focused on contemporary practices that are quickly replaced. This difficulty has led us to emphasize the conceptual arguments rather than provide numerous practical illustrations. Finally, our intention has not been to provide a complete review of the existing literature but rather to provide an overall conceptual framework based on relevant research. 
There is, thus, a future need for more traditional studies regarding the digitalization of retailing as well as empirical tests of the claims put forth in this paper.\n",
"role": "user"
},
{
"content": "Purpose: Digitalization denotes an on-going transformation of great importance for the retail sector. The purpose of this paper is to analyse the phenomenon of the digitalization of retailing by developing a conceptual framework that can be used to further delineate current transformations of the retailer-consumer interface.\nDesign/methodology/approach: This paper develops a framework for digitalization in the retail-consumer interface that consists of four elements: exchanges, actors, offerings, and settings. Drawing on the previous literature, it describes and exemplifies how digitalization transforms each of these elements and identifies implications and proposals for future research.\nFindings: Digitalization transforms the following: retailing exchanges (in a number of ways and in various facets of exchange, including communications, transactions, and distribution); the nature of retail offerings (blurred distinctions between products and services, what constitutes the actual offering and how it is priced); retail settings (i.e. where and when retailing takes place); and the actors who participate in retailing (i.e. retailers and consumers, among other parties).\nResearch limitations/implications: The framework developed can be used to further delineate current transformations of retailing due to digitalization. The current transformation has created challenges for research, as it demands sensitivity to development over time and insists that categories that have been taken for granted are becoming increasingly blurred due to greater hybridity.\nOriginality/value: This paper addresses a significant and on-going transformation in retailing and develops a framework that can both guide future research and aid retail practitioners in analysing retailing's current transformation due to digitalization.\n",
"role": "assistant"
}
] |
Segregated and not equal? Occupation, earnings gap between urban residents and rural migrants in Vietnam
|
[
"Discrimination",
"Vietnam",
"Decomposition",
"Propensity score matching",
"Occupation segregation",
"Rural–urban migration"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Despite the apparent importance of the topic, labor market segregation between urban residents and rural migrants has received little attention outside of China. In the past decades, Vietnam has experienced a remarkable increase in its internal migrant population in the cities. The census data show that about 4.5m people (6.5 percent of the population) aged over five migrated between 1994 and 1999. Of which, 1.6m were rural-urban migrants (Dang et al., 2003). It rose to 7.7 percent in 2009 (Le et al., 2011). Poverty, land shortages and a lack of opportunities are the push factors that drive rural residents to leave their home village. The new economic opportunities (jobs and income) unleashed by the market reform in 1986, Doi Moi (Renovation), are one of the pulling factors that attract rural residents to migrate to urban areas (Cu, 2005).\n2. Decomposing the wage gap: 2.1 Traditional decomposition: Oaxaca and Neumark\n3. Empirical results: 3.1 Data\n4. Conclusions: Correcting for the occupational self-selection, migrants earn 17 percent less than the locals in terms of their (offered) hourly earnings despite several reforms of the ho khau system. It is the unexplained rather than explained part, and within-job earnings differential (59 percent) rather than between-job difference that account for most of the pay gap.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the earnings differentials between the locals and the rural-urban migrants in urban labor market in Vietnam.\nDesign/methodology/approach: Using the new Vietnam Rural-Urban Migration Survey 2013 (VRUMS2013) that is specifically designed for rural-urban migration, the author applies Appleton et al.'s (1999) procedure correcting for potential selectivity to decompose the offered earnings gap between the locals and the rural-urban migrants into within- and between-occupation pay differential. Bootstrapping is used to derive the standard errors for the decomposition results. The author further applies the propensity score matching (PSM) method to check whether the results are robust by restricting the sample to the \"common support.\"\nFindings: Within-job difference, particularly, the favorable treatment toward urban workers contributes significantly to the overall and total unexplained earnings gap. Further, between-job pay differential attributed to the over-representation of urban workers in high-paying job also helps to widen the gap. These results are robust restricting to the \"common support\" sample using PSM.\nResearch limitations/implications: Due to the sample size, occupations are only classified into three broad categories. Finer classification will allow a better comparison between the contributions of between and within-occupation to earning inequality. The data are only limited to a few cities and do not include other urban centers that also receive rural-urban migrants.\nPractical implications: Policies to promote equal pay and alleviate within-job \"discrimination,\" especially the preferential treatment favoring the locals (rather than to provide equal access to different jobs) are crucial for migrants' labor outcome. Moreover, this study can, to some extent, be seen as a timely contribution for the debate on household registration reform in general and in Vietnam specifically. 
Given China's announcement to grant permanent household registration (hukou) to unregistered migrants in late 2015, investigating whether there is a two-tier labor market in the cities in Vietnam is particularly important for the ongoing debate regarding the future of the household registration system (ho khau).\nOriginality/value: This is the first study in Vietnam on rural-urban migration and occupation segregation - an area that has been relatively less well studied in developing/transitional countries. Vietnam is also one of the few developing countries that have a household registration system in place. This has made it an interesting case. The author uses new survey data to apply the Appleton et al. (1999) decomposition to the offered wage gap rather than the observed wage gap. Standard errors of the decomposition results are bootstrapped and a robustness check using the propensity score method is conducted.\n",
"role": "assistant"
}
] |
Gender differences in engineers' burnout
|
[
"Gender",
"Stress",
"Israel",
"Performance management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The current study focused on gender differences in burnout among high-tech engineers, a professional group that has rarely been studied by burnout scholars, and it utilized a longitudinal design, often called for (Maslach et al., 2001; Peiro, et al., 2001; Schaufeli and Enzmann, 1998; Shirom, 2003) but rarely carried out in burnout research. In addition to gender differences in burnout, other variables assumed to be related to burnout were investigated including gender differences in style of coping, utilization of peer support and work-home conflict.\nHypotheses: H1. Based on the masculine culture of the engineering profession and the prejudices women engineers encounter, it was predicted that women engineers will report higher levels of burnout compared to men engineers.H2. Gender differences in emotion focused coping were predicted, with women engineers reporting higher levels compared to men engineers, but no gender differences were predicted in problem focused coping.H3. Gender differences in the utilization of social support were predicted, with men reporting higher levels of support from peers and women reporting higher levels of support from family members.H4. Gender differences were predicted in the experience of work-home conflict, with women reporting higher levels of conflict than men.\nMethod: Procedure\nResults: Results of t-tests comparing men and women's levels of burnout in Times 1 and 2 revealed significant gender differences in burnout at both T1 and T2, with women reporting higher levels of burnout in both (Table I).The means of at Time 2 presented in Table I are significantly higher than the mean Israeli national burnout score of 2.8 (SD = 1.2) (Pines, 2005). The t-tests between burnout at Time 1 and Time 2 did not reveal a significant increase during the seven months period (t = 1.50 p = 0.14). 
The fact that the level of burnout remained stable during this period is also evident in the high correlation (0.71, p < 0.01) between burnout at Time 1 and Time 2. Table II presents the results of t-tests comparing men's and women's coping (problem focused vs emotion focused) and use of support (peer support vs family support). There was no gender difference in problem-focused coping, but women reported more emotion-focused coping than did men. In addition, while men received more support from their peers, women received more support from their families. In response to the open question of who is more burned out in engineering, both men and women assumed that women are more burned out, but more women than men thought so: 52 per cent of the men as compared to 84 per cent of the women (χ2(1) = 32.6, p < 0.001). Men's and women's explanations for this gender difference are presented in Table III. As can be seen in Table III, men were more likely to attribute women's higher level of burnout to internal causes that have negative connotations, such as women's greater sensitivity and weakness, as well as their lack of technical understanding and abilities relative to men. On the other hand, women were more likely to attribute women's higher level of burnout to external causes such as slow promotion and the fact that they are primary care providers for their children and family (χ2(1) = 62.3, p < 0.001). Similar findings were revealed in the responses to the question about the greatest sources of burnout in the respondents' own work. 
Women were more likely than men to describe family pressures (57 per cent vs 37 per cent) and time pressures (18 vs 14 per cent) as the main causes of their burnout, whereas men were more likely than women to attribute their burnout to work-related factors (47 vs 21 per cent) (χ2(1) = 21.9, p < 0.002).\nDiscussion: The examination of gender differences in high-tech engineers' burnout revealed higher levels of burnout among women as compared to men engineers at both the start and end of the seven-month period in which the study was conducted. These findings support the first hypothesis of the study as well as previous studies of gender differences in burnout that reported higher levels of burnout among women (e.g. Etzion, 1984, 1987; Etzion and Pines, 1986; Golembiewski et al., 1993; Maslach and Jackson, 1986; Pastore and Judd, 1993; Pines and Aronson, 1988). The higher burnout can be explained by the masculine culture of the engineering profession that makes it difficult for women to be integrated (Robinson and McIlwee, 1991) and by the gendered social, cultural, psychological and economic disadvantages that force women engineers to swim against the tide of prejudice (Kusku et al., 2007). The gender difference in engineers' burnout was also found in the respondents' estimation of who is more burned out in the engineering profession. Here, while both men and women assumed that women engineers are more burned out, more women (84 per cent) than men (52 per cent) thought so. 
This finding is not surprising, because it can be expected that the people who experience more burnout (in this case women) will be more sensitive to it in others (in this case other women). While this gender difference in burnout can be explained by internal causes (such as women's greater sensitivity or their lack of technical understanding), as suggested by the men engineers, it can also be explained by external causes of the type mentioned by the women engineers (such as the fact that women are primary caregivers for their children and family, as well as family and time pressures). It can further be assumed that the fact that their male colleagues attribute their burnout to such negative internal causes contributes to women engineers' burnout. The women's perspective is supported by research on women in engineering, science and technology, which showed that the major barriers to these women's career progression were related to issues such as lack of opportunities for career advancement and lack of encouragement from management (Wynarczyk and Renner, 2006). The other gender differences found in the study - in style of coping, in social support and in work-family conflict - can also help explain the gender difference in burnout. The findings that women engineers use emotion-focused coping more than men but use problem-focused coping as much as men support the second hypothesis of the study as well as studies showing a lack of gender differences in problem-focused coping (e.g. Hamilton and Fagot, 1988; Havlovic and Keenan, 1995) and women's greater tendency to use emotion-focused coping (e.g. Ptacek et al., 1992; Stein and Nyamathi, 1999). 
The lack of gender difference in engineers' problem-focused coping can be explained by the finding that masculine gender role, in both men and women, predicts such coping (Nezu and Nezu, 1987) and by studies that show similarity between women and men engineers in both intellectual abilities and personality attributes such as need for achievement and autonomy (Jagacinski and LeBold, 1981). The gender differences in social support, showing that women receive more support from their families whereas men receive more support from their peers, support the third hypothesis, previous studies (e.g. Etzion and Pines, 1986; Pines, 2003; Piltch et al., 1994; Ptacek et al., 1992) as well as feminist psychoanalytic theory on women's \"self in relations\" (Chodorow, 1978; Miller, 1976) and an evolutionary theory that focused on men's innate tendency to bond (Tiger, 1969). These findings draw attention to the fact that women engineers received less support from their colleagues than men, which can be explained by the masculine culture of the engineering profession that makes it difficult for women to be integrated socially and professionally (Robinson and McIlwee, 1991). This interpretation helps explain not only women engineers' burnout, but also the paucity of women who choose an engineering career, a topic that has received growing attention from policy makers as implying the underutilization of an important human resource. The finding of gender differences in the experience of work-home conflict confirms the fourth hypothesis and studies documenting the relationship between this conflict and burnout (e.g. 
Burke, 1994; Burke and Greenglass, 2001; Etzion and Bailyn, 1994; Kossek and Ozeki, 1999) and studies that documented women's greater tendency to experience this conflict (Etzion and Bailyn, 1994; Neal and Hammer, 2006; Pines and Aronson, 1988). The finding that women engineers experience greater work-family conflict and greater stress as a result of this conflict may be related to the explanation offered by Gerdes (1995) for the difference between women in masculine as compared to traditional careers in the emotional distress caused by stressful events at home, namely that the nature of the work in a masculine career makes it more difficult for women to combine their career and motherhood roles. The gender differences found in the causes of burnout (for women: family and time pressures; for men: work-related factors) support Greenglass' (1991) conclusion that instead of focusing on gender differences in burnout, research should focus on gender-related differences in precursors of burnout - work sources in men and both work and family variables, including role conflict, in women. The finding that the level of burnout reported by high-tech engineers, both men and women, was higher than the Israeli national level of burnout supports Kunda's (1992) qualitative observations of high levels of burnout among high-tech engineers. The comparison between the engineers' burnout at Time 1 and Time 2 did not reveal a significant increase in burnout during the seven-month period. However, the small increase and the means obtained, which suggest danger signs of burnout, may affirm the view of burnout as an insidious process that people are unaware of until it is upon them. The gender differences in burnout, coping, support and causes of burnout can be viewed as contradicting Schneider's (1987) attraction, selection, attrition (ASA) model. 
According to the ASA model, people who are attracted to engineering, who pass the organizational selection that screens out those who do not have what it takes and who develop over time the skills needed to be a successful engineer, end up being rather similar. The gender differences found in the study suggest that this may not be the case. Despite its limitations, especially the reliance on self-report measures and a sample that is unique culturally (Israel), professionally (high-tech engineers) and historically (the study was conducted during a period in which the highly successful Israeli high-tech industry has experienced a crisis), the study, as a longitudinal study, has an advantage over the cross-sectional studies that tend to dominate burnout research. The study has important implications for gender theory and research, by focusing on a small and unique group of women in the highly valued masculine profession of high-tech engineering. In addition, the findings of the study have important practical implications. They provide a tentative answer to the question that was raised recently by Scott and Nolan (2007) and applies both to career progression and the world of paid work of women in engineering, science and technology: what are the social networks and processes which constrain or enable women engineers from progressing in these careers? The answer, it seems, has to do with the ways to encourage and support the small group of women who choose engineering as a career. The need for peer support should be especially noted in light of the fact that women engineers need it more but receive it less than their male counterparts. Table I.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to investigate gender differences in burnout, style of coping and the availability of peer support among high-tech engineers\nDesign/methodology/approach: - A longitudinal study investigated gender differences in burnout, style of coping and the availability of peer support among high-tech engineers, an interesting occupational group from a gender perspective both because of the masculine culture of the engineering profession and the many prejudices against women engineers. Both the masculine culture and the prejudices help explain the paucity of women engineers and predict high levels of burnout among them.\nFindings: - The paper's findings supported this prediction. They revealed a significant gender difference in burnout, with women engineers reporting higher levels of burnout than men. The gender differences in burnout were interpreted as related to other findings: women's greater tendency to utilize emotion-focused coping, their smaller peer support and greater work-family conflict.\nOriginality/value: - In addition to their implications for gender theory and research and for burnout theory and research, the paper's findings point to the need to encourage and support the small and unique group of women engineers.\n",
"role": "assistant"
}
] |
The complexity of technological capital and legal protection mechanisms
|
[
"Intellectual capital",
"Complexity theory",
"Economic sustainability",
"Competitive advantage"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the current economic environment, developing and exploiting the intellectual capital formed by technological capabilities is seen as one of the basic foundations upon which firms should base their competitiveness, especially in dynamic and knowledge-intensive industries (Afuah, 2002; DeCarolis, 2003; Nicholls-Nixon and Woo, 2003; Zott, 2003; Wang et al., 2004). In fact, one of the issues addressed by the strategic management literature has been the analysis of the features of intangible assets that determine the creation and maintenance of competitive advantage, and subsequently appropriating rents, derived from such firm capabilities (Helfat, 2000; Makadok, 2003; Lippman and Rumelt, 2003; Peteraf and Bergen, 2003).The resource-based approach and its subsequent developments - the dynamic capabilities approach (Teece et al., 1997; Bowman and Ambrosini, 2003) or knowledge-based theory (Nonaka and Takeuchi, 1995; Grant, 1996; Mohrman et al., 2003; Spender, 1996; Styhre, 2004; Un and Cuervo-Cazurra, 2004) - have mainly focused on analyzing the criteria that must be fulfilled for resources and capabilities to be a source of sustainable competitive advantage. A study of the previous literature shows that there are almost as many proposals of business asset evaluation criteria as studies based on their analysis. Nevertheless, the variety of terminology and overlap of proposals hinders comparison of such studies and development of the strategy discipline (Peteraf, 1993; Black and Boal, 1994).From our point of view, rather than the mere theoretical description of criteria dealing with the strategic value of knowledge, the most important issue rests on those variables explaining the fulfillment of such criteria. 
Precisely, the purpose of this research is focused on studying the role of legal protection mechanisms in sustaining competitive advantage, taking into account the possible contingent effect of the more or less complex nature of the technological knowledge to be legally protected.From an organizational point of view, our analysis is justified because many different effects of knowledge complexity are still understudied (Kim and Wilemon, 2003; Mote, 2005), like its influence on operations productivity or its effects on either system renewal or external agents' possibilities to understand how leader competitors' capabilities work. This last question will centre our attention on the specific case of technological intellectual capital, since previous empirical models relating knowledge complexity to maintaining competitive advantage do not provide conclusive results at all (McEvily et al., 2000; Spencer, 2003).Furthermore, although some researchers either argue theoretically (Griliches, 1990; Ernst, 1995; Liebeskind, 1996, 1997) or demonstrate empirically (Scherer, 1965; Comanor and Scherer, 1969) that the legal protection of knowledge is one of a firm's most important instruments in taking full advantage of the results of its research[1], other authors support contrasting arguments. Empirical results increasingly show that the constant effort to legally protect knowledge may, in fact, be greater than the potential benefits from doing so (Rivette and Kline, 2000; DeCarolis, 2003; Spencer, 2003). The complete analysis of these variables is oriented to better understanding all these relationships.Besides the sign and intensity of the relationship between variables, the results obtained here offer new empirical evidence on the resource-based view. In addition, they enable us to acknowledge the existence of a substitution effect between both knowledge protection alternatives - legal and complexity-based ones. 
Therefore, although different studies have defended legal protection systems as being highly efficient alternatives in all cases, our study shows evidence that challenges their use under high complexity conditions.The paper is structured as follows. First, the theoretical framework is developed. Second, the model and the hypotheses to be tested are defined. Third, the most important empirical features are addressed, and, finally, results and most relevant conclusions, as well as possible areas of future research, are presented.\n2. Theoretical foundations and model of analysis: 2.1 The concept of technological intellectual capital\n3. Methodology: sample, data analysis and measures: Once we have defined the concept of complexity and the model of analysis, we take on the empirical study. Following the arguments proposed by several authors, we established a homogeneous population of knowledge-intensive firms (Hoskisson et al., 1999; Yeoh and Roth, 1999; DeCarolis, 2003; Douglas and Ryman, 2003; Rothaermel and Deeds, 2004). Therefore, the firms studied are facing the same external conditions as well as similar levels of technological and market opportunity.The empirical analysis was conducted on 52 firms devoted to human and animal health using biotechnological processes and having a specific department for developing R&D activities. Finally, 34 companies decided to participate.The biotechnological industry is one of the most important emerging sectors due to the numerous possible applications that this technology has, many of which are yet to be explored. One of the main problems in identifying the biotechnology sector is that it still lacks its own identity, due to its extremely horizontal nature (ASEBIO, 2000). 
In addition, the practical absence of updated nationwide statistics on firms that carry out activities in Spain made it necessary to prepare a company census using the database created in 1997 by the Scientific Information and Documentation Center (CINDOC)[3].The desired uniformity of firms meant that several criteria had to be fulfilled, especially in relation to the firm belonging to certain segment of activity within the biotechnology industry; therefore, the firms analyzed were those involved in either human or animal health. The widespread social disapproval of biotechnology activities in the food and environmental sectors, recognized by almost all experts interviewed, as well as Orsenigo's (2001) comment supporting the same idea, made us aware that analyzing the companies involved would be much more difficult.The activities of the firms we included refer to biotechnology products and services that focus on a limited number of therapeutic areas, being involved in the treatment, prevention and diagnosis of diseases (production of vaccines, antibiotics, medicines, diagnostic kits, xeno-transplants, genomic medicines or cellular engineering). Following suggestions made by industry experts, only small Spanish firms (up to 50 employees) developing biotechnological engineering were included, as bigger companies working on it are usually multinational pharmaceutical groups making only technological efforts partially in Spain; in addition to this, their inclusion would alter the unit of analysis. 
Furthermore, this kind of organization constitutes a minimal proportion of the entire population. The main method features are summarized in Table I. The study included two main phases: during the first phase, opinions and suggestions from managers and industry experts were used to identify:* some key industry success factors;* suitable measures for the included variables; and* some key issues dealing with biotechnological innovations. During the second phase, the accumulated information was employed to define a preliminary version of the questionnaire, which was sent to managers after being pre-tested and redefined. The nature of the sample chosen for our study substantially fulfils the profile for the statistical data analysis process. The limited size of the firms chosen, and thus of the sample itself, made it advisable to use non-parametric statistics (Siegel and Castellan, 1988), as there is no guarantee that the complete model would behave like a multivariate normal variable. Besides, given the number of observations, the number of variables included in the model makes it less advisable to use parametric statistical methods, due to the resulting degrees of freedom. Non-parametric methods do not assume that the data follow a particular distribution and, therefore, do not estimate distributional parameters. The data available and the hypotheses defined suggested that they should be tested with the Mann-Whitney U test, also known as the Wilcoxon rank-sum (W) test, using the SPSS statistics package. Both tests assess whether two independent samples are drawn from the same population. To do that, they verify whether observations from one sample differ substantially from those in the other sample. To implement the test it was necessary to classify firms into two groups according to their relative position to the median of the dependent variable (i.e. maintenance of competitive advantage). 
These were the X group, constituted by those firms with non-vulnerable technological knowledge, and the Y group, formed by those firms with more vulnerable technological knowledge. We chose the median value as we consider it reflects more appropriately the firm's relative position to the rest of its competitors. Furthermore, the shortcomings of using the mean value when deviation is high are avoided too. Once firms were classified, we tested whether both groups of companies differed in the character of the knowledge embedded in their respective technological capital and in their propensity to protect such knowledge legally. Data used for the dependent variable were collected for three years (2001-2003), and the value used to test hypotheses was the mean for the entire period of analysis. With respect to the independent variable, we asked managers to estimate the complexity of the accumulated capabilities by which firms developed successful innovations from 2001 to 2003. To some extent, the empirical evidence obtained can be interpreted from a process approach, since the independent variable reflects the characterization of previously developed capabilities. The next section is devoted to defining measures[4].3.1 Measuring technological capital complexity\n4. Results: Our first comment is related to control variables. As we used non-parametric tests, we had to develop an individual analysis of these variables. The results indicated that, for the sample studied, the degree of technology novelty and the market value of technology did not have significant effects on sustaining competitive advantage. The homogeneous character of the population analyzed may be one reason for such results. 
Thus, we can conclude that the effects of complexity and legal protection mechanisms on maintaining competitive advantage are not substantially altered by technology novelty and the market value of technological knowledge. In addition to the previous classification of firms into groups X and Y, we developed a descriptive analysis to characterize the firms' technological capital in terms of complexity, distinguishing firms with complex technological intellectual capital from those with more transparent capital[6]. The first hypothesis suggested the existence of a positive relationship between complexity and \"barriers\" to imitation. The empirical analysis tested whether knowledge imitability was greater in firms with simple or transparent knowledge than in the remaining organizations. As can be observed in the table of ranks in Table V, firms whose knowledge is less likely to be imitated show higher levels of overall complexity (12.41>8.44); this relationship is significant at the 10 percent level, according to an analysis of the significance of the statistic Z1; therefore, the higher the degree of a knowledge's complexity, the greater the natural protection against imitation by competitors. These results are coherent with widespread theories in the literature, as well as with the several empirical studies that directly link tacit knowledge to certain measures of technological success (e.g. Afuah, 2002; Balconi, 2002; Figueiredo, 2002; McEvily and Chakravarthy, 2002; Miller et al., 2002; Zahra and Nielsen, 2002; Nicholls-Nixon and Woo, 2003). With respect to legal protection mechanisms, the results for H2 showed no significant relationship between them and imitability for the entire sample. Nevertheless, it was possible that the complexity of the knowledge made its protection by additional legal \"barriers\" unnecessary. Therefore, we analyzed the effect of legal systems on imitability for firms having more transparent or simple knowledge only. 
In this case the results showed that legal systems were useful to build \"barriers\" to imitation (13.00>7.44). These results allow us to conclude the existence of a substitution effect between legal and natural systems to prevent technological knowledge from being imitated. H1 can therefore be accepted, whereas H2 can only be accepted in the case of non-complex technological knowledge.\n5. Discussion, limitations, and future research: This research, in addition to theoretically advancing the study of different concepts and variables, is of great value for strategic technological decision-making. Within the resource-based view context, our results provide new empirical evidence supporting the view that the complexity of technological intellectual capital has a great impact on evaluating the effectiveness of legal protection mechanisms and sustaining competitive advantage. Also, there is some empirical evidence to answer \"yes\" to the following question: does the complexity of technological intellectual capital challenge the effectiveness of legal protection mechanisms? Results for the relationship between complexity and \"barriers\" to imitation are in line with previous and current research that highlights the strategic role of tacit and specific knowledge in wealth creation (Afuah, 2002; Balconi, 2002; Figueiredo, 2002; McEvily and Chakravarthy, 2002; Miller et al., 2002; Zahra and Nielsen, 2002; Nicholls-Nixon and Woo, 2003). In addition, we found that legal protection is only an efficient mechanism against imitation when technological knowledge is not complex; otherwise, the efforts made are not converted into greater protection against such imitation, as is supported by the most recent literature dealing with the issue (Rivette and Kline, 2000; DeCarolis, 2003; Spencer, 2003). 
Nevertheless, the existence of an internal structure within the complexity construct may open several new issues in the relationship between complexity of assets and firm value creation to be addressed in future research.While our results provide statistical support for our hypotheses, we must also acknowledge several limitations to be overcome in future research. Our focus on the biotechnology industry raises questions about the generalizability of our study beyond this industry. Defining a wider population of firms would have allowed us to use regression equations, where both independent and control variables could have been included simultaneously, enriching our results and conclusions notably.Although we defined variables and measures in such a way to design a process model, we have to recognize the prominent cross-sectional nature of the research.Furthermore, since we only included the relationship between complexity and sustaining competitive advantage, in future studies other complementary issues regarding value creation process could be analyzed, like gaining competitive advantage, rent appropriation or maintaining competitive advantage by other alternative processes such as barriers to substitution or continuous improvement.Our next research project will be designed to overcome these shortcomings and contrast new relationships in order to complete a model that integrates the main factors responsible for technological and entrepreneurial success in a specific environment.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to analyze the effectiveness of legal protection mechanisms to sustain competitive advantage and appropriate technological intellectual capital, taking into account its degree of complexity.\nDesign/methodology/approach: - The empirical analysis was conducted on a sample of Spanish biotechnology firms, devoted to human and animal health, having a specific department for carrying on R&D activities from 2001 to 2003. The hypotheses were tested by non-parametric techniques after collecting data by sending a questionnaire as the primary source of information.\nFindings: - The results show that the higher the complexity of technological knowledge, the greater the protection of intellectual capital against imitation. In addition, it is proven that legal protection is an effective means of protection against imitation only when technological knowledge has a low degree of complexity.\nResearch limitations/implications: - Within the resource-based view context, our results provide new empirical evidence about the great impact of the complexity of technological intellectual capital on evaluating the effectiveness of legal protection mechanisms and sustaining competitive advantage.\nOriginality/value: - This research is of great value for strategic technological decision-making. Although several studies present the theoretical conditions that explain how firms can create imitation \"barriers\", the empirical evidence found is very scarce and controversial, especially for the Spanish biotechnology industry. The results obtained here offer new empirical evidence on the resource-based view.\n",
"role": "assistant"
}
] |
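The non-parametric procedure described in the preceding entry (a median split on the dependent variable, followed by a Mann-Whitney U test on group differences) can be sketched in Python. This is an illustrative sketch only: the data are synthetic and the variable names (`complexity`, `advantage`) are stand-ins, not the study's actual measures.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Synthetic firm-level data standing in for the study's variables:
# a complexity score and a measure of sustained competitive advantage
# (the dependent variable). Effect sizes are invented for illustration.
complexity = rng.normal(5.0, 1.5, size=34)
advantage = 0.6 * complexity + rng.normal(0.0, 1.0, size=34)

# Median split on the dependent variable, as described in the study:
# group X = firms above the median (less vulnerable knowledge),
# group Y = firms at or below the median (more vulnerable knowledge).
median_adv = np.median(advantage)
group_x = complexity[advantage > median_adv]
group_y = complexity[advantage <= median_adv]

# Mann-Whitney U test (equivalent to the Wilcoxon rank-sum test):
# do the two groups differ in their complexity scores?
stat, p_value = mannwhitneyu(group_x, group_y, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

The study ran the equivalent test in SPSS; `scipy.stats.mannwhitneyu` with `alternative="two-sided"` yields the same accept/reject decision, since the U and W statistics are linearly related.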
Does relationship marketing improve customer relationship satisfaction and loyalty?
|
[
"Relationship marketing",
"Customer satisfaction",
"Customer loyalty",
"Segmentation",
"Banking"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Relationship marketing (RM) has been put forth as a way for firms to develop mutually beneficial and valuable long-term relationships with customers (Ravald and Gronroos, 1996). RM is believed to work most effectively when customers are highly involved in the good or service, there is an element of personal interaction, and customers are willing to engage in relationship building activities (O'Malley and Tynan, 2000). Customer oriented RM programmes that enhance the flow of information between the bank and customers increase customers' positive feelings towards their bank, thereby also increasing satisfaction and relationship strength (Barnes and Howlett, 1998; Ennew and Binks, 1996). Although past studies provide knowledge regarding the nature and importance of banking relationships from a customer (O'Loughlin et al., 2004), business (Madill et al., 2002), and dyadic (Paulin et al., 1998; Akerlund, 2004) viewpoint, some questions remain unanswered. For example, we lack information on the extent to which specific RM programmes have succeeded in strengthening relationships in different segments. RM can be, but need not be, directed towards all customers of the bank. Most retail banks are characterised by having both profitable and unprofitable customers, where the former subsidise the latter (Carson et al., 2004; Zeithaml et al., 2001). Retaining the profitable customers has become increasingly difficult in a competitive environment where other financial institutions specialise in offering attractive services and prices to this lucrative segment. Since investments across all customer segments will not yield similar returns (Zeithaml et al., 2001), RM is often directed only at the most profitable segments defined by, for example, income and wealth (Abratt and Russell, 1999). There is, however, a lack of research on how RM activities affect relationship outcomes in the target segment. 
This is the focus of our study.The aim of this study is to investigate the RM strategy of a retail bank, in order to discover whether the implementation of the strategy has had the intended strengthening effect on customer relationships. A survey was conducted on two different profitability segments, collecting data on customer relationship satisfaction, perceived relationship improvement, and loyalty towards the bank. Of the two customer segments, the more profitable segment (Segment 1) was directly exposed to the RM programme, whereas the less profitable segment (Segment 2) was not.The paper is structured as follows: The literature review outlines previous findings on RM, the interrelationship between customer relationship satisfaction and loyalty, and customer profitability segmentation. The empirical part of the paper describes the RM programme at the case bank, and the design, collection and analysis of survey data. The paper concludes with a discussion of the results, managerial implications, limitations of the study and suggestions for further research.\nLiterature review: Relationship marketing\nMethod: The empirical study was conducted in 2003 at a member cooperative bank branch of the OP Bank Group (formerly the OKO Bank Group), one of the major bank groups in Finland. The case bank was chosen for this study because it had fully implemented an RM strategy that was first launched in 1998. Our case study examines the RM strategy and investigates whether its implementation has led to customer-perceived improvements in the banking relationship, and to increased loyalty in the targeted segment.In line with the recommendations of Eisenhardt (1989), different methods were used to collect data. Primary and secondary data on the bank's RM strategy were first gathered from the bank's web site, annual reports, information brochures, seminar presentations by bank representatives, and interviews with the bank manager and staff. 
A survey was then designed to investigate customer relationship satisfaction, customer-perceived improvement in the relationship during the past 4-5 years, and customer loyalty towards the bank. The survey was directed towards two profitability segments. Customers in one of the segments had been exposed to a customer oriented RM strategy, whereas customers in the other segment had been subjected to a more sales oriented marketing strategy. Thus, the aim was to investigate if the two segments differed in their perceptions of the bank relationship, or more precisely, if customers who had been exposed to the customer oriented RM strategy had more positive perceptions of the relationship.The following sections describe the OP Bank Group's RM strategy, its segmentation of customers, and management's perceptions of the new approach. After this introduction of the strategy itself, the customer survey will be described and its results will be analysed.Relationship marketing and profitability segmentation at the bank\nSurvey findings: In general, the mean average scores of all 319 responses were quite high, ranging from 3.45 to 4.38 on a five-point scale (Table II). The mean scores for perceived improvement over the past 4-5 years also indicate that customers had perceived improvements in their relationship with the case bank. However, contrary to expectations, independent t-tests revealed no significant differences between Segment 1 and Segment 2 on any of the items. It should be noted that the data was not normally distributed. Many of the items were negatively skewed and peaked, in particular items (1) and (4). Customers' average length of relationship with the bank (Table I) was also similar between the segments (p=0.869).Before proceeding to explore differences in the interrelationships between constructs, the items were factor analysed. 
The Kaiser-Meyer-Olkin (KMO) value of 0.922 was high (Kaiser and Rice, 1974) and indicated that the data were suitable for factor analysis. Principal component factor analysis with Varimax rotation yielded the three expected factors, together explaining 57.59 per cent of the variance. Two statements on customer needs were removed because they did not load highly on any of the factors, or form a factor of their own. The retained items (marked with an asterisk in Table II) loaded clearly and highest on the expected constructs. Cronbach's alphas for all three constructs were above 0.7, which is considered adequate construct reliability (Hair et al., 1998). A vital question was whether the implementation of a RM strategy on Segment 1 customers had any differential effect on customer loyalty, compared with Segment 2 customers. Although no significant differences were found between the item means of Segment 1 and Segment 2 customers, it is necessary to identify any differences between the two segments with regards to the antecedents of loyalty. In order to determine differences between Segment 1 and Segment 2 customers regarding the relative impact of relationship satisfaction and perceived improvements in the relationship on customer loyalty, regression analyses were conducted. To avoid collinearity effects, a new factor analysis was conducted on the nine variables forming customer relationship satisfaction and relationship improvement, and the resulting uncorrelated factor scores were used as independent variables in the regressions. The regression results are summarised in Table III. The results show that loyalty in both segments depends more on the current level of relationship satisfaction than on perceived relationship improvements. It is also notable how similar the effects of perceived relationship improvement are in both segments. 
Furthermore, a comparison between explained variances in the models reveals that customer loyalty in Segment 2 is much better explained by the two independent variables (R2=0.559) than customer loyalty in Segment 1 (R2=0.273).\nDiscussion: The aim of this study was to investigate the RM strategy of a case bank and to analyse whether its implementation resulted in relationship satisfaction and customer-perceived improvements in the bank relationship, thereby increasing loyalty. The most profitable customer segment is small but important to the bank. Therefore, the bank has paid particular attention to the needs and wishes of these customers compared with those of other customer groups. Consequently, it was expected that Segment 1 customers would experience relationship improvements and that the improvements would be reflected in higher loyalty. However, this was not the case.The case bank's RM strategy, with the intention of a customer oriented focus on Segment 1 customers, did not lead to higher loyalty in Segment 1 customers compared with Segment 2 customers, who were supposed to receive a sales oriented treatment. No significant differences were found between the two groups on any of the relationship strength items: relationship satisfaction, relationship improvement or loyalty. Furthermore, relationship satisfaction and relationship improvement explained a relatively small amount of the variance in customer loyalty in Segment 1.There are several explanations for these findings. One possibility is that customers of different segments give different meaning to, for example, what it takes to experience relationship satisfaction. Profitable customers are likely to be aware of their worth to the bank, and thus have a smaller tolerance for anything less than excellent service. They are likely to be more demanding and might more easily feel that the service falls short of their high expectations. 
Although they receive more personal attention, they may perceive few other benefits in the relationship. When most transactions can be performed as self-services, service improvement and RM efforts may need to be directed towards filling technological gaps (Lang and Colgate, 2003), and facilitating technological relationships. Furthermore, it is possible that other factors, not included in the current study, would have been better at explaining the loyalty behaviour of the most profitable customers. Such variables include, for example, customers' service involvement, their competing banking relationships at other banks and core service perceptions. Answers to the open-ended question also indicated the existence of additional customer concerns.Based on the freely formulated customer comments, price issues related to fees and interest rates were predominantly a concern for customers in Segment 2, whereas only Segment 1 customers made complaints regarding the complexity or inadequacy of the bonus programme. Segment 1 customers have the greatest earnings potential from the bonus programme, since it is based on the customers' volume of business. The programme was revised in the same year as this study was conducted.With regard to what Segment 1 customers might lack in their banking relationship, several customers suggested \"investment evenings\" where they could receive information about the different options available for the investment of their money. Although this would require extra time and resources, it would potentially improve the relationships with those valued customers who otherwise do not have time to visit the bank and who conduct most or all of their business with the bank over the internet.Both Segment 1 and Segment 2 customers added positive comments concerning the friendly and personal service during encounters with bank employees. This indicates that the customer oriented attitude is not limited to the employees' interactions with Segment 1 customers. 
The questionnaire admittedly did not test for differences in perceived customer/sales orientation between the segments. However, if both segments indeed perceive the employees to be customer oriented, it could explain why no significant differences were found between the segments. Although such a situation does not fit in with the bank's strategy, it can hardly be seen as a failure. As long as Segment 1 customers perceive that they are prioritised and the volume of Segment 2 customers' banking affairs is increased, a general customer oriented atmosphere can only be commended. However, since Segment 1 customers may have higher expectations with regards to being treated in a customer oriented way, it is possible that improvements in the Segment 1 relationships compared to Segment 2 relationships are nullified by the higher expectations of the more profitable customers. Hence, the difference would not be observable in the responses to the questionnaire.Another possibility is that customers, managers, and employees have different views of how RM should be conducted. Activities that are viewed as extremely customer oriented from the bank management's point of view, such as actively contacting customers, offering them personal meetings, and mapping their needs on a continual basis, may be viewed by the customer as normal services offered by, or expected from, any bank. Furthermore, the changes that have occurred in Segment 1 customers' banking relationships over the past 4-5 years, although positive, may be viewed as relatively insignificant when placed in the perspective of the customers' entire relationship history with the bank.Managerial implications\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this study is to investigate the relationship marketing (RM) strategy of a retail bank and examine whether - after its implementation - customer relationships were strengthened through perceived improvements in the banking relationship and consequent loyalty towards the bank.\nDesign/methodology/approach: - A survey was conducted on two profitability segments, of which the more profitable segment had been directly exposed to a customer oriented RM strategy, whereas the less profitable segment had been subjected to more sales oriented marketing communications.\nFindings: - No significant differences were found between the segments on customers' evaluations of the service relationship or their loyalty toward the bank. Furthermore, regression analysis revealed that relationship satisfaction was less important as a determinant of loyalty in the more profitable segment.\nResearch limitations/implications: - This study was conducted as a case study of one specific branch of a bank group in Finland, which limits the external validity of its results. It was not possible to ascertain if, or to what extent, customers of the more profitable segment had received the intended RM treatment. Other limitations are also discussed.\nPractical implications: - Customer orientation is desirable within retail banking and more studies are needed on the differential drivers of loyalty across customer profitability segments. By identifying the aspects of a banking relationship that are more highly valued among more profitable customers than among less profitable customers, bank managers would be able to more effectively devise appropriate strategies for different segments.\nOriginality/value: - The study contributes to the RM literature and marketing of financial services by providing empirical evidence of the effects of RM activities on customer relationship perceptions in different profitability segments.\n",
"role": "assistant"
}
] |
"A balancing act": host leadership as a model for the police force
|
[
"Police",
"Leadership",
"Host"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: We have a tendency to emphasize the importance of a leader over their team. We are typically inclined to see traditional, \"heroic\" qualities of hard work, expertise and control as definitive of a good leader. This style of leadership is defined by the strength of the leader's will and the deference of their team, who act almost like an extension of the leader: executing duties without asking questions. Internationally renowned management thinker Henry Mintzberg discussed as early as 1973 the way in which our approach to leadership has led us to an emphasis on style over substance, on leader over follower (Mintzberg, 1973).\nOur response: In light of the new approaches advocated in the report, crime and social justice, consultant and former Assistant Chief Constable for Hertfordshire Chris Miller and I put our heads together to explore a new way of thinking about the leadership role. Moving away from the metaphor of the \"hero\", we focus on the idea of a \"host\" leader.\nOn the beat: A host is clearly an authority figure, but one whose authority comes from personal connection and invitation, engagement, attention to detail and being seen to be making the running. Hosts are both proactive and reactive, and so too must police officers be.\nAt senior operational ranks: Superintendents and Chief Superintendents rarely deal with operational matters; they oversee and lead those who do. The emphasis is on overseeing strategy, standards and day-to-day operational policy and leading improvement processes. 
Senior leaders set the tone for the units they oversee; engaging them and getting them to buy into new commitments to change is a crucial part of their responsibilities.\nUp at the top: At and above the level of Assistant Chief Constable/Commander, the key roles played are those of developing strategy and promulgating it both internally and externally, dealing with internal and external teams and representing the public face of the service by handling media interest.\nStriking a balance: Hosts participate in the events they hold; their success is largely a question of discretion and of balancing the need to step up - to plan, arrange and direct - with the need to bring out the best in others. Leaders at all levels will want to develop this skill for balancing passive with active leadership techniques: too much distance from the team robs them of their credibility and connection, while too little can cause them to lose both perspective and effectiveness as a leader of change.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to explore a new way of thinking about leadership within the police force, moving away from the metaphor of the \"hero\" and focusing on the idea of a \"host\" leader.\nDesign/methodology/approach: This paper uses the College of Policing Leadership Review (2015) to examine current approaches to leadership within the police force and considers how host leadership methods could benefit police at all levels.\nFindings: There is a long-term challenge for the police force, as initial selection tends to test for heroic skills, whereas engagement and consensus building then become more important. As the need for more team-based, engaging approaches continues to drive leadership thinking in the UK's police service, populating the service with individuals who are adept at drawing out the best in their teams, without treating them as foot soldiers, is crucial.\nOriginality/value: This paper is an original piece of research from Dr McKergow and Mr Miller.\n",
"role": "assistant"
}
] |
Regendering of dynamic managerial capabilities in the context of binary perspectives on gender diversity
|
[
"Gender diversity",
"Diversity management",
"Gender categories",
"Regendering",
"Dynamic managerial capabilities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Dynamic managerial capability remains a well-charted terrain and a theme with a focal concern on managers' human capital and related skills (Adner and Helfat, 2003; Coff and Kryscynski, 2011; Holcomb et al., 2009; Huy and Zott, 2019; Kurtmollaiev et al., 2018; Moliterno and Wiersema, 2007) which play a significant role in decision-making outcomes of organisations which pursue superior performance (Al Ariss et al., 2014; Collings et al., 2019; Kucukaltan et al., 2016). Yet, the interplay of gender and dynamic managerial capabilities remains underexplored (Buil-Fabrega et al., 2017; Coleman and Kariv, 2013; Correa et al., 2019). Exploring the interplay of gender and the dynamic managerial capabilities approach is particularly relevant as organisations, which increasingly operate in gender diverse environments and with new gender frames (Westbrook and Saperstein, 2015), need to consider the unique implications of this interplay (Kor, 2006; Krishnan and Park, 2005; Quintana-Garcia and Benavides-Velasco, 2016). Furthermore, there are theoretical and methodological considerations for scholars if gender is considered more seriously in researching dynamic capabilities.\nMethods: Authors of the manuscript are located in the field of strategy, gender theory and organisational behaviour. Discussions among the authors have revealed that the field of dynamic capabilities, which is traditionally studied in the field of strategy, lacks gender analyses and suffers from gender blindness as a result. This curiosity has led us to explore the extant literature using a number of techniques. 
We have conducted key word searches, examined key texts in the strategic management field in relation to dynamic capabilities and, using gender theory as a lens, critically examined the available literature, remaining focused on the problematic of gender blindness.\nDynamic managerial capabilities and gender: Gender diversity relates to dynamic managerial capabilities in different ways. First, gender diversity influences the uniqueness and effectiveness of the micro-foundations of dynamic managerial capabilities. The gender-related micro-foundations, which are the individual-level antecedents of dynamic managerial capabilities (Schilke et al., 2018), are often identified as sex differences between women and men's mental models and knowledge structures (Eggers and Kaplan, 2013), cognition-related constructs (Adner and Helfat, 2003; Helfat and Peteraf, 2015), emotion regulation actions (Huy and Zott, 2019) and managerial qualities as both human capital (Castanias and Helfat, 2001; Wright et al., 2014) and social capital (Blyler and Coff, 2003; Hernandez-Carrion et al., 2017). Misappropriation of gender as sex differences leads to essentialisation (the reduction of gender to biological determinism) of women and men's dynamic managerial capabilities. In these pieces of research, gender is treated as a binary category without further elaboration of gender categories as listed above, reducing the complexity of gender differences to sex differences.\nRegendering of dynamic managerial capabilities: Based on the above review, we propose regendering of dynamic managerial capabilities in a process relational framework. First, the antecedents of such an intervention are considered, i.e. preparedness of the leadership, top management team and the organisational structures and cultures. 
Second, regendering intervention for dynamic managerial capabilities is presented in two aspects:\nConclusions: The domain of research on dynamic capabilities is characterised by male domination in terms of authors and practitioners in this field. Furthermore, gender diversity in the context of dynamic managerial capabilities has received scant attention from scholars, with only a few exceptions (Roberson et al., 2017). Even in those rare cases when dynamic managerial capabilities are explored from gendered lenses, the treatment of gender in these studies suffered from some flaws: First, approximation of gender to sex, leading to essentialising differences between women and men in management. Second, the reduction of the complexity of gender to binary gender categories, i.e. women and men at the expense of ignoring complexity of a wider repertoires of gender. Third, gender frames in dynamic managerial capabilities research fail to capture the modernisation of gender relations. To transcend these flaws, we suggested expansion of gender measures in research to include categories beyond women and men.\n",
"role": "user"
},
{
"content": "Purpose: The interplay between gender and dynamic managerial capabilities is not well studied in the extant literature. This paper aims to explore how dynamic managerial capabilities, as prized qualities in the job market, are framed in gendered ways and how the gendering process disadvantages female and male workers for different reasons and harms the organisations, which use the managerial capabilities approach without proofing it for gender biases.\nDesign/methodology/approach: An extensive literature review was conducted and a framework that offers a new gender perspective was developed.\nFindings: A number of ways dynamic managerial capabilities may be proofed for gender biases and how a gender-balanced framing of dynamic managerial capabilities may be achieved are identified.\nOriginality/value: This paper contributes to the development of a new gender perspective, called regendering of dynamic managerial capabilities, which frees the concept from its binary frames of gender and assumptions of gender neutrality, with a view to capturing gender diversity in a way which is closer to its nature in theory and practice of dynamic managerial capabilities.\n",
"role": "assistant"
}
] |
Sustainability management control systems in higher education institutions from measurement to management
|
[
"Sustainability",
"Levers of control",
"Higher education institutions (HEI)",
"Environmental management performance (EMP)",
"Management control systems (MCS)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Call for sustainability management controls in higher education institutions: As important social institutions, higher education institutions (HEIs) play a key role in shaping the process of sustainable development by broadcasting and distributing knowledge and skills (Amaral et al., 2015). To achieve this goal, the institutionalization of sustainability within the organization is essential. The course of implementing sustainability can proceed along various routes; ideally, however, it pursues a holistic approach controlled by a management process (Nolan, 2012). Therefore, it is necessary to go beyond the discussions of assessing and measuring the outcomes of activities and turn toward the managerial perspective of applying steering processes. In this context, the implementation of management systems and the application of business tools can contribute to the institutionalization of sustainability efforts in HEIs (Disterheft et al., 2012). Subsequently, this results in a professionalization of functions through actively managing relevant issues (Amaral et al., 2015). The role of management structures therefore is an essential capacity for successfully operating such efforts (Sammalisto et al., 2015).\n2. Theoretical background: The discussion on sustainability in HEIs often starts with the examination of engagement through participatory approaches as a major factor for a successful implementation (Arroyo, 2017; Clarke and Kouri, 2009; Disterheft et al., 2015). Indeed, voluntary engagement does play a key role in that process, though this must not obscure the fact that commitment within management and leadership is essential when pursuing the ideal of a whole institution approach (Brinkhurst, Rose et al., 2011; Lozano, 2006a; Verhulst and Lambrechts, 2015). Management commitment helps to establish the structures necessary for successful saturation in HE bodies. 
With a rising professionalization of public management through the adoption of business processes, the application of structured steering mechanisms for HEIs also becomes evident (Arjalies and Mundy, 2013). In that context, the application of MCS serves as a proper tool that allows a structured management process for environmental sustainability to be operated (Pondeville et al., 2013).\n3. Data collection and analysis: The study applies a systematic review of scientific papers and academic publications following a transparent and structured multi-stage process after Fink (2005) and Tranfield et al. (2003).\n4. Results: The following section is structured into a bibliographic analysis and a content analysis following the recommendations of Schaltegger and Wagner (2017) and Seuring and Gold (2012).\n5. Propositions: EMP as a bundle of policies, objectives, structures, processes and monitoring finally enables propositions to be made for the implementation of management controls and for the determination of environmental (management) performance of HEIs. With the examination of perceptible performance outcomes, it is possible to compress the existing data into implications for research and practice.\n6. Discussion and conclusion: The present study examines the environmental performance and the operationalization of environmental issues in HEIs' management. The study gives insight into the status quo of implementation efforts and examines HEIs' environmental performance along the dimensions of policy, objectives, structures, processes and monitoring. The results show that performance issues play a role within HEIs' environmental management, though the distribution across the dimensions differs. The content analysis shows that performance in the dimensions of EMP stands in interrelation with a structured management approach to environmental sustainability. The perceivable EMP of HEIs (so far) does not follow a common practice.\n",
"role": "user"
},
{
"content": "Purpose: As social institutions, higher education institutions (HEIs) play a key role in the distribution of knowledge and skills for sustainable development and societal structures. To fulfill this task, the institutionalization of sustainability within the organization's structures is essential. The purpose of this paper is to provide an overview of the status quo of environmental management performance (EMP) in HEIs and conclusions for the application of management control systems.\nDesign/methodology/approach: Within a systematic literature review, 56 international publications were analyzed along a qualitative thematic content analysis. The coding followed the dimensions of EMP by Trumpp et al.\nFindings: Structuring environmental sustainability efforts along the concept of EMP reveals a major weakness in environmental sustainability management of HEIs. Therefore, the study proposes a model for a processual integration of steering mechanisms for management bodies to systemize appropriate efforts to gain excellence in operations.\nOriginality/value: By applying the concept of EMP by Trumpp et al., this systematic review provides an overview of the status quo of environmental management performance at HEIs and proposes a model for the implementation of advanced top-down steering striving for a whole institution approach.\n",
"role": "assistant"
}
] |
QFD-based modular logistics service design
|
[
"Quality function deployment",
"Design",
"Service quality assurance",
"Third party vendors",
"Modularity",
"Service design",
"House of quality",
"Third party logistics"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\n1. Introduction: With the explosive growth of the global service economy, outsourcing logistics activities to third-party logistics providers (3PLs) has increased rapidly (Bask, 2001) and is now a common practice in many industries, normally driven by a strategy of the companies focusing on their core competency and cost benefits (Tate and Ellram, 2009). Meanwhile, 3PLs face fierce competition, especially due to varying, fast-changing and customised requirements from their customers. However, the logistics industry is not amongst the most innovative industries even in developed countries like the USA and the UK. The main reason for this is that the logistics industry is still in the evolutionary rather than the revolutionary stage (Mena et al., 2007). Consequently, designing logistics services effectively and efficiently and providing and managing service variety to customers with high service quality have become critical success factors to 3PLs.The purpose of this research is to apply the QFD concept and the HOQ tool to better understand and transform customer requirements into designs of service, process and activity to ensure the service quality. Another focus is to construct a logistics service platform following the modular logic within a 3PL company to provide and manage service variety to customers. This paper expects that QFD/HOQ, applied in logistics service design, will help 3PLs to improve their performance (service quality) in effectively establishing linkages between customer requirements and service specifications (design target values). Meanwhile, the three-layer modular logistics service platform is expected to contribute to providing and managing service variety in 3PL companies. 
The integration of QFD and modularity is also helpful to 3PL companies to better deploy their resources to achieve higher customer service levels. For one thing, effective service design tools are urgently needed in the logistics service sector in order to ensure service quality to satisfy customers. One of the essential characteristics of logistics services is that the customer critically affects the design and development of such services (Choy et al., 2008). In order to achieve high service quality and customer satisfaction, 3PLs need to better understand their customers' businesses, and then transform their requirements into service designs. QFD has been regarded as a useful method to understand customer requirements and to develop comprehensive products or service specifications (Bottani and Rizzi, 2006). Its HOQ tool is used to define the relationships between customer requirements and product capabilities. Considering its application in the logistics sector, Baki et al. (2009) presented a case study to employ QFD together with different scientific methodologies to design the logistics service in a cargo company, which proves that QFD is helpful in transferring the strategic service quality needs of customers into cargo services. This is the reason why this paper integrates QFD in logistics service design for 3PLs. Also, 3PLs have to provide various customised services to satisfy their customers' needs. The capability to provide and manage service variety (Harvey et al., 1997) is needed by 3PLs, just as the capability to provide and manage product variety (Pil and Holweg, 2004) is required by manufacturing companies. In general, service variety is provided to the customer by delivering options via the service process (Silvestro, 1999). Hence, current research on service design is mainly focused on the design of the service process (Kindstrom and Kowalkowski, 2009) and analysis of the service process (Geum et al., 2009). 
Since the concept of modularity has become one of the most prevalent strategies to successfully provide and manage product variety (Campagnolo and Camuffo, 2009), this paper applies modular logic in logistics service design, and also develops a modular logistics service platform to ensure that 3PLs manage service variety effectively and create value for their customers (Pekkarinen and Ulkuniemi, 2008). Establishing a service platform has been proven to quickly (re)configure and (re)combine services to achieve service variety and to meet the customised logistics needs of customers in industries including insurance (Meyer and DeTore, 2001), health (Meyer et al., 2007), and logistics (Cambra-Fierro and Ruiz-Benitez, 2009).This paper reviews the current literature dealing with service design, quality, modularity and platform concepts to design and provide new logistics services. In this research, QFD is used as a primary tool to transform customer requirements into service targets in logistics service design. A conceptual framework of integrating the QFD/HOQ tool with a modular service platform is then elaborated upon through a case study of a Chinese 3PL provider with three customers from the automotive, apparel, and home appliances industries. Implications, limitations and future research directions are summarised in the conclusion.\n2. Literature review: 2.1 Modularity\n3. Methodology: 3.1 Case study approach\n4. Findings: 4.1 Logistics service design using QFD\n5. Discussion: 5.1 Framework of QFD-based modular logistics service design\n6. Conclusions: 6.1 Theoretical contributions\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to develop a framework of QFD (quality function deployment)-based logistics service design to integrate the HOQ (house of quality) technique and modular logic to help in designing logistics services with high quality and a large service variety.\nDesign/methodology/approach: - Based on a literature review, a conceptual research framework is built integrating the QFD method and modular logic together. A case study is used to illustrate a real application in logistics service design of the third-party logistics (3PL) provider.\nFindings: - The results show that QFD and modularity used simultaneously as design principles can ensure service design quality at three layers (service, process, activity) in the modular logistics service platform.\nResearch limitations/implications: - This paper provides multi-disciplinary insights for both industry and academics on how QFD/HOQ and modular logic can be integrated to systematically translate customer requirements into logistics service designs.\nPractical implications: - The framework proposed is directed to show how, at the operational level, the service providers can transform customer requirements to customer value with modular services and develop new service modules more quickly for new customers that have not been served before.\nOriginality/value: - The resulting framework combining QFD philosophy and modular logic, particularly integrating three-level HOQs paralleled with three layers in the modular service platform, adds knowledge in the research on service design, operations management and marketing.\n",
"role": "assistant"
}
] |
Using the Shapley value of stocks as systematic risk
|
[
"Beta",
"Mean-variance frontier",
"Optimal portfolios"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The purpose of this paper is to calculate the Shapley value (Shapley, 1953) of securities in optimal portfolios to estimate the systematic risk of individual stocks. The Shapley value emerges from cooperative game theory where it is used to measure the contribution of each player to the common goal of a game. Lloyd Shapley received the Nobel Prize in Economics for this result, and since then, the concept has been a standard theoretical and practical measure in economics, politics and sports as reviewed by Alvin Roth (1988). In finance, the Shapley value has been shown to be highly adequate for allocating costs of insurance companies (Lemaire, 1984), valuing corporate voting rights (Zingales, 1995; Nenova, 2003) and measuring the attribution of risk in banking systems (Tarashev et al., 2015), to cite a few. However, applying the Shapley value in investments and portfolio theory has been quite limited. Only recently, Ortmann (2016) and Colini-Baleschi et al. (2018) used the Shapley theory to price the market risk of individual assets, and Shalit (2017) established the theoretical foundations for the Shapley value to decompose the risk of efficient portfolios. Shapley value does not change the way portfolios are optimized but rather computes the exact and true price to be attributed to individual assets. In this sense, the approach can be validated for use in classical mean-variance (MV) portfolios as presented here and can be used for any other investment model in the future.\n2. Shapley value in optimal portfolios: In this section, I present the methodology for evaluating the Shapley value of securities in a portfolio. Our investment model views portfolio selection as a cooperative game played by the securities to reach the common goal of minimizing portfolio risk for a given expected return or, alternatively, maximizing the portfolio's expected return for a specific risk. 
This is commonly known to be the standard MV portfolio optimization. What is less common is to consider portfolio selection as a cooperative game and then to use the Shapley value procedure to impute the value that each stock contributes to reduce portfolio risk. Shapley value considers all the portfolios a security can participate in, from itself alone to the total number of available stocks. We then calculate the stock's marginal contribution to risk reduction when it is added to a portfolio and, last, we average these contributions. The sum of these averages is the Shapley value for a given stock, which is formulated below.\n3. Systematic risk in actual portfolios: Before calculating the Shapley value for stocks in actual portfolios, I recapitulate the essence of systematic risk in investments. In portfolio theory, the concept used for systematic risk is a measure of asset riskiness relative to market performance since securities are held in portfolios. This is the risk most relevant for evaluating and pricing risky assets. The typical measure is the beta risk obtained by the relative covariation of asset returns and market portfolio returns, namely beta_i = cov(r_i, r_M)/var(r_M).\n4. Shapley value in actual portfolios: The basic idea behind the Shapley value in portfolio analysis is to decompose the risk of optimal portfolios such that it is aptly attributed to individual stocks. The outcome of this decomposition reveals the relevant risk investors should use when managing portfolios. The conception that only risk matters regardless of mean return is not pertinent here because one analyzes MV efficient portfolios, i.e. portfolios that minimize the variance for a given mean return. Using a data set of 13 selected stocks and ETFs, we calculate the MV portfolio frontier using the procedure outlined in Section 2.2. The asset weights for six data points on the efficient frontier are presented in Table 7.\n5. 
Some concluding remarks: I have explained the logic and the methodology of Shapley value theory in portfolio analysis as an alternative to value risk and return in optimal portfolios by presenting its advantages relative to standard beta analysis. My conclusion is that the Shapley value theory contributes more to financial optimization than to standard systematic risk analysis because it enables looking at the contribution of each security to all possible coalitions of portfolios.\n",
"role": "user"
},
{
"content": "Purpose: This study aims to propose the Shapley value, which originates from game theory, to quantify the relative risk of a security in an optimal portfolio.\nDesign/methodology/approach: Systematic risk as expressed by the relative covariance of stock returns to market returns is an essential measure in pricing risky securities. Although very much in use, the concept has become marginalized in recent years because of the difficulties that arise when estimating beta. The idea is that portfolios can be viewed as cooperative games played by assets aiming at minimizing risk. With the Shapley value, investors can calculate the exact contribution of each risky asset to the joint payoff. For a portfolio of three stocks, this study exemplifies the Shapley value when risk is minimized regardless of portfolio return.\nFindings: This study computes the Shapley value of stocks and indices for optimal mean-variance portfolios by using daily returns for the years 2016-2019. This results in the risk attributes allocated to securities in optimal portfolios. The Shapley values are analyzed and compared to the standard beta estimates to determine the ranking of assets with respect to pertinent risk and return.\nResearch limitations/implications: An alternative approach to value risk and return in optimal portfolios is presented in this study. The logic and the mechanics of Shapley value theory in portfolio analysis have been explained, and its advantages relative to standard beta analysis are presented. Hence, financial analysts, when adding or removing specific assets from present positions, will have the true and exact impact of their actions by using the Shapley value instead of the beta.\nPractical implications: When computing the Shapley value, portfolio risk is decomposed exactly among its assets because it considers all possible coalitions of portfolios. 
In that sense, financial analysts, when adding or removing specific securities from present holdings, will be able to predict the true and exact impact of their transactions by using the Shapley value instead of the beta. The main implication for investors is that risk is ultimately priced relative to their holdings. This prevents the subjective mispricing of securities, as standard beta is not used, and might allow investors to gain from arbitrage conditions.\nOriginality/value: The logic and the methodology of Shapley value theory in portfolio analysis have been explained as an alternative to value risk and return in optimal portfolios by presenting its advantages relative to standard beta analysis. The conclusion is that the Shapley value theory contributes much more to financial optimization than to standard systematic risk analysis because it enables looking at the contribution of each security to all possible coalitions of portfolios.\n",
"role": "assistant"
}
] |
The impact of dependence and relationship commitment on logistics outsourcing: Empirical evidence from Greater China
|
[
"China",
"Dependence",
"Service quality",
"Logistics outsourcing",
"Relationship commitment"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: With the development of global competition and the increasing complexity of market demands, third party logistics (3PL) plays an important role in improving customer service and market share (Fugate et al., 2010; Zacharia et al., 2011). Given the prevailing trend of logistics outsourcing practices, 3PL users are becoming increasingly dependent on 3PL providers' resources, including physical assets, information, skilled employees, effective processes, social capital and other tangible and intangible resources, to save non-core investments and improve logistics service quality when serving supply chain customers (Mentzer et al., 1999). For instance, value-added logistics services involving significant specific investments by 3PL users may present switching costs, and subsequently increase dependence on current 3PL providers (Lai et al., 2013). Moreover, because logistics outsourcing is process oriented, the specific processes and performance measures involved are relatively complicated, which may result in 3PL users losing control over service processes and depending on 3PL providers (Maltz and Ellram, 1997). Such dependence issues that are extensively presented in 3PL relationships (McCarter and Northcraft, 2007) may easily trigger 3PL providers' opportunistic behaviors (Pfeffer and Salancik, 1978) and thereby lead to disappointing outsourcing performance. 
How to manage such dependence issues effectively must be considered in order to mitigate uncertainty and maximize logistics outsourcing performance (Pfeffer and Salancik, 1978; Lai et al., 2013).\nTheoretical background and hypothesis development: 3PL relationship management\nResearch methodology: Data collection\nResults: Unidimensionality, reliability and validity\nDiscussion and implications: Managing dependence through relationship commitment in the 3PL field\nConclusions, limitations and further research: The findings of this study provide practical insights into how 3PL users may cultivate different types of relationship commitment to manage different types of dependence on their providers. Both dependence and relationship commitment enable three types of logistics outsourcing, which is connected to service quality. The findings contribute to our understanding of the dependence-relationship commitment framework in the 3PL context and enrich the relationship management and 3PL literature. They also provide managerial implications for outsourcing managers to outsource different logistics services and enhance service quality.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to develop a theoretical framework involving dependence, relationship commitment, logistics outsourcing and service quality to exhibit the roles of the relational factors involved in logistics outsourcing and their outcomes.\nDesign/methodology/approach: Based on data collected from 361 companies in Greater China, the authors use the structural equation model approach to examine the hypothesized relationships.\nFindings: Both normative and instrumental relationship commitment are necessary for third party logistics (3PL) users to cope with their goal dependence on 3PL providers. However, only normative relationship commitment is necessary when users perceive switch dependence. Normative relationship commitment also plays a more important role than instrumental relationship commitment in facilitating the adoption of 3PL logistic outsourcing. In addition, both basic and advanced outsourcing practices have a positive effect on service quality.\nOriginality/value: This study contributes to both 3PL theories and practices by clarifying how relationships between 3PL users and providers in China are managed.\n",
"role": "assistant"
}
] |
Education in entrepreneurship and the Theory of Planned Behaviour
|
[
"Education",
"Entrepreneurial intention",
"Theory of Planned Behaviour"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Entrepreneurship is of fundamental importance to job generation (Acs et al., 1999) and to the prosperity of a country (Kourilsky and Esfandiari, 1997). This explains the interest of governments in influencing the level of entrepreneurial activity and the importance of fostering and promoting the entrepreneurial mindset, especially through education (Rae and Carswell, 2001). Consequently, a proliferation of curricula and courses for entrepreneurial education has taken place since the early 80s and, over those past decades, academic institutions have put a lot of efforts in developing suitable entrepreneurial and small business education programmes. Despite its relatively short history compared to other management disciplines, entrepreneurship education has become a widely recognised and available teaching area at universities and business schools. At the same time, however, the questions of how effective such education programmes are and of how they should be designed to maximise their effectiveness, remain largely unanswered.\nLiterature review and hypotheses: A discussion of the effectiveness of entrepreneurial education is in a first line linked to a still ongoing important debate on whether entrepreneurship implies individuals with a definite personality or with developable skills (Bechard and Toulouse, 1998). Generally, a growing amount of evidence is favouring the alterability of entrepreneurial skills, while the educational approaches that are implemented to achieve these changes differ widely; for instance, when it comes to the training objectives (Bechard and Toulouse, 1998) or the content level (Johannisson, 1991). 
Empirical investigation of the key propositions is here urgently needed (Hindle and Cutting, 2002), but it is only recently that this matter has received serious attention from researchers.\nMethodology: The hypotheses of the current study were tested on two different samples: one from Belgium and one from Norway. The questionnaires were in French and Norwegian, respectively, and were subsequently translated into English, allowing for parts of them to be quoted hereafter.\nResults: The descriptive statistics and the correlations among the different variables examined are presented in Table II (for the Belgian sample) and Table III (for the Norwegian one).\nDiscussion and conclusion: The results presented here show a strong link between attitude, perceived behavioural control and entrepreneurial intention (in both samples) and between subjective norm and entrepreneurial intentions in the Norwegian sample. However, none of the hypotheses are supported.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of the present empirical study is to investigate the relationship between education in entrepreneurship and entrepreneurial behaviour and compare the effectiveness of frequently used entrepreneurship education teaching approaches.\nDesign/methodology/approach: Hypotheses are derived from the \"Theory of Planned Behaviour\" (\"TPB\"), which suggests that education in entrepreneurship is positively related to attitude (\"A\"), subjective norm (\"SN\") and perceived behavioural control (\"PBC\"); that the effect on A, SN and PBC is stronger for extensive courses in entrepreneurship than for more superficial forms of educational efforts; and that education only influences entrepreneurial intentions indirectly through the effect on A, SN and PBC. The hypotheses are tested using data from two different surveys, one from Belgium and one from Norway.\nFindings: None of the hypotheses are supported. However, the findings from the study indicate a strong direct relationship between participation in extensive education programmes in entrepreneurship and entrepreneurial intention.\nResearch limitations/implications: Additional empirical evidence would be required to confirm these results and help to improve our understanding of the validity bounds of the TPB in the field of entrepreneurship.\nPractical implications: The findings indicate that educational programs or events of limited duration have a rather limited usefulness and that efforts should rather be directed on programs of longer duration. This would, for instance, speak for a re-orientation of some policies like those financing short seminars on venture creation for unemployed people.\nOriginality/value: Using data from two different surveys, one from Belgium and one from Norway, this study raises questions about the sufficiency of the TPB.\n",
"role": "assistant"
}
] |
Information-seeking behaviour of physicists and astronomers
|
[
"Physics",
"Astronomy",
"Information searches",
"User studies",
"Behaviour"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: How do scientists really discover, select and use the countless information and communications resources available to them? Studying the information behaviour of scientists has been one of the main concerns of librarians and information scientists at least since The Royal Society Scientific Information Conference (Royal Society, 1948). As information technologies, which nowadays are major means of information service provision, develop, information services are improved and as a result information seeking activities of scientists go though changes and adjustments. This is a cycle where research on information behaviour of scholars leads to better information services and improved information services might make the scholars alter their information seeking activities and behaviour, hence the need for continuous study of the information-seeking behaviour of scholars.Physicists are renowned for their information prowess. They have played a significant role in scholarly communication and publishing, especially in areas such as e-print culture and electronic publishing. They are renowned for having one of the, apparently, most efficient information systems (Nicholas et al., 2005b) and the best organised literature in sciences (Gould and Pearce, 1991, cited in Lawal, 2002). They are known as innovators in methods of scholarly communication (Wertman, 1999). Physics and astronomy are expensive sciences. Nowadays, conducting research in certain areas of physics and astronomy is not feasible for countries unless they are done as multinational collaborative projects. The financial factors and the collaborative nature of the research in many areas of physics and astronomy necessitate the importance of an efficient information system. 
The supply and the maintenance of such a system require up-to-date knowledge of scholars' information-seeking behaviour which is only achievable by researching this area. Therefore, not surprisingly physicists and astronomers have been the subject of much information science interest. This paper perpetuates that interest but is somewhat unusual in one respect: it takes a closer look within physics and astronomy than has been the case hitherto.\nAims and objectives: The study aims to look at differences and similarities between different research areas within physics and astronomy with regard to two aspects of information-seeking behaviour, including methods used for keeping up-to-date and methods used for identifying articles.\nAbout UCL department: The UCL department is a research oriented department that at the time of data collection of this study (2005-2006) had about 150 academic and research staff and more than 100 research students.The department consisted of four research areas and contributed to six research centres that each had their own researchers. The four research areas were:1. Astronomy, Astrophysics and Atmospheric Physics.2. Atomic, Molecular, Optical and Positron Physics.3. Condensed Matter and Materials Physics.4. High Energy Physics.Although the main structure of the department was based on the four aforementioned research groups, some of these groups were composed of smaller research groups that were quite characteristic and could be studied separately rather than as part of the bigger research group. For instance, although Atmospheric Physics is part of the broader research group \"Astronomy, Astrophysics and Atmospheric Physics\" (the first research group), it is a quite distinctive research group with its own laboratory and research areas that are not very related to astronomy. 
Or in the case of the second research group \"Atomic, Molecular, Optical and Positron Physics\", two subgroups of \"Optical Science Laboratory\" and \"Theoretical Molecular Physics\" could be separated as two distinctive groups. Therefore, the researchers have decided to consider the following seven research groups as the research areas in the department and units of analysis in this study wherever appropriate:1. Atmospheric Physics (AP).2. High Energy Physics (HEP).3. Condensed Matter and Materials Physics (CMMP).4. Astronomy and Astrophysics (AA).5. Theoretical Molecular Physics (TMP).6. Atomic, Molecular, Optical and Positron Physics (AMOP).7. Optical Science Laboratory (OSL).\nLiterature review: Not many studies have been done on the information-seeking behaviour of physicists and astronomers and the methods they utilise for finding information. A rather old study by Ellis et al. (1993), before the popularity of the web-based information services, investigated the information-seeking patterns of a group of social scientists, physicists and chemists using the grounded theory approach. They did not find fundamental differences among these groups, surprising perhaps. Therefore, they concluded that overall differences between the information seeking activities of the chemists, physicists, and social scientists seemed more a difference of emphasis than of a fundamental difference in behaviour. They identified five main features for the information-seeking behaviour of these groups with slightly different terminologies. For the physicists these five core features were: initial familiarization, chasing, source prioritization, maintaining awareness, and locating. 
These five features are known as Ellis' model of information-seeking behaviour.Regarding applied methods for finding less recent information, a survey of astronomers, chemists, mathematicians, and physicists at the University of Oklahoma by Brown (1999a) found that physicists and astronomers used citations at the end of articles (94 per cent), retrospective searching of indexing/abstracting tools (56 per cent), personal communication (50 per cent) and browsing older volumes (19 per cent). Eighty-one per cent of respondents in the field of physics/astronomy said that they photocopied the library's copy for obtaining journal articles, 75 per cent read the library's copy, 44 per cent used free electronic copy, the same percentage used interlibrary loans, 38 per cent had a personal subscription, and 19 per cent used the library's electronic subscriptions.Nicholas et al. (2005a) surveyed the users of the Institute of Physics (IoP) journals to report on the views and attitudes of physicists around the world in relation to what they get and want from the journals system. Their aim was, in particular, to assess how they use electronic products and services. The findings showed that the most frequent method used to locate journal articles was visiting a journal's web site. Respondents said they were most dependent on visiting a journal's web site for finding articles followed by the library. Younger respondents were more likely to rely on the Web of Science, while older respondents were likely to depend on their personal collection. In general, younger users were more likely to depend on and use online methods. The most important web site proved to be the arXiv e-print server and this was followed by Elsevier ScienceDirect.Another study by Nicholas et al. 
(2006) that utilised both log analysis technique and online survey to study the information-seeking behaviour of ScienceDirect's authors (as users) revealed some information about physicists in comparison to scientists in other fields. Physicists, compared to respondents from other subjects, were more likely to be browsers than searchers - they obtained much information by requesting journal homepages and journal issues. Physicists were also more active, making a higher number of requests in a session.Brown (1999a) surveyed astronomers, chemists, mathematicians and physicists at the University of Oklahoma and found that physicists and astronomers mostly used current journals and had reliance on pre-print archives. Mathematicians and physicists/astronomers used personal communication and conference attendance but chemists used current awareness services. This study also shed some light on the different sources of reading used for different purposes. It turned out that for teaching purposes, physicists and astronomers relied 93 per cent on textbooks, 40 per cent on journals, 6 per cent on preprints, 6 per cent on conference attendance and 13 per cent on personal communications, while these figures for research purposes were, respectively, 33, 87, 67, 60, and 33 per cent.A year after Brown's study, Cho (2000) investigated the sources of reading by physicists and concluded that astrophysicists have virtually replaced journal reading with regular inspection of the Astrophysics archive at arXiv.org. More recent studies by Tenopir and King (2002) found that the number of readings per scientist across all work fields increased from an average of 100 articles per year in mid-1990s to 130 per year in the early 2000s. A survey in 2000 showed that physicists read an average of 204 articles per year and spend 153 hours per year reading on average.In general, physicists and astronomers are heavy users of e-print archives (Kling and McKim, 2000; Fry, 2003). 
According to Lawal (2002) this high usage of e-print archives is because theoretician physicists depend on the work of their predecessors. The information most important to them is often too recent to have been published; hence they use e-print archives.As the literature review shows there have not been any studies on similarities and differences among different subfields of physics and astronomers in terms of their information seeking behaviour. All of the studies on physics consider the whole field as a single discipline without investigating areas within physics or astronomy. This study tries to fill this gap.\nMethodology: The data presented here was obtained from a mixed-method research project conducted as a PhD thesis (Jamali, 2008). The research included interviews, critical-incident data, as well as a questionnaire survey of staff and PhD students in the Department of Physics and Astronomy at University College London (UCL). This article presents part of the findings of the survey study.A self-administered web-based questionnaire was designed for conducting the survey. The questionnaires went online on 3 May, 2006. To conduct the survey, a personalised e-mail was sent to all staff and PhD students in the department with a link to the questionnaire. This was followed by two sets of reminder e-mails with about ten day intervals.There was no need for sampling in the survey as it was possible to include all PhD students and staff in the Department of Physics and Astronomy at UCL in the survey. However, the respondents were self-selective due to what is known as the phenomenon of non-response. This phenomenon refers to the difference between the initial sample (all individuals about whom we want to collect information) and the final sample (the cases we manage to get information on). 
This phenomenon is composed of different aspects including refusal to participate (because of lack of time or other personal reasons) in a survey or to be interviewed (Gobo, 2004, p. 441). As the participation in the survey was voluntary, the refusal by some to take part in the questionnaire was the main reason for the non-response phenomenon in this study. There were 129 staff and 113 PhD students (total 242 people) in the department and the survey achieved 47.1 per cent response rate with 114 respondents, which is a good rate by any standard as Hemminger et al. (2007) showed that participation rates range from 3 to 62 per cent for electronic surveys.\nFindings: Characteristics of the sample\nConclusions: The findings presented here show that, although similarities exist among subfields of physics and astronomy with regard to information-seeking behaviour, there are significant differences as well. The subfields of physics and astronomy are different in terms of their reliance on different methods used for keeping up-to-date as well as methods used for finding articles. Most of the articles read by people in Astronomy and Astrophysics (AA) are identified through recommendations by colleagues, searches in databases and e-mail alerts. AA relied on browsing e-print archives, e-print e-mail alerts and word of mouth in order to keep up-to-date. People in AMOP relied a great deal on using journal web sites as well as searching Google for finding articles they read. Browsing e-journals and browsing e-print archives were the two most used methods for keeping up-to-date with the developments in their field. AP physicists relied on colleagues and reference tracking for finding articles they read. However, word of mouth, browsing electronic journals and e-print archives were the main sources of information in order of importance for keeping up-to-date. 
Physicists in CMMP mainly used recommendations by their colleagues, conducting searches in general databases and tracking references for identifying articles they read. In addition they tended to browse electronic journals, rely on word of mouth, search databases and receive journals' table of contents e-mail alerts in order to keep up-to-date. Physicists in HEP relied mostly on searches in subject databases (arXiv.org) for identifying articles they read. The second most used method was searching in Google. The fact that Google was the second used means by which articles were found in the field of HEP might be because of high availability of open access material in HEP that makes everything searchable by general search engines such as Google. In order to keep up-to-date with the developments in HEP they mainly depended on browsing e-print archives, word of mouth and meetings. Physicists of OSL relied on tracking references and colleagues for identifying articles they read. Attending conferences and receiving table of contents e-mail alerts were the two main methods on which they relied for keeping up-to-date. Finally, the researchers in TMP relied mainly on tracking references for finding articles they read. Browsing electronic journals and searching databases were the two main methods used for keeping up-to-date by the researchers in this group.The study lends support to the findings of Brown (1999b) that showed high usage of citation tracking. It also confirms some of the findings by Nicholas et al. (2006) that showed physicists, compared to scientists in some other subjects, were more likely to be browsers. However, as we can see, different subfields are different and talking of physicists here might be over-generalising the data. The study highlights the need for and the value of looking at narrower subject communities within disciplines for a deeper understanding of the information behaviour of scientists.\n",
"role": "user"
},
{
"content": "Purpose: - The study aims to examines two aspects of information seeking behaviour of physicists and astronomers including methods applied for keeping up-to-date and methods used for finding articles. The relationship between academic status and research field of users with their information seeking behaviour was investigated.\nDesign/methodology/approach: - Data were gathered using a questionnaire survey of PhD students and staff of the Department of Physics and Astronomy at University College London; 114 people (47.1 per cent response rate) participated in the survey.\nFindings: - The study reveals differences among subfields of physics and astronomy in terms of information-seeking behaviour, highlights the need for and the value of looking at narrower subject communities within disciplines for a deeper understanding of the information behaviour of scientists.\nOriginality/value: - The study is the first to deeply investigate intradisciplinary dissimilarities of information-seeking behaviour of scientists in a discipline. It is also an up-to-date account of information seeking behaviour of physicists and astronomers.\n",
"role": "assistant"
}
] |
Low carbon supplier development: A fuzzy c-means and fuzzy formal concept analysis based analytical model
|
[
"Multi-criteria decision making",
"Supplier development",
"Fuzzy c-means clustering",
"Fuzzy formal concept analysis",
"Low carbon supplier development"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: An increasing number of studies on emission-related issues in the supply chain management have been conducted over the past decades. This is because of the growing pressure from multiple stakeholders such as government bodies, nongovernmental organizations and foreign customers (Sarkis et al., 2010). Thus, organizations have been compelled to initiate several strategies for abatement of greenhouse gasses (GHGs) emission. One effective way for a firm to achieve emission reduction is to integrate their upstream and downstream supply chain partners into their emission reduction strategies (Theiben et al., 2014). However, a firm must have a multiple investment and collaboration strategies for managing cross-border collaborations among their supply chain partners (Bai et al., 2016). Thus, a firm can help their supply chain partners who often face a lack of resources and capabilities to improve their emission-related performances. Supply chain partners require a focal firm's support and investments for overcoming environmental issues in their business processes (Bai and Sarkis, 2010; Fu et al., 2012). Similarly, a firm may also have limited resources and capabilities. Complexities related to a firm's investments and collaboration decisions therefore warrant a scientific investigation.\n2. Review of the related literature: In this section, we provide a brief review of the literature on low carbon supplier development. We classify our review in two sub-categories such as GSD and low carbon supplier selection. In the next two sub-sections, we provide a detailed description of each of these two sub-categories.\n3. Evaluation of low carbon supplier development criteria: Krause et al. (1998) stated that integration of supplier performance evaluation into the supplier development program is imperative. Similarly, Glock et al. 
(2017) also stated that an organization needs to evaluate suppliers' performance prior to implementing a supplier development program. Therefore, we evaluate suppliers' carbon performance prior to investing in and collaborating with them in our analytical model. The suppliers' carbon performance evaluation criteria were developed after reviewing Hsu et al. (2013), Theißen and Spinler (2014), Blome et al. (2014), Kuo et al. (2015) and Rao et al. (2017). Hsu et al. (2013) and Kuo et al. (2015) stated that by integrating a carbon reduction strategy at the board level, setting targets and goals for emission reduction, identifying the scope for emission reductions, disclosing transparent emission data and incorporating life cycle cost, suppliers can have a better carbon management strategy at their organizational level. They also stated that effective carbon management needs implementation of employee training, knowledge-based information systems, low carbon technologies and systematic CF measurement techniques. Theißen and Spinler (2014) emphasized supplier commitment to participate in external environmental initiatives such as product CF projects and carbon disclosure projects. They argued that suppliers that are more prone to participating in external events are more aware of emission reduction. Similarly, supplier risk of failure to achieve the targeted emission level and market power for emission reduction compared to their competitors are also important criteria for supplier carbon performance evaluation (Hsu et al., 2013; Theißen and Spinler, 2014).\n4. Background of the FCM clustering algorithm and FFCA: In this section, we first define the relevant notations and definitions related to the FCM clustering algorithm and FFCA. Thereafter, we illustrate our proposed analytical model with the help of a case study from the Indian energy sector.\n5. 
Model development: In this section, we provide a stepwise description of the proposed framework, which primarily consists of two broad stages: clustering of existing suppliers based on their carbon performance criteria by applying the FCM clustering algorithm; and extraction of formal concepts from formal contexts by using the FFCA. The detailed methodology is shown in Figure 1. In subsequent sections, we discuss each stage and step in detail through a case illustration.\n6. Results and discussions: This study is important because growing climate change, the increasing concentration of GHGs and global warming have started to threaten our natural environment. As a result, policy makers have adopted stringent regimes to combat these challenges. Organizations have redefined their management strategies in line with these regimes. Therefore, it is also imperative for organizations to evaluate their suppliers' emission performance to improve their own emission performance. Supplier evaluation of emission performance alone, however, is insufficient. Sometimes, suppliers do not have many resources and capabilities. Therefore, organizations need to extend their support to their suppliers to help them improve their emission performance by adopting emission reduction strategies. However, organizations may also have limited resources and capabilities. Further, not all suppliers will require support to improve their performances. Prior studies failed to address the relative level of investment required for supplier development. Thus, this study is important from the perspective of a buying firm managing its suppliers' carbon performance with limited capabilities and resources.\n7. Summary and conclusions: 7.1 Summary\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to provide an analytical model for low carbon supplier development. This study is focused on the level of investment and collaboration decisions pertaining to emission reduction.\nDesign/methodology/approach: The authors' model includes a fuzzy c-means (FCM) clustering algorithm and a fuzzy formal concept analysis. First, a set of suppliers were classified according to their carbon performances through the FCM clustering algorithm. Then, the fuzzy formal concepts were derived from a set of fuzzy formal contexts through an intersection-based method. These fuzzy formal concepts provide the relative level of investments and collaboration decisions for each identified supplier cluster. A case from the Indian renewable energy sector was used for illustration of the proposed analytical model.\nFindings: The proposed model and case illustration may help manufacturing firms to collaborate with their suppliers for improving their carbon performances.\nResearch limitations/implications: The study contributes to the low carbon supply chain management literature by identifying the decision criteria of investments toward low carbon supplier development. It also provides an analytical model of collaboration for low carbon supplier development. Though the purpose of the study is to illustrate the proposed analytical model, it would have been better if the model was empirically validated.\nOriginality/value: Though the earlier studies on green supplier development program evaluation have considered a set of criteria to decide whether or not to invest on suppliers, these are silent on the relative level of investment required for a given set of suppliers. This study aims to fulfill this gap by providing an analytical model that will help a manufacturing firm to invest and collaborate with its suppliers for improving their carbon performance.\n",
"role": "assistant"
}
] |
Assessing the service quality of department store using RSQS an Empirical study of Albaha Region, Saudi Arabia
|
[
"Service quality",
"RSQS",
"Arab customers",
"Department stores quality",
"Saudi customers"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Saudi Arabia is considered to be one of the richest gulf countries with $197,342m of gross domestic product (GDP) in the second quarter of 2019. This ranks Saudi Arabia at number 17 in the ranking of GDP of the 50 countries. All this information helps to understand that Saudi Arabian consumers have high purchasing power. Consumers with high purchasing power have higher standards of expectations (Wood and Neal, 2009; Lemke et al., 2011; Teoh and Gaur, 2019). Measuring service quality (SQ) was never an easy task. But in the context of Saudi Arabia where the consumers have high spending power, this task of measuring SQ becomes more difficult. As it is difficult to measure and define the SQ, it has become the most deliberated and discussed topic for researchers in the recent past as the advent of internet enhanced the consumers' expectations. Nowadays, consumers are more informed about SQ(Durante and Griskevicius, 2016; Hochstein et al., 2019). Most of the research studies have defined quality as when the expected services meet the delivered services, it is known as quality. In other words, when the gap in delivered services and the expected services does not exist, it is called SQ. The extent to which a service meets customers' needs or expectations, and it involves a comparison of customer expectations with customer perceptions of actual service performance (Parasuraman et al., 1985, 1988; Lewis and Mitchell, 1990). Those customers who are satisfied are expected to continue their loyalty (Wong and Sohal, 2003). The vehicle to ensure consumer satisfaction is SQ (Sivadas and Baker-Prewitt, 2000). It also helps positioning in a competitive environment (Mehta et al., 2000) and retention and patronage (Yavas et al., 1997). With the increasing number of retail department stores, there is a need to study and explore the factors responsible for SQ. 
Hence, the purpose of this study is to further explore the factors and their impact on SQ in the context of Saudi Arabia.\nLiterature review: Firms gain their competitive advantage by exploiting technology with the aim of enhancing SQ and meeting market demand (Parasuraman et al., 1985, 1988). Over a long period of time, various researchers have developed a service perspective (Kaul, 2007; Nguyen Dang DuyNhat, 2007; Shucui, 2009; Zia and Azam, 2013). The construct of SQ ought generally to be approached from the purchaser's perspective, since it involves different values, different grounds of assessment and different circumstances (Lemke et al., 2011; Durante and Griskevicius, 2016). It was also found that SQ is an extrinsically perceived attribution made by purchasers concerning the service they perceived through the service encounter (Parasuraman et al., 1985, 1988). SQ is concerned not only with the final product and services but also with the production and delivery process, so employee involvement in process design and commitment is very important to provide the final product or services (Sohail and Shaikh, 2008). The character of SQ requires a particular approach to identify and implement it (Siu, 2009; Wirtz and Lovelock, 2016).\nResearch question: The main issues addressed in this research relate to retail SQ, measured using the RSQS, in the department stores of the Albaha region of Saudi Arabia. The researcher has tried to measure the overall retail SQ in stores by evaluating all five dimensions from the consumer's perspective.\nHypothesis: The null hypothesis states that there is no linear relationship between department store SQ (the dependent variable) and the five independent factors.\nResearch methodology: This study was conducted to explore the factors responsible for contributing to the SQ of department stores in Albaha, KSA. 
Active retail shoppers are the population for this study; it includes both Saudi shoppers as well as expatriates (citizens of other countries living in Saudi Arabia). While collecting data, both Saudi shoppers and expatriate shoppers were considered. Both have been considered to form uniform SQ perceptions. Data from 600 retail shoppers were collected through a personally administered questionnaire. After careful analysis, it was found that only 508 questionnaires were fit to be included in the study. Customers who were coming for shopping were asked to fill in the questionnaire after they had finished shopping or while they were waiting for billing and standing in queues. Altogether ten stores were selected from the Albaha city region to collect data. From each store, 60 questionnaires were filled in by the respondents. As the customers' walk-in is high on weekends, the data was collected on the weekends of January 2019. These stores were from different locations in the region, namely, Al-Aquiq city, Albaha city and Buljurshi city.\nAnalysis and results: This study checks the validity of the scale using factor analysis, whereas the reliability was measured using Cronbach's alpha. Table I shows the basic statistics associated with the 28 questions. Table II shows the overall value of Cronbach's alpha for all five independent dimensions together, followed by the individual Cronbach's alpha value calculation for each dimension. 
Tables II and III show the details of the Cronbach's alpha value calculation.\nDiscussions: In this study, the null hypothesis, which states that there is no linear relationship between the dependent variable (store SQ) and the predictor variables (physical aspect, reliability, personnel interaction, problem-solving and policy), was rejected; out of the five variables, four were found to have a significant impact on the dependent variable with p-values less than 0.05, and one variable (policy) has an insignificant impact with a p-value more than 0.05 (Table X).\nConclusions: Based on this study, it can be concluded that if the departmental stores in the Albaha region want to increase their SQ, they have to pay more attention to reliability, problem-solving and the physical aspect, which have a significant impact on the SQ. These three dimensions positively influence the SQ, which means that any improvement in these three aspects enhances the SQ of the departmental stores. The personal interaction dimension has a negative but significant impact on the SQ. This means that if retailers try to have more personal interactions with the customers, then the customers might feel uncomfortable and the SQ may diminish. Saudi customers do not appreciate personal interaction with the retailers. Personal interaction scored a negative value, which means that shoppers in this region do not expect personal interaction with the seller. This is very well understood because of the sociocultural restrictions in Saudi Arabia.\nLimitations: This study was conducted in Saudi Arabia; therefore, it observes some limitations. One of the biggest limitations of the study is the end-user and researcher interactions. As the society under study is a conservative society, it was very difficult to gather unbiased opinions. Second, gender mixing is not permitted, because of which male and female volunteers had to be arranged for the data collection. 
Furthermore, as the society is developing, the product preferences and liking are not developed yet. This creates a problem in customers' understanding of the services and the level of services. Until and unless a customer is exposed to services, they are not aware of the level of quality they expect from the companies.\nFuture research directions: In this study, it was observed that \"personal interaction\" scored a negative correlation. This negative b value shows its inverse relationship with retail SQ, which is an area of research to explore. These findings need a further examination of reasons, beyond the scope of this research, which can justify the inverse relationship of SQ and \"personal interaction\".\n",
"role": "user"
},
{
"content": "Purpose: This study aims to explore the factors responsible for influencing the service quality (SQ) of department stores in Albaha region, Saudi Arabia. This study also examines the level of influence on SQ.\nDesign/methodology/approach: Primary data was collected from 600 customers out of which 508 questionnaires were included for analysis using retail service quality scale (RSQS). Quota sampling technique was adapted for data collection from different cities of Albaha province. SPSS software was used to analyse the data using multiple linear regression analysis.\nFindings: Three dimensions have a positive significant impact on SQ (physical aspect, problem-solving and reliability), whereas one dimension (personal interactions) has a negative significant influence and one dimension (policy) is found to be a highly insignificant factor.\nResearch limitations/implications: Further investigation is required for the negative but significant behaviour of personal interaction and insignificant impact of policy as factors.\nPractical implications: This study outlines the dimensions having positive and significant influence of SQ on the department stores.\nSocial implications: This study suggests socially and culturally acceptable methods to improve SQ in Albaha.\nOriginality/value: The research suggests that personal interaction has a negative influence on SQ; this deviation of results from the literature may be due to the cultural variances in KSA which needs to be further investigated, as customers in Albaha are not willing to converse and interact with the seller.\n",
"role": "assistant"
}
] |
Library eArchiving with ZONTAL Space and the Allotrope Data Format
|
[
"Digitization",
"Data preservation",
"Digital archiving",
"OAIS",
"Allotrope",
"ZONTAL space"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Today, most industries are feeling the pressure to digitize their data and their processes (Kagermann, 2015). Despite the broad impact of advances in technology, it is still quite common that similar solutions are developed either in parallel or subsequently in orthogonal branches of industry. Many synergistic effects are thus left untouched by foregoing more inter-industrial exchanges of how to implement best practices. Here we will show how advances in the pharmaceutical approach to deal with data can positively benefit university libraries.\n2. The Allotrope Data Format: In 2012, the Allotrope Foundation (www.allotrope.org/) formed as an international consortium of pharmaceutical, biopharmaceutical, and other scientific research-intensive industries. The aims of the foundation are developing advanced data architectures to transform the acquisition, exchange and management of laboratory data throughout its complete lifecycle. The first tangible product developed by the consortium was the Allotrope Data Format (ADF) (Oberkampf et al., 2018), released in 2017. From the long-term preservation perspective, ADF may be summarized as a PDF for scientific data. ADF is a vendor neutral format that implements the information package as described in the OAIS standard (Giaretta, 2012). ADF is built on HDF5 (Folk et al., 2011) core libraries that have been used for effectively storing large satellite data since the 1970's.\n3. Comparison of Open Archival Information System, BagIt, Allotrope Data Format and user requirements: Many libraries are currently using data management tools that predate the release of the latest OAIS standard. Many of these solutions implement the BagIt standard (Kunze et al., 2018) that resembles core functionalities of the information package as prescribed by the OAIS. 
The question that forces itself upon each operator of such a legacy system is whether such a solution can be considered OAIS compliant. Here we will provide an overview of core functionalities and features suggested by the OAIS standard and FAIR data principles and evaluate the ability of BagIt and ADF standards to deliver these features. The results are shown in Table I.\n4. ZONTAL Space: a library use case: A full OAIS requires effective communication between data producers, consumers, and management. To submit, archive, and disseminate information packages, an information management system should provide highly customizable, ideally automatic, workflows. ZS is an implementation of an OAIS that attempts to achieve all of the above. ZS was designed to be completely format agnostic and interoperable with existing IT landscapes. It can connect through restful APIs or through a user interface with the outside world to ingest data and to trigger automatic metadata enrichment. Throughout the lifecycle of a file, ZS keeps track of the status and can prompt data stewards for managerial intervention. Internally, ZS converts ingested data to ADF files and stores them initially as submission information packages. After promotion to archival information packages, the rights and permissions of the files are updated in compliance with OAIS. ZS provides basic and detailed customizable report features that enable effective management of the information system. Dissemination is achieved through manual downloads via the user interface, via restful APIs, or through connection to other downstream systems, like business analysis tools.\n5. Conclusion and outlook: Allotrope is rapidly growing in adoption in the pharmaceutical area and was here shown to bring new functionality to digital archives. ZS is currently the only information management system that converts all ingested data into the ADF and leverages the abilities of this standard. 
Usage of both enables more efficient workflows, long-term data preservation, and opens opportunities for additional data reuse benefits.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this technical paper is to evaluate the emerging standard \"Allotrope Data Format (ADF)\" in the context of digital preservation at a major US academic library hosted at Brigham Young University. In combination with the new information management system ZONTAL Space (ZS), archiving with the ADF is compared with currently used systems CONTENTdm and ROSETTA.\nDesign/methodology/approach: The approach is a workflow-based comparison in terms of usability, functionality and reliability of the systems. Current workflows are replaced by optimized target processes, which limit the number of involved parties and process steps. The connectors or manual solutions between the current workflow steps are replaced with automatic functions inside of ZS. Reporting functionalities inside of ZS are used to track system and file lifecycle to ensure stability and data preservation.\nFindings: The authors find that the target processes leveraging ZS drastically reduce complexity compared to current workflows. Archiving with the ADF is found to decrease integration complexity and provide a more robust data migration path for the future. The possibility to enrich data automatically with metadata and to store this information alongside the content in the same information package increases reusability of the data.\nResearch limitations/implications: The practical implications of this work suggest the arrival of a new information management system that can potentially revolutionize the archiving landscape within libraries. 
Beyond the scope of the initial proof of concept, the potential for the system can be seen to replace existing data management tools and provide access to new data analytics applications, like smart recommender systems.\nOriginality/value: The value of this study is a systematic introduction of ZS and the ADF, two emerging solutions from the Pharmaceutical Industry, to the broader audience of digital preservation experts within US libraries. The authors consider the exchange of best practices and solutions between industries to be of high value to the communities.\n",
"role": "assistant"
}
] |
Seasonal and cyclical behaviour of farm gate milk prices
|
[
"Milk",
"Prices",
"Seasonality"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Changes in the common agricultural policy (CAP) play an important part when examining milk price dynamics in the EU dairy industry. In particular the Luxembourg agreement in 2003 marked a major change in EU dairy sector policy. Prior to this the focus was on maintaining high and stable prices through a suite of market intervention tools. The Luxembourg agreement saw the focus for dairy shift to greater market orientation with the introduction of income support via the Single Payment Scheme (SPS) and the reduced use of market intervention tools which in turn led to greater price variability. A similar effect could be observed on the US milk price after the product purchase prices were reduced under the Dairy Price Support Program in the mid 1980s.\nOn seasonality and cycles in agricultural commodities: Economic theory suggests that seasonality and cycles are common features in agricultural commodity prices (Tomek and Robinson, 2003; Piot-Lepetit and M'Barek, 2011). Seasonality can arise because of seasonal patterns in supply and demand. The timing of seasonal effects in commodity prices is thus usually highly predictable whereas the effect may be stochastic. In the case of the dairy industry seasonality in prices may be explained by the biology of the livestock, seasonal production costs and demand changes throughout the year.\nSome aspects of EU and US dairy policies: Within the dairy industry agricultural policies play an important part. The EU dairy industry is subject to the CAP. Prior to the Luxembourg 2003 agreement the CAP focussed on maintaining adequate and stable prices for particular dairy commodities such as butter and SMP. It was expected that the stable prices from these commodities were to be transmitted to the farm gate price for milk. 
This aim was mainly achieved by purchasing to intervention stores, setting production quotas, export refunds, import tariffs and subsidized consumption[2]. These measures resulted in EU dairy commodity prices being significantly higher and less volatile than world prices. Additionally it could be argued that these measures can be seen as counter-cyclical, dampening the magnitude of cycles. For example intervention prices, which serve as floor prices, would probably only be active at the trough of a cycle.\nMethodology: The structural time series approach by Harvey (1989) is used for the decomposition of the price series into trend, seasonal and cyclical components. For the decomposition an additive structural model is used which is described in Harvey (1989) and Durbin and Koopmann (2012):\nData: This paper considers the raw milk price paid to milk producers in Germany and Ireland as well as the average EU price. These prices are collected by the EU Commission from its member states at the end of each month and were published by the milk market observatory[6]. The average EU price is included as it provides a useful reference price for policy makers. The German price is included as Germany is the largest producer of milk in the EU. Finally the Irish price series is included because it represents a seasonal, grass-based feed system. Irish milk production is far more seasonal than production in Germany and practically all other countries of the EU. This can be seen from Figure 1, where monthly milk collection from 2012-2014 is shown. Thus a priori one would expect that the dynamics of the Irish price might differ from that of the German price and most other major producer countries like France, the Netherlands or Italy.\nResults: The model described in the methodology section is now applied to the farm gate raw milk price in Germany, Ireland and the average EU price along with the US price. These decomposition results are also presented in this section. 
The estimated parameters are reported in Table II for all series and the respective time period. In addition a maximum of two intervention variables per series for outliers as described in Commandeur and Koopmann (2007) are added.\nConcluding remarks: In this paper the EU, German and Irish farm gate milk price time series were decomposed into trend, seasonal and cyclical components using the structural time series method. The analysis was done for the periods from 1995 to 2004 prior to the CAP reform and 2004 to 2013 post the CAP reform. Also a comparison with the US market for the latter period was presented. While these data sets may in some cases be considered short for cycle analysis these models have a good fit based on normality, independence and heteroscedasticity tests of the prediction errors. The main results of the decomposition are threefold:\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to analyze how the German, Irish and average EU farm gate milk prices have changed after the common agricultural policy (CAP) reform in 2003. In addition the dynamics of these prices are compared to a US farm gate price.\nDesign/methodology/approach: These milk price time series are divided into two time periods, pre and post the CAP 2003 reform, and decomposed into their trend, seasonal and cyclical components. For the decomposition a state space model is used following the approach of Harvey (1989).\nFindings: The results show that the dynamics of the EU, German and Irish series converged after the CAP 2003 reform were implemented and that a three-year cycle is underlying the European milk prices which is comparable with the cycle length of the US milk price. In addition it is shown that most of the observed price variation in recent times is attributed to the cyclical component.\nResearch limitations/implications: The division of the milk price time series into periods pre and post the CAP 2003 reform is somewhat subjective because not all measures were immediately applied after the reform. It is also possible that other factors may have contributed to the changed dynamics which have been observed. In addition this leads to a short data sample.\nPractical implications: The results show that policy makers should consider counter cyclical policy measures given the importance of the cyclical component. Also most models used to evaluate policies do not account for cycles which may lead to wrong conclusions. In addition farmer should be aware of the cyclical nature of milk prices as they budget and plan for the future.\nOriginality/value: No previous decomposition studies of European milk prices exist.\n",
"role": "assistant"
}
] |
Brand hypocrisy from a consumer perspective: scale development and validation
|
[
"Measurement",
"Scale development",
"Anti-branding",
"Brand relationships",
"Brand hypocrisy"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Although consumers develop mainly positive relationships with brands (Batra et al., 2012; Fournier, 1998; Thomson et al., 2005), there is increasing interest in understanding negative consumer reactions to brands and the nature of negative brand perceptions (Fournier and Alvarez, 2013; Zarantonello et al., 2016). The issue is particularly relevant in a context in which skeptical consumers incessantly scrutinize all manner of brand actions (Holt, 2002; Krishnamurthy and Kucuk, 2009). Modern-day consumers are quick to criticize brands for misbehaving, lacking transparency or acting in a manner contrary to their personal beliefs. Indeed, they readily accuse brands of being insincere (Maehle et al., 2011), inauthentic (Thomson and Arsel, 2004) and dishonest (Lee et al., 2009).\nTheoretical background: Hypocrisy\nScale development: Studies 1a and 1b: Item generation and reduction\nDiscussion and contributions: This research provides a comprehensive understanding of brand hypocrisy in terms of structure and dimensionality, and develops a valid, reliable, parsimonious scale for measuring brand hypocrisy. Results show the latter to be a multidimensional construct comprising four dimensions, namely image, mission, message and social hypocrisy. A brand can, therefore, be perceived as hypocritical from the consumer viewpoint in instances of the following:\nLimitations and future research avenues: This study, albeit not devoid of limitations, is the first to investigate the concept of brand hypocrisy from a consumer perspective. First, in all instances, participants had to self-select a hypocritical or non-hypocritical brand irrespective of product or service category. Although the study covered a broad range of categories, future work could determine whether or not systematic variations exist across categories. 
Such an investigation might be particularly interesting from a managerial standpoint. Relatedly, across studies, it would have been appropriate to control for participant attitudes towards given industries or product categories. Second, this study does not probe individual variations in perceptions of brand hypocrisy. Future work might wish to consider individual variables which can interfere with judgments relating to brand hypocrisy. Skepticism (Obermiller and Spangenberg, 1998), cynicism (Helm, 2004) and brand engagement in self-concept (Sprott et al., 2009) represent interesting examples deserving of consideration. Third, this paper does not provide a comprehensive assessment of the antecedents of perceptions of brand hypocrisy. Although the dimensions identified herein do provide some information about how perceptions of brand hypocrisy emerge in the eyes of consumers (e.g. a brand that supports a cause misaligned with the brand's mission - leading to social hypocrisy), a framework including various types of antecedents (e.g. brand action and communications) would be worth developing in future studies. For example, what role would a brand scandal play in generating perceptions of brand hypocrisy? How important is brand response for attenuating or fuelling judgments of hypocrisy? Experimental studies could be relevant for assessing the antecedents involved. Fourth, only two consequences of perceptions of brand hypocrisy are identified herein, namely brand distance and negative word-of-mouth. Future work could provide a broader assessment in this regard by considering the likes of brand hate (Zarantonello et al., 2016), revenge behavior (Gregoire et al., 2009), trash-talking (Marticotte et al., 2016), loss of brand trust (Sirdeshmukh et al., 2002) and the development of judgments of the like over time. It would therefore be worth contemplating longitudinal approaches. 
Lastly, the potential interrelations between corporate hypocrisy and brand hypocrisy merit future work. Although this study argues in favor of the need to understand hypocrisy at the brand level, it does not assess under which conditions judgments of hypocrisy extend (or not) from companies to brands and vice versa.\n",
"role": "user"
},
{
"content": "Purpose: There is increasing interest in understanding negative consumer reactions to brands and the nature of negative brand perceptions. The purpose of this paper is to conceptualize the construct of brand hypocrisy from a consumer perspective and develop a scale to measure it.\nDesign/methodology/approach: A multiphase scale development process involving 559 consumers was conducted. Study 1 pertains to item generation and reduction phases. Study 2 reports on scale purification and validation through confirmatory factor analyses and model comparisons. Study 3 focuses on discriminant and predictive validity, while Study 4 further investigates predictive validity using real brands with differences in brand hypocrisy.\nFindings: A 12-item scale measuring four dimensions of brand hypocrisy is developed: image hypocrisy (brand failing to put words into action), mission hypocrisy (brand exerting an unacknowledged negative impact on society or consumer well-being), message hypocrisy (brand conveying unrealistic or unattainable images) and social hypocrisy (brand supporting social responsibility initiatives for strategic purposes only). Results indicate that brand hypocrisy is distinguishable from similar constructs in the literature and that it is a significant predictor of negative word-of-mouth and brand distance.\nPractical implications: This conceptualization provides managers with a detailed understanding of what constitutes a hypocritical brand in the eyes of consumers as well as insights about how to prevent consumer perceptions of brand hypocrisy.\nOriginality/value: Findings enrich the understanding of negative consumer inferences related to brands and provide a conceptualization of an understudied but increasingly relevant form of brand judgment.\n",
"role": "assistant"
}
] |
Why buy second-best? The behavioral dynamics of market leadership
|
[
"Brand management",
"Leadership",
"Pricing",
"Fair value",
"South Korea"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nBackground and motivation for the study: In most markets there is an identified market leader, who is market leader because of some competitive advantage which is obviously compelling to the majority of consumers. If this leading brand does indeed have a compelling competitive advantage, the question immediately arises of why anybody in the market should bother buying the second-best brand. Clearly, there may be situations when a minor brand has particular product or service attributes that appeal to a minority of customers, but then this begs the definition of a \"market,\" and the \"second-best\" question should then be applied at the segment level. Market imperfections, or the strategic actions of competing firms, may be claimed to be the cause of buying second-best. It should be possible, however, to tease this argument out, because if people only buy second-best because first-best is not available in their part of the world, for instance, then their attitudes and their preferences should still make it clear that they truly consider their choice to be second-best. Indeed, it is a fundamental tenet of marketing and consumer behavior that it is consumer preferences which drive purchase behavior. It has been a central focus of consumer behavior to determine exactly how consumers make their choices and, even though great strides have been taken, it is still very hard to state concisely how the process works. There is agreement at a basic level, though, that when involvement in the purchase process to hand is high, then there is probably some attribute evaluation taking place in the mind of the consumer prior to purchase. 
Various models of attribute evaluation have been proposed and are still being contested but, again, a broad consensus has been reached that some sort of trade-off takes place. This concept, of a type of mental accounting involving trade-offs between attributes, is encapsulated not only in the attitude literature (Voss et al., 2003), but also in the more general choice and brand equity models. In this context brand attributes are often clustered into groups; perhaps functional and emotional (Sheth et al., 1991) or hedonistic and utilitarian (Batra and Ahtola, 1991). Dhar and Wertenbroch (2000) used the latter classification when they proposed that it is a trade-off between hedonistic and utilitarian attributes which leads to brand equity and thus to purchase and market share. Keller (1993) took a different view and claimed that the trade-offs are between attributes, benefits and attitudes/values. Price, or price fairness, is assumed by all these researchers to be simply an attribute, and is dealt with in the same way as any other attribute. However, there is a long history and large body of evidence that price is almost always an important feature, especially when there is a high level of involvement concerning the particular purchase situation to hand (Monroe, 1990; Sweeney and Soutar, 2001; Zeithaml, 1988). It is our intention here to inspect the differences in the trade-off patterns between purchasers of first- and second-tier brands, which might explain why people do sometimes opt for what appears at first sight to be a sub-optimal purchase solution. 
As can be seen in the basic model which we use as a framework (shown in Figure 1), we will inspect the differences in the trade-offs, between purchasers of leading and following brands, in terms of perceptions of hedonism, utilitarianism and price fairness. In the work that follows we will briefly review the question of market leadership, and then move on to consider three alternative views of consumer choice: the perspective of pricing scholars, of those who consider hedonism in some form or other the driving motivation for purchase, and those who hold more utilitarian views. Finally, the research method will be discussed and the analysis presented.\nMarket leadership\nResearch method: The research reported here took place in Seoul, Korea, and involved selecting the leading brand and the main followers in four markets, then asking 405 university-student purchasers of both brand categories to give their perceptions of the brands on a self-administered, face-to-face survey. First the model is fitted, then the two sets of data compared to seek the trade-off information required.\nMaterials\nResults: In both models the fit is adequate for exploratory work such as this, and all paths (in both models) are significant. At this point the scale reliability is assessed and variables constructed to represent the latent variables. These data are then explored graphically and by t-test (see Table I). All scale items contribute toward their respective scale, so scale means were calculated to allow comparison between the perceptions of purchasers of first- and second-tier brands (see Table II). These are presented in Figures 3 and 4. T-tests reveal no significant difference between the perceptions of the purchasers of first-tier brands of price fairness. 
However, the perceptions of this group of both the Utility and Hedonism of first- and second-tier brands are significantly different (Hedonism mean difference =0.37, t=2.38, p=0.02; Utilitarianism mean difference =0.73, t=4.97, p <0.001). In Figure 4 the corresponding perceptions of the purchasers of second-tier brands are shown diagrammatically. Although the perceptions of the Hedonism of leading and following brands are the same for this group, differences in both Price fairness and Utilitarianism are highly significant (Price fairness mean difference =0.95, t=4.49, p <0.001; Utilitarianism mean difference =0.42, t=4.06, p <0.001).\nDiscussion: First, consideration of the path analysis reported in Table I yields insights. In this case the causal relationship between the three latent variables and satisfaction with the brand is being measured. The power and significance of the covariances reported are very similar for the two models, with the odd exception that the path from price fairness to satisfaction for the first-tier buyers is actually negative, where it is positive for those buying second-tier brands. This suggests that the people who buy top brands actually want to pay top prices for them - a lower (fairer) price would yield less satisfaction. More important, however, is that in both models utilitarian attributes are of paramount importance to satisfaction with the brand. This is not in accordance with conventional wisdom, which leads us to believe that price is most important to buyers of second-tier brands and hedonic considerations most important to buyers of leading brands. Note that the models are relating attributes to satisfaction with the brand, rather than with intention to purchase; although intention to re-purchase does follow strongly from satisfaction. Interpretation of the means and mean differences also yields interesting insights into the trade-offs that consumers make. 
For purchasers of leading brands price fairness is not perceived to be particularly high and they perceive no difference in price fairness between brands, but they believe that they gain greater utilitarian and hedonistic benefits from the purchase of the leading, rather than following, brand. Possibly they feel they are trading a higher price for superior utilitarian as well as hedonic benefits. Their counterparts - those who purchase one of the second-tier brands - have different perceptions, believing that they not only get a fairer price but get superior utility from buying the second brand. There is no apparent difference for them regarding the hedonistic benefits of leader and follower brands, and hedonistic benefits generally do not seem to be perceived as high overall as are utilitarian benefits. For these consumers, then, the trade-off is about price and utilitarian benefits.\nImplications for marketing practice\n",
"role": "user"
},
{
"content": "Purpose: - In most markets it is possible to identify a brand leader and a number of brand followers. The purpose of this paper is to address the following question: \"Why should any consumer purchase a 'second-best' option?\"\nDesign/methodology/approach: - Consideration of the way in which these two groups of consumers (brand leaders and followers) trade off product attributes (utilitarian benefits, hedonistic benefits and price fairness) during the choice process is made in the study by path analysis of the appropriate structural equation models and then a consideration of the level and the differences between the means of the three variables.\nFindings: - The paper finds that utilitarian attributes are of paramount importance to satisfaction with the brand. This is not in accordance with conventional wisdom, which leads us to believe that price is most important to buyers of second-tier brands and hedonic considerations most important to buyers of leading brands.\nResearch limitations/implications: - The model fit attained is only good enough for exploratory work (GFI =0.88/0.90) so replication with a larger sample would be appropriate. In addition, the research is carried out in South Korea - although we know of no reason why this should affect the results it would be of interest to generalize the results to a western economy. Nevertheless, the method of first fitting the model then trading off the attribute levels is unusual and quite interesting.\nPractical implications: - Marketers of leading brands should not concern themselves about price but merely stress the benefits, whereas marketers of follower brands should make much of the price differential.\nOriginality/value: - The importance of utilitarian benefits have hitherto not often been stressed when marketing premium brands - this research suggests that this is wrong.\n",
"role": "assistant"
}
] |
Impact of hospital atmosphere on perceived health care outcome
|
[
"Healthcare",
"Service quality",
"Atmosphere",
"Perceived outcome"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Research into patient perception toward healthcare service quality has become popular in recent years. Healthcare service perceptions are important (Akin and Hutchinson, 1999), as they affect patient satisfaction (Qatari and Haran, 1999), loyalty (Gaur et al., 2011), service provider choice and willingness to pay (Annis, 1981). To our knowledge, no study has considered the relationship between hospital atmosphere and customer behavior. The interaction between service quality and atmosphere is unexplored. Our study aimed to understand the relationship between service quality, atmosphere and outcome.\nAtmosphere: Healthcare literature largely ignores healthcare atmosphere, while retailing literature covers its impact on service quality. Studies on retailing posit that retail store atmosphere plays an important role in influencing store choice, customer satisfaction and loyalty (Chang and Tu, 2005; Pan and Zinkhan, 2006; Sirgy et al., 2000). Environmental cues, or atmospherics are so pervasive in a store that retail outlets may be considered cues, messages and suggestions that communicate with consumers (Markin et al., 1976). Furthermore, the external environment also influences service settings (Bitner, 1992). Atmospherics is defined as the emotionally oriented design of space, which can affect the customers as well as the employees (Skandrani et al., 2011, p. 53). Markin et al. (1976) suggest that store personalities are created and shaped, and these personalities - friendly, upper-class, aloof, high quality, low priced, convenient, warm, inviting, cool, haughty, etc. - in turn affect customer attitude, store choice and loyalty. 
Mehrabian and Russell (1974) identified three basic emotional states that impact approach-avoidance behavior: pleasure-displeasure, arousal-non-arousal and dominance-submissiveness. Novelty, surprise, unusualness, crowding and unfamiliarity in the environment will lead to stimulation, arousal, excitement, alertness and jitteriness among individuals. Because the healthcare literature on atmospherics is scarce, the scale we employed is based on the retailing literature (Kasulis and Lusch, 1981; Donovan and Rossiter, 1982; Sirgy and Samli, 1985; Darden and Babin, 1994; Liljander et al., 2009) and environmental psychology (Mehrabian and Russell, 1974). The resulting scale measures emotional and psychological states by employing semantic differentials (Table I).\nHealthcare service quality: Healthcare quality is understood as the degree to which services achieve the desired health outcomes for individuals and populations (Institute of Medicine, 2001). Three healthcare quality elements have been identified: structure (infrastructure, technology, staffing, funding, etc.), process (caretaker and patient interaction) and outcome (health status, mortality rate, patient satisfaction, etc.). Increasing attention is being given to patients' healthcare service quality perceptions (Donabedian, 1980, 1988; McIver, 1992). Studies distinguish between observed and perceived quality (Palmer et al., 1991). Observed care addresses healthcare structure and process, and determines whether professionally defined standards are met. Perceived quality, on the other hand, is the patients' quality perceptions (Donabedian, 1980). We sought to measure healthcare perceptions using the traditional service quality dimensions in SERVQUAL: tangibility, reliability, responsiveness, assurance and empathy (Parasuraman et al., 1985, 1988). 
In hospital service quality, these dimensions are defined as physical environment, functional quality and diagnostic accuracy, communication, trustworthiness and exactness, keeping promises, accurate records, timeliness, good and relevant specialized knowledge, politeness, sympathy, reassurance, willingness to help and ability to reflect patients' needs (Furrer et al., 2000; Polsa et al., 2013). SERVQUAL has been used successfully in healthcare (Curry and Stark, 2000; Curry and Sinclair, 2002). Haddad et al. (1998) developed a healthcare service quality perception scale comprising three dimensions: healthcare delivery, personnel and facilities. The healthcare delivery subscale has five items: good diagnosis, prescribing, drugs availability and quality, and recovery or cure. Eight factors are included in the second subscale, personnel: clinical examination, follow-up and monitoring, overall reception, compassion and support for patients, respect for patients, allowing sufficient time for patients, information (about the health problem) and honesty. The third subscale, facility, has seven items: costs, payment arrangements, physical accessibility (distance), medical staffing, women's services, equipment and rooms. This scale is adapted for different nations (Narang, 2010, 2011; Duong et al., 2004). The dimensions we employed stem from Haddad et al. (1998), Narang (2010), Dagger et al. (2007), Parasuraman et al. (1985), Furrer et al. (2000) and Polsa et al. (2013) (Table I).\nPerceived outcome: Outcome is interpreted differently in healthcare. Reerink and Sauerborn (1996) considered mortality, morbidity, malnutrition, illness duration, quality of life and satisfaction as outcomes. Population health status, deaths and disability-adjusted life years are generally used by the government and healthcare organizations to measure outcomes. According to WHO (2000), outcomes can be measured as deaths, disability-adjusted life years or responsiveness. 
Patients, however, may view outcomes differently; e.g., satisfaction with their visit (Qatari and Haran, 1999). Donabedian (1980, 1988) and WHO (2000) suggest that patient satisfaction is a major quality outcome. Uzochukwu et al. (2004) found that most quality assessment studies measure three outcomes: medical, cost and client satisfaction, while Forbes et al. (1997) deduced two broad outcomes: comfort and function. Confidence, satisfaction and loyalty are relational outcomes following doctor-patient interactions (Gaur et al., 2011). Haddad et al. (2000) included health status, fears and anxieties, return to routine activities, ability to react if health deteriorates and motivation to follow prescribed treatment. In this study, we take outcome to mean the patient's perceived change in his/her health status that impacts his/her feelings. The scale used for outcome was based on Qatari and Haran (1999), Reichheld (1997), Sheth and Parvatiyar (1995) and Annis (1981) (Table I).\nThe relationship between healthcare atmosphere, quality and outcome: Many studies found a strong relationship between service quality and outcome. Zeithaml et al. (1996) suggest that patient behavior is likely to be affected by service quality and satisfaction. Other studies support a positive relationship between service quality and intention to visit the healthcare center again (Andaleeb, 2000; Uzochukwu et al., 2004). Patients' healthcare perceptions also affect their compliance with treatment and interpersonal relationships with doctors and, consequently, outcomes (Pascoe, 1983; Rubin, 1990).\nMethod: We tested our hypothesis with data from Finland (55 patients); India (80); Nigeria (99) and the People's Republic of China (81). Total respondents were 315 (Table II). We used purposive sampling (Patton, 2002) to gather data from those hospitalized in the past six months. 
Questionnaires were delivered to master's degree students, who were recruited by a course instructor and who filled out the forms if they had been inpatients during the past six months. If they had not been hospitalized, they were asked to pass the questionnaire to a friend or a relative who had been; we thus used a snowball sampling procedure (Patton, 2002). In the sample, over 90 percent of the hospitals were in urban areas; 33 percent were private.\nResults and discussion: Our final scale (Table I) reflects three healthcare dimensions: employees; drugs and diagnosis; and environment and access. It compares well with the scales employed in other studies (Haddad et al., 1998; Narang, 2010). Our scale also emphasizes compassion and support, respect toward patients, obtaining drugs, drug quality and medical equipment. When compared with the traditional service quality dimensions (tangibility, reliability, responsiveness, assurance and empathy) (Parasuraman et al., 1985, 1988), our scale reflects some traditional items, such as willingness to help, responding promptly and physical environment. Accessibility and politeness are also important in all studies, including ours. The literature shows that atmospherics has largely been studied in terms of ambience (Heide and Gronhaug, 2006; Mehrabian and Russell, 1974; Wysocki, 1979; Stevens, 1980) and interaction and design (Slatten et al., 2009). However, some studies have emphasized the emotional aspect (Mehrabian and Russell, 1974; Slatten et al., 2009), retail atmosphere (Donovan and Rossiter, 1982) and service settings. 
Our atmosphere scale (Table I) reflects only the psychological and emotional aspects, and includes items such as contented/depressed, happy/unhappy, satisfied/dissatisfied, pleased/annoyed, relaxed/bored, important/insignificant, free/restricted, hopeful/hopeless, common/rare (Donovan and Rossiter, 1982).\nConclusions and recommendations: Our results show that service quality and atmosphere have a statistically significant positive relationship with patients' perceived outcome. While patients' perceived outcome is not the actual outcome, such as improved health or being disease free, it nevertheless warrants further attention in healthcare service quality research. The same applies to atmosphere, which has been studied less. Our main contribution lies in showing that healthcare service quality and service provider atmosphere impact the patients' health perceptions. Our findings condense the original SERVQUAL and Haddad scale dimensions into three: employees; drugs and diagnosis; and environment and access, and this scale has been cross-culturally validated. The scale, based on combined scales from earlier literature, is the main contribution we make. While SERVQUAL was developed in American and European research traditions, the Haddad scale is used in less affluent empirical settings. In addition to combining these scales, we have combined western (Finland), eastern (China and India) and southern (Nigeria) empirical samples. Thus, we provide a healthcare service quality scale that is cross-culturally more emic than its earlier versions. Though we dealt mainly with emerging or developing markets, we provide directions for further research in developed nations, which we feel is another major contribution, as developing nation researchers have always drawn lessons from developed countries, and now developed nations may learn from developing ones.\n",
"role": "user"
},
{
"content": "Purpose: Healthcare service quality studies primarily examine the relationships between patients\nDesign/methodology/approach: Data were collected from Finland, India, Nigeria and the People\nFindings: Results showed that atmosphere and healthcare service quality have a statistically significant relationship with patient perceived outcomes.\nResearch limitations/implications: The sample size was small and the sampling units were selected on convenience; thus, caution must be exercised in generalizing the findings.\nPractical implications: The study determined that service quality and atmosphere are considered significant for developing and developed nations. This result could have significant implications for policy makers and service providers developing healthcare quality and hospital atmosphere.\nOriginality/value: Studies concentrate on healthcare outcome primarily regarding population health status, mortality, morbidity, customer satisfaction, loyalty, quality of life, customer behavior and consumption. However, the study exposes how patients perceive their health after treatment. Furthermore, the authors develop the healthcare service literature by considering atmosphere and perceived outcome.\n",
"role": "assistant"
}
] |
Road map for the implementation of green manufacturing practices in Indian manufacturing industries: An ISM approach
|
[
"Performance measurement",
"Strategy",
"Manufacturing strategy",
"Critical success factors",
"ISM",
"Green manufacturing"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Rapid industrial growth in the last century has led to many economic, environmental and social problems from global warming to local waste disposal. The industry has come under increasing pressure to reduce the environmental impacts involved in the production of goods and services. In recent years, green manufacturing (GM) have been widely considered for implementation to reduce environmental and social impacts as well as to improve economical competitiveness (Sangwan, 2011). GM has emerged in the last few years and covers all phases of product's life cycle from design, production and distribution phases to the use of products by the end users and its disposal at the end of product's life cycle. The scarcity of natural resources and growing concern of environment in the market for \"green\" issues have forced executives to manage operations within an environmental perspective (Azzone and Noci, 1998).\n2. CSFs for GM implementation: A few decades ago, almost every company had a thought that environmental goals and sound business strategies are mutually exclusive, as any investment to improve environmental performance would lead to an increment in lead times and cost and reduction in quality. All of these adversely affected the profit of organizations. Porter (1991) opined that pollution is a type of waste which diminished the value of products and processes. In the last ten years, a radical change has come in management's view on pollution, the need for pollution reduction and better environmental management (Melnyk et al., 2003). GM differs from traditional manufacturing as it focuses on environmental impact, environment policies of governments, national and international environmental regulations, stakeholder activism and environmentalism and competitive pressures. 
Proper environmental regulations or standards can trigger green innovations that actually decrease cost, increase productivity or make companies more competitive (Digalwar et al., 2013). So, many researchers have attempted to implement GM by applying concrete strategies, key success factors and technical and managerial concerns. Berry and Rondinelli (1998) described six critical elements - training and education programs, top management involvement, resources, infrastructure, facilities and supplier management - needed to develop an effective proactive environmental management system. The prime focus of an environment management system (EMS) is to prevent adverse environmental effects and improve environmental performance by institutionalizing various environmental programs and practices, such as initiating environment-related performance measures and developing green technologies, processes and products (Gupta, 1995). Azzone and Noci (1998) defined the basic \"green\" manufacturing strategies of a company and identified the most effective PMS required for the successful implementation of each \"green\" manufacturing strategy. Inman (2002) reviewed the literature from the environmental and operations management perspective and introduced some factors which give direction for the environmental management and operations management interface. Daily and Huang (2001) illustrated top management support, environmental training, employee training, employee empowerment and reward systems as the key human resource factors for the implementation of an environmental system. A set of seven critical factors - top management commitment to environmental management, total involvement of employees, training, green product/process design, supplier management, and measurement and information management of environmental management - was also developed and validated by Wee and Quazi (2005). 
Zutshi and Sohal (2004) broadly classified the CSFs for the adoption of an EMS into four categories: management leadership and support, learning and training, internal analysis, and sustainability. Zhou et al. (2008) described a green product using six attributes, namely, development and design, manufacture, packaging, sale and transport, use and maintenance, and recycling, and their variables/items.\n3. Methodology: In this study, a questionnaire-based survey and ISM have been used to achieve the research objectives. These methodologies and the respective results are discussed separately in the next two sections.\n4. Conclusions: Emerging concern for the environment has brought manufacturing industries to take steps toward GM, so these findings will be enormously helpful for industry professionals in developing a road map for the implementation of GM in industry.\n",
"role": "user"
},
{
"content": "Purpose: Green manufacturing (GM) has received growing attention in the last few years. As today's business environment is highly competitive, dynamic and volatile in nature, GM is seen as the winning strategy to be adopted by manufacturers. In this scenario, successful implementation of GM by manufacturers can give them an edge over their competitors. The purpose of this paper is to identify and develop the structural relationship among different factors for the successful implementation of GM.\nDesign/methodology/approach: A questionnaire-based survey and interpretive structural modeling (ISM) approach has been applied in this study. In this research, 12 factors have been identified, such as top management commitment, supplier and materials management, green product and process design, employee training and empowerment, etc. On the basis of survey results and opinions of experts from industry and academia, a structural relationship model has been developed through an ISM approach.\nFindings: Top management commitment has the highest driving power for the implementation of GM. Effective implementation of GM will improve organization performance in terms of product cost and quality. For effective implementation of GM, management should not ignore development of people, product and processes.\nPractical implications: The developed model will be useful to managers, decision makers and practitioners of GM in prioritizing their resources more effectively.\nOriginality/value: Critical success factor play a pivotal role for the assessment and implementation of GM practices. Identification of key success factors and their mutual relationship can help in implementation of GM practices in various organizations. It is probably one of the leading attempts to provide road map to the implementation of GM practices in Indian manufacturing industries.\n",
"role": "assistant"
}
] |
Conceptualisation of the service experience by means of a literature review
|
[
"Service management",
"Service experience",
"Literature review",
"Bibliometrics"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Service operations have been growing in importance in the contemporary world, with a consequent increase in academic interest in the subject (Chase and Apte, 2007). Heineke and Davis (2007) highlight the importance of this topic in confirming the interdependence between assets and services, i.e., the production and sale of products are generally associated with some type of service, such as transport, delivery, assembly, assistance, replacement, or telemarketing. Moreover, there are no \"pure\" services or products, and their coexistence is evident in business relations at industrial, commercial, or service organisations (Bateson and Hoffman, 2008).\n2. Research methods: This investigation and analysis is based on literature that was obtained through a systematic bibliographic review that was conducted in three parts (Walsham, 2006; Levy and Ellis, 2006): article search and selection; analysis of the results (bibliometric description, content reading, and analysis); and documentation (construction of the service experience concept).\n3. Descriptive results: This section presents the most relevant results of the bibliometric study on the service experience and also identifies the most relevant articles, journals, and keywords that were found. As shown in Figure 1, publications on this topic began to appear in 1991 and have been growing in number ever since, particularly over the last few years.\n4. Conceptualisation of service experience: After the descriptive analysis, a more in-depth analysis of article content was carried out. The content analyses consisted of searching for concepts and approaches adopted by researchers on service experience. The content analysis was restricted to three sets of articles: the ones with the most citations in Web of Science, the most recent ones, and the ones most utilised as references.\n5. 
Conclusions: The main contribution of this paper to the services area is to refine a concept not fully clarified in the literature, which might guide other researchers in developing instruments in service operations management when conducting empirical work. Until now, numerous studies have been conducted based on theories which do not capture the experiential nature of some types of services, compromising the understanding of the aspects that drive service companies to improve performance.\n",
"role": "user"
},
{
"content": "Purpose: Service operations are currently growing in importance, with a consequent increase in academic interest in the matter. In this context, the purpose of this paper is to systematise the concept associated with the service experience by drawing from the definitions which have been established in the relevant scientific literature.\nDesign/methodology/approach: A systematic search in the literature was conducted, aiming to identify the main articles, journals, references, and keywords on the subject of customer experience. Using bibliometric content and analysis a number of articles are analysed towards a concept of service experience.\nFindings: The study verifies that service experience and service quality are distinct concepts that have to be analysed separately. The construction of this proposed concept has been oriented by content analysis of the reviewed articles. Dimensions and components that categorise this concept in a singular manner have been identified. The service experience, which can be thought of as a process, is composed of the interdependent and complementary dimensions of \"predispositions\", \"interactions\", and \"reactions\", which condition the perception of quality and affect current as well as future purchasing decisions. As a result of their strategic importance for service organisations, customer experiences therefore warrant careful analysis.\nResearch limitations/implications: This work is theoretical so empirical studies are necessary in order to validate or reject the proposed concept.\nOriginality/value: Very few systematic literature reviews of the characterisation and conceptualisation on service experience concept are available.\n",
"role": "assistant"
}
] |
Understanding the power of hope and empathy in healthcare marketing
|
[
"Emotions",
"Advertising",
"Empathy",
"Hope",
"Health care"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The landscape of healthcare in the USA has changed dramatically over the past decade. As of September 2015, the US Department of Health and Human Services reports a total of 17.6 million people are covered due to the Affordable Care Act (2010) between the Marketplace, Medicaid expansion, young adults staying on their parents' plan and other coverage provisions. Along with this influx of new consumers in the marketplace, more healthcare options are available, which results in increased competition. In addition, pharmaceutical companies and various providers of healthcare products and services (e.g. hospitals, health insurance providers) have flooded the media with direct-to-consumer ads, in part because many employers and insurers require healthcare consumers to become more responsible and knowledgeable about their healthcare choices. Healthcare advertisements feature doctors in scrubs posing with impressive machinery, emphasizing quality of care, offering testimonials and tugging at consumer heartstrings with emotional appeals (Kemp et al., 2015; Larson et al., 2005; Vater et al., 2014).\nConceptual framework and hypotheses: Emotions, unlike other feeling states (e.g. mood, affect), involve a high degree of cognitive awareness (Lazarus, 1991b) that guide daily behavior, simplify decision making and influence consumer judgments (Bui et al., 2012; Gordon, 2006; Kemp et al., 2014). Ad-based feelings and emotions play a crucial role in the formation of attitudes and judgments about products and services (Brown et al., 1998; Bulbul and Menon, 2010; Kemp et al., 2014; Mizerski and Dennis White, 1986). 
For example, Field and Pringle (2008) find that emotional advertising campaigns can create a sense of enduring differentiation for a brand and reduce price sensitivity, and that emotional ads are more likely to generate large profit gains than rational ones.\nStudy 1: Method\nStudy 2: In the interest of enhanced generalizability, a second study (S2) focuses on a different type of healthcare provider, cancer treatment centers. To further understand what executional elements work well with hope and empathy appeals in healthcare provider campaigns, S2 examines the impact of incorporating testimonials in hope and empathy-based ads. As noted above, testimonials are common in advertising for a variety of healthcare providers (Gaughran, 2010; Larson et al., 2005; Vater et al., 2014). For example, well-known campaigns for St. Jude Children's Research Hospital and Cancer Treatment Centers of America are dominated by testimonials from patients and medical experts.\nDiscussion and conclusion: Theoretical implications\n",
"role": "user"
},
{
"content": "Purpose: The dynamic landscape of healthcare has seen significant changes in marketing by the various types of healthcare providers. This research aims to explore the impact of emotions in healthcare advertising.\nDesign/methodology/approach: Two consumer panel experiments investigate the role of hope and empathy appeals in fostering positive evaluations toward healthcare providers (medical centers for serious illnesses).\nFindings: Study 1 shows that two types of emotion-based healthcare appeals are more effective than non-emotional appeals. Study 2 compares the relative effectiveness of hope versus empathy appeals with medical expert or typical person (patient) testimonials.\nResearch limitations/implications: Findings demonstrate that in a healthcare context, an expert testimonial enhanced the persuasiveness of a hope-based appeal, whereas testimonials from unknown patients were not effective.\nOriginality/value: Understanding the role of emotions in healthcare advertising is increasingly important as healthcare providers compete on care and quality outcomes and advertising agencies vie for the attention of consumers.\n",
"role": "assistant"
}
] |
Spillover of social responsibility associations in a brand portfolio
|
[
"Corporate social responsibility",
"Brand portfolio",
"Social responsibility associations",
"Spillover effect"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: As social responsibility (SR) continues its march into the mainstream (Berger et al., 2007), it is becoming clear that investments in such activity can provide companies with substantial returns in the form of purchase intent (e.g. Sen et al., 2006), word-of-mouth behaviors (e.g. Du et al., 2007), willingness to pay (e.g. Trudel and Cotte, 2008) and consumer-company identification (e.g. Lichtenstein et al., 2004; Sen and Bhattacharya, 2001). These outcomes occur by influencing the cognitive associations that consumers hold about both a company and its products. Realizing that these SR associations reside at multiple levels, some companies enact SR activities at both the corporate and product levels. For example, the Cheerios brand supports children's literacy through a community program that distributes children's books in Cheerios boxes; meanwhile, the company General Mills, which operates the Cheerios brand, donates millions of dollars to charitable organizations each year under the corporate brand (Source: General Mills Global Responsibility Report 2012[1]).\nConceptual underpinnings: Social responsibility\nConceptual model: Spillover effect of a product brand's SR initiatives\nExperiment 1: One hundred and twenty-one subjects (48 per cent male; age 19-29 years) were randomly assigned to one of the eight conditions based on a 23 between-subjects design: description of SR initiatives for Brand A: present or absent; corporate branding strategy: monolithic or endorsed strategy; and product category fit between Brand A and B: high or low fit. For stand-alone branding strategy, a fictitious competing brand was included in the instrument to represent the stand-alone brand.\nExperiment 2: As value-expressive products can be positioned as functional or symbolic brands (Bhat and Reddy, 1998), we included these two positionings in the experimental design. 
A total of 130 undergraduate students (59 per cent male; age 19-50 years) were randomly assigned to one of the eight conditions in a 2 (description of SR initiatives for Brand A: present or absent) x 2 (product category fit between Brand A and B: high or low fit) x 2 (brand positioning of Brand A and B: symbolic or functional) between-subjects design.\nGeneral discussion: Both experiments indicate the spillover effect from the SR initiative of product Brand A to another product Brand B and the corporate Brand C in a brand portfolio. In addition, the spillover effect from Brand A to B is shown to be mediated by SR associations with corporate Brand C (i.e. CSR associations). This mediating role of CSR associations suggests that when consumers are exposed to Brand A's SR initiative, the activation of this brand node and its SR associations first spreads to corporate Brand C before spreading to product Brand B.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to explore the spillover effect of social responsibility (SR) activity at the product brand level on the full brand portfolio. Extant research has established that SR activity can be beneficial to companies by influencing consumers' SR associations with the company and its product brands. However, most studies only look at the outcomes of SR implemented at the corporate level (i.e. corporate social responsibility [CSR]). This paper provides a new and expanded perspective by exploring how SR at the product brand level reverberates throughout the full brand portfolio. Drawing on associative network theory, the authors propose a conceptual model that predicts when and how SR associations with a product brand spillover to corporate brand and other product brands and the consequences of this spillover.\nDesign/methodology/approach: Two experiments were conducted to test the conceptual model. The authors used utilitarian products (frozen yogurt, ice cream, and soft drink) in the first experiment and value-expressive products (running shoes, T-shirt and watch) in the second experiment.\nFindings: Both experiments found support for the proposed spillover effect. The moderating impact of corporate branding strategy and product category fit on the strength of spillover effect were also examined.\nPractical implications: The findings will help managers make better decisions about which brands (product and corporate level) should be involved in SR activity.\nOriginality/value: This research offers a new perspective to look at the consequences of SR activity and reveals a larger picture than extant research on CSR by indicating the impact of a product brand's SR initiative on the whole brand portfolio.\n",
"role": "assistant"
}
] |
If kids ruled the world, how would they stop the non-medical use of prescription drugs?
|
[
"Prescription drug use",
"Adolescents",
"Mixed-methods",
"United States"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the USA, substance abuse by youth continues to be a public health concern even though rates of substance abuse among youth have declined over the past 20 years [1]. Some of the most commonly used substances among youth are alcohol and tobacco, followed by marijuana [2, 3]. Studies report that between 20 and 35 percent of youth used alcohol in the past 30 days [4-6], while approximately 25 percent used tobacco in the past 30 days [7], and nearly 22 percent of 12th graders used marijuana in the past 30 days [8]. Additionally, the type of substance used differs by age group and gender among youth. Younger teens are more likely to use substances they can inhale compared to older teens who are more likely to take non-medical use prescription medications [2]. In addition, even though males are more likely to use prescription medications compared to females, females aged 12-17 are more likely to use prescription medications non-medically when compared to males [9-11].Non-medical use is commonly defined as using a prescription drug that was not prescribed for you, taking medication in a manner or dose other than prescribed, or taking a medication to get a feeling of euphoria [12, 13]. The classes of prescription drugs that are typically used non-medically include stimulants, opioids, and sedatives [1]. Youth are sometimes prescribed these medications by a physician, but may also access and use prescriptions non-medically from their family's medicine cabinet [14]. In the USA, the National Survey on Drug Use and Health (NSDUH) reports that 5.9 percent of youth used a prescription drug that was not prescribed to them in the past 30 days [15]. Non-medical use of prescription drugs among youth is especially problematic due to the critical period of development in adolescents during this time; youth are at high risk for experimentation and addiction. 
Further, youth may perceive prescription medications as less harmful compared to other drugs [16], even though they can cause serious side effects. For example, stimulants can cause paranoia and an irregular heartbeat while opioids can cause drowsiness and nausea; depressants can cause slurred speech and shallow breathing [17]. Non-medical use of prescription medications among youth is not only harmful for the development of the adolescent brain and body but also has social consequences that may include delinquency [18], school dropout [19], abuse and subsequent dependence [20, 21].There are many factors contributing to the non-medical use of prescription drugs among youth. Established characteristics include being female [19, 22] and Caucasian [19, 23]. Use has been found to increase with age [22]. Individual risk factors such as deviant [24] and sensation-seeking behaviors [25], and peer influence [26] are two of the main reasons why youth initiate the use of prescription drugs non-medically. Other risk factors include previous medical use of prescription drugs [22], availability [27], and urbanicity [28]. In addition to these risk factors, there are protective factors for non-medical use of prescription drugs including parental monitoring [29] and academic achievement [19].The USA utilizes multiple national surveys to equip researchers to better understand the use of prescription drugs among youth including trends and changes over time, and implementation of interventions. National surveys such as Monitoring the Future (MTF), funded by the National Institute on Drug Abuse (NIDA), have assessed substance use and related attitudes among youth nationwide since 1975. The Youth Risk Behavior Surveillance System (YRBSS) used by the Centers for Disease Control and Prevention (CDC) tracks tobacco, alcohol, and other drugs used among youth. 
Further, the NSDUH conducted by the Substance Abuse and Mental Health Services Administration (SAMHSA), includes information regarding trends in the use of alcohol, tobacco, and various types of drugs from individuals aged 12 years or older. In addition, school-based programs including the Drug Abuse Resistance Education (DARE) program has been implemented to educate youth on the harms of drug use and to prevent the initiation of drug use. Other national representative studies such as the National Monitoring of Adolescent Prescription Stimulants Study (N-MAPSS) have been used to examine signals for non-medical use of stimulants and other drugs. Results from the N-MAPSS show that approximately 4 percent of youth reported prescription stimulant non-medical use in the past 30 days [30] and 3 percent used prescription opioids non-medically in the past 30 days [11]. The prevalence of other substance use has also been examined in this dataset. Though many of the national surveys described above assess data that are necessary to understand prescription drug use among youth, the N-MAPSS also included open-ended questions to probe youth on their opinions on why people use prescription drugs non-medically, how to educate, and how to stop kids from using prescription drugs non-medically.Our aims were to: examine perceptions of prescription drugs, their effects, and ideas on how youth would stop non-medical use of prescription medications among their peers. The aims of this study were addressed with the following questions on the survey: (1) How should kids your age be told about prescription drugs and their effects?; (2) If you ran the world, how would you stop kids from taking other people's prescription medicines?; and (3) Why do people use prescription stimulants without a prescription? 
We hypothesized that helpful themes would emerge on how the youth of different ages perceived prescription drugs, their effects, and ideas on how to stop non-medical use among them and their peers.\nMethods: Study design\nResults: Among the entire N-MAPSS sample (n = 11,048), 52 percent were female, 43 percent white, 48 percent resided in urban communities and the mean age was 15.1 years. As expected, demographics for the 900 youth randomized for these qualitative questions were similar with 52 percent female, 40 percent white, and 48 percent residing in urban communities with the same mean age (Table I). The themes that emerged for each qualitative question were determined by the coders and are outlined in Figure 1.How should kids your age be told about prescription drugs and their effects?\nDiscussion: Among a sample comprising over 11,000 youth, we assessed perceptions of prescription drug use and ways these youth would stop the non-medical use of prescription drugs. The youth believed they should be educated about the negative effects related to prescription drug use, that education efforts should be included in their formal education at school or that they should be informed through their parents. Parental education was a more common theme among younger children 10-12 years, while school education was a more common theme among older children 16-18 years of age. These results suggest that education efforts for prescription drugs may need to be delivered through different sources depending on youth age groups. Parents may have a greater influence on younger children, who expect their parents to educate them on prescription drugs, while formal education in schools may have a greater influence on older children and adolescents.Additionally, if youth ruled the world, they would stop kids from taking other people's prescription drugs by implementing more stringent laws. 
Differences emerged by age groups, where restricting use was the most common theme among those aged 10-12 years of age. Implementing stringent laws was more common among 13-15 year-olds. This suggests that strategies restricting access to other people's prescription drugs may have a greater influence on younger children, who believe this would be effective. Conversely, older children and adolescents appear to be more motivated by legal consequences and the threat of punishment.Interestingly, we found that youth, regardless of age, most commonly believed the main motivation for non-medical use of prescription stimulants was for their psychoactive effects. It was the most common belief among 16-18-year olds (67 percent) compared to every other answer they gave, as well as the younger cohorts. This may be potentially related to the increasing pressure to perform academically. Previous studies have found an association between prescription stimulant use and cognitive enhancement among college students [36-38]. In our study, the motivation for peer pressure decreased with age. The youngest youths (10-12 year-olds) believed non-medical use was due to peer pressure more frequently compared to older youths. Other studies have found substance use is often influenced by social networks and factors such as peer pressure [39-42]. These findings suggest that motivations for non-medical use of prescription stimulants may change as adolescents mature.The sampling technique used in the N-MAPSS resulted in prevalence rates of prescription drug use among youth that were similar to those found by traditional national surveys. In the last few decades, national surveys have examined prescription drug use among youth using quantitative surveys. In addition, some qualitative studies have used a focus group approach to gain a better understanding of attitudes and perceptions on substance use [43, 44]. 
Uniquely, the N-MAPSS not only surveyed youth about their own use but also used a mixed-methods approach to assess perspectives of youth on how to stop non-medical use of prescription drugs. Though surveys such as these are an integral part of monitoring prescription drug use among youth, supplementing measures with qualitative data can help to understand the opinions and concerns of youth regarding prescription drugs. Results from studies like the N-MAPSS, which include both quantitative and qualitative measures, can be used to better understand the non-medical use of prescription drugs among youth and to find possible interventions to reduce such use [31-33].We had the opportunity to analyze qualitative questions embedded in a larger quantitative study to get a better understanding of prescription drug use among youth. Results from this study are most relevant in today's epidemic. Previous nation-wide prevention programs such as the D.A.R.E. program have been shown not to be effective in the long term. In fact, literature has shown that D.A.R.E. was minimally effective [45, 46] and during a five year follow-up period, no significant differences were observed between intervention and comparison schools [47]. More recently, D.A.R.E. has revised its curriculum to include decision-making skills and social competence. Preliminary results from the revised curriculum have shown higher efficacy in preventing substance use [48]. Integrating perspectives from youth, such as the results from this current study, and teaching youth social skills into future prevention programs that target the prevention of drug use among youth may be a more effective prevention strategy. Assumptions by adults about non-medical use among youth can be avoided with this approach and resources can be directed to the most appropriate settings. 
For instance, integrated education efforts from multiple outlets including the media and doctor visits may be more effective than efforts delivered solely within a school setting. Using parental-based education efforts amongst younger children and then gradually increasing school-based efforts as children get older may also be an effective approach, based on our results.Some limitations of the data include the cross-sectional nature of the study, meaning that cohort changes over time cannot be examined. Data were also self-reported, which could lead to under-reporting of comments on the questions, to begin with. In addition, age differences were not empirically tested. However, these preliminary analyses examined through a mixed-methods approach have highlighted thematic differences by age among youth that need to be further examined. Gender differences and thematic differences that may emerge between youth who use and don't use prescription drugs non-medically should also be further examined. It should be noted that the data collected covers a broad time period (2008-2011) and may not reflect current prescription drug practices. However, the qualitative information ascertained from the youth can be translated for interventional purposes regarding the mitigation of non-medical use of prescription medications among their peers and these ideas can then transcend time.This study had many strengths in addition to using mixed methods to better understand youth and their perspectives on prescription drug use. The questions analyzed were open-ended and responses were clear and unambiguous allowing easy identification of themes. Also, the entertainment venue methodology was successful, with only 10 percent of participants declining to participate, allowing recruitment of a large data set of youth in a short period. 
Age, sex, race, and composition of urban and rural areas of this sample in the N-MAPSS were also comparable to the 2010 US Census data.Overall, the themes we observed in this study varied and helped identify some useful guidelines for moving forward. For example, younger children appear to have different views and suggestions compared to those of older adolescents. This suggests that different targeted approaches by age group would be useful to study, considering the themes identified when asking about education, prevention, and motivation. Strategies may be more effective if designed based on the opinions of the youth themselves, rather than the preconceived ideas of adults about the opinions of youth.\nConclusion: Though many national surveys quantify trends and changes in the prevalence of prescription drug use among youth, no study so far has directly asked youth about their perceptions regarding their ideas on prescription drugs and how they would prevent non-medical use. This current analysis uses qualitative methods to examine prescription drug use among youth and obtain their insight on future prevention strategies of non-medical use. These results suggest that further research among this age population should focus on prevention strategies more central to the youth, without making assumptions about drug use behavior.\n",
"role": "user"
},
{
"content": "Purpose: This study examined how youth would mitigate non-medical use of prescription medication among their peers.\nDesign/methodology/approach: The National Monitoring of Adolescent Prescription Stimulants Study (N-MAPSS) was conducted as an interview comprising 11,048 youth of 10-18 years of age between 2008 and 2011 from entertainment venues of 10 US urban, suburban, and rural areas. Using a mixed-methods approach, participants completed a survey culminating in open-ended questions asking: (1) How should kids your age be informed about prescription drugs and their effects? (2) If you ran the world, how would you stop kids from taking other people's prescription medicines? (3) Why do people use prescription stimulants without a prescription? Responses from a random sample of 900 children were analyzed using qualitative thematic analyses.\nFindings: The random sample of 900 youth (52 percent female, 40 percent white, with a mean age of 15.1 years) believed they should be educated about prescription drugs and their negative effects at schools, at home by parents, through the media, and health professionals. Youth would stop kids from using other people's prescription drugs through more stringent laws that restricted use, and providing education about negative consequences of use. Peer pressure was the most common reason the youth gave for using other's pills, though some reported taking them out of curiosity.\nOriginality/value: The importance of considering youth's opinions on non-medical use of prescription medications is often overlooked. This evidence, from a peer perspective, could end the illicit use of prescription drugs among today's youth.\n",
"role": "assistant"
}
] |
Store managers, profitability and satisfaction in multi-unit enterprises
|
[
"Multi-unit",
"Retailing",
"Managers",
"Customer satisfaction",
"Profitability",
"Financial performance"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\n1. Balancing act: the store manager: The store manager is ultimately responsible for the overall performance of the unit, often delegating many responsibilities to assistant managers. It is in the role of the store manager where the integration of marketing and operations strategy is particularly essential.\n2. Strategic implementation: Strategy implementation, \"the communication, interpretation, adoption and enactment of strategic plans\" (Noble and Mokwa, 1999, p. 120), is often treated as an afterthought by practitioners and academics (Noble, 1999) or is perceived as another discipline\n3. Services marketing and operations: Successful retail enterprises, like other services, result from marketing and operations efforts (Lovelock, 2000). Gronroos (1984) noted that service quality is comprised of technical quality (the what is delivered) and functional quality (how the product is delivered), and Zeithaml et al. (1993) identified both technical and functional quality elements in their SERVQUAL measure. Chase has long recommended that the core service product (the what) be separated from the interaction with the customer (Gronroos\n4. Operating orientations: Managers pursue profitability through their operating orientations. Three operating orientations are considered here: customer, sales, and production. These operating orientations are not mutually exclusive; rather, a manager can hold multiple priorities and these interactions may have differential effects on the unit\n5. Data collection: 5.1 Context\n6. Construct measurement and operationalizations: 6.1 Operating orientations\n7. Data analysis and results: The hypotheses were tested using hierarchical linear regression. 
To reduce multi-collinearity, customer and sales operating orientations were mean-centered and the logarithm of production operating orientation was used (Tabachnick and Fidell, 2001). In order to evaluate the effects of the interactions beyond the effect of the operating orientations, the individual operating orientations were entered in a block. A second block contained both the individual operating orientations and their interactions. Without exception, the regression equations containing both individual operating orientations and their interactions provided better explanatory power than including only the individual operation orientations. Adjusted R2 is reported as it is a more conservative measure when using small sample sizes as used here. Because of the exploratory nature of the research and the small sample size, relationships at a p\n8. Discussion: 8.1 Summary\n9. Conclusion: The multi-unit enterprise dominates the retail landscape; however, little is understood about how these organizations implement strategy consistently across units. The research reported here explores the critical role the unit manager plays in strategy implementation, highlighting how the manager\n",
"role": "user"
},
{
"content": "Purpose: This study aims to explore how managers of multi-unit retail chains balance unit customer satisfaction and profitability through the manager\nDesign/methodology/approach: This research links survey data (gathered from unit managers and customers) to customer satisfaction and unit financial performance.\nFindings: The study found that the store managers\nResearch limitations/implications: Using a sample drawn from within a single firm, this research examines profitability and customer satisfaction at the unit level and identifies the manager of the unit as occupying a key strategic role in the multi-unit enterprise.\nPractical implications: The findings highlight the critical role the unit manager of a multi-unit enterprise plays in driving customer satisfaction and unit profitability. Further, the results point to the challenge of managing the production-related responsibilities of the retail enterprise while striving to be sales and/or customer oriented. Implications for management are particularly salient when considering the combined effect of production-sales and production-customer orientations.\nOriginality/value: This study merges services operations and services marketing theory to explore how conflicting strategic initiatives are implemented at the store level, and how these, in turn, influence unit financial performance and customer satisfaction.\n",
"role": "assistant"
}
] |
Exploring career agency during self-initiated repatriation: a study of Chinese sea turtles
|
[
"Boundaryless careers",
"Career agency",
"China",
"Global career",
"Returnees",
"Self‐initiated expatriation",
"Self‐initiated repatriation",
"Careers",
"Career development",
"Expatriates"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Globalization, accompanied with advances in communication and transportation technologies as well as liberalization of immigration policies in many industrialized nations, has had a profound impact on the flow of international labor and the way people pursue their careers (Baruch et al., 2007; Cappellen and Janssens, 2005; Carr et al., 2005). It has also been suggested that global careers now have become more boundaryless due to career actors' increased proactivity in making career moves across organizational and national borders to develop their career competencies and marketability, and/or for personal reasons (Carr et al., 2005; Thomas et al., 2005).Along with this trend towards increased international mobility, there has been growing interest in self-initiated expatriation/repatriation (SIE/SIR) (e.g. Al Ariss and Ozbilgin, 2010; Cerdin and Pargneux, 2010; Richardson and Mallon, 2005; Tharenou and Caulfield, 2010). SIE/SIR research focuses on individuals who choose to expatriate or repatriate independently (Richardson and Mallon, 2005; Tharenou and Caulfield, 2010) and draws attention to individual agency with assumptions of being independent, self-directed, non-corporate sponsored, and responsible for one's own career (Inkson and Richardson, 2010). For example, Richardson and Mallon (2005) explored reasons for SIE by academics from four countries. Tharenou and Caulfield (2010) examined pull and push factors, such as host country embeddedness, home country attraction and repatriation-related shocks, to explain why SIEs repatriated to their home countries. Furthermore, Tung and Lazarova (2006) studied the career experiences of SIRs from countries in Central and Eastern Europe.Although the previous research has greatly enriched our understanding of this particular career form, questions still remain. 
First, the focus of SIE/SIR research has been on individuals from western industrialized countries (e.g. Carr et al., 2005; Richardson, 2006; Tharenou, 2003; Tharenou and Caulfield, 2010) and we know little about how individuals from non-western cultures pursue careers in this more integrated global economy (Pringle and Mallon, 2003). This is an important omission, because local cultural norms and values can exert strong influence on career enactment and can effectively shape one's career values and aspirations (Pringle and Mallon, 2003). Second, SIE/SIR literature pays less attention to career experiences of skilled career actors from developing countries (Al Ariss and Ozbilgin, 2010), who represent a major portion of the global workforce (Carr et al., 2005). Indeed, individuals from developing countries may not share the same career development patterns (Tharenou and Caulfield, 2010) and some career actors may not benefit from their overseas experiences due to administrative barriers in the host country, leading to a problem of talent waste (Al Ariss and Ozbilgin, 2010). As such, understanding the opportunities and obstacles facing individuals from developing countries is critical for advancing our knowledge of SIE/SIR. Lastly, except for a few studies (e.g. Begley et al., 2008; Tharenou and Caulfield, 2010; Tung and Lazarova, 2006), most of the previous research has focused on SIE and we know relatively little about SIR. That many returnees may have difficulty re-adjusting to their home culture and experience reverse cultural shock (Tung and Lazarova, 2006) highlights the need for further research on SIRs.To address these gaps, in this paper, we sought to explore how Chinese repatriates experience their return to China. Specifically, we sought to understand how they exercise career agency during repatriation. 
In addressing this question, the paper makes an important contribution to the SIE/SIR literature by drawing on the contemporary careers theories and, more specifically, literature on career agency.The rise of the Chinese economy has been attributed partly to the repatriation of overseas Chinese, especially those who are highly skilled and received education or training in western, industrialized countries such as the US, Canada, and the UK (Wright et al., 2008; Zweig et al., 2004). In the local popular press, these Chinese returnees are called \"sea turtles\", or haigui (return from overseas). This group is expected to continue to be instrumental in building China's economy and has been seen as a catalyst for social change (Engardio, 2009; Li, 2005). Although the return of the \"sea turtles\" has drawn a great deal of media attention and studies have revealed the important role of the returnees in facilitating international knowledge transfer and local entrepreneurship (e.g. Dai and Liu, 2009; Liu et al., 2010; Wright et al., 2008), relatively little research has been done on career-related experiences of the returnees (for exceptions see Fong, 2011; Tung, 2007; Xu, 2009; Zweig and Han, 2010).In the following sections, we first review literature on the boundaryless career and career agency as well as SIE and SIR. Drawing on the literature on Chinese culture, we then examine the potential cultural influence on Chinese returnees' SIR experiences. Afterwards, we explain the methodology, data collection, as well as data analysis. A detailed discussion of the findings and implications for theory and practice is then provided.\nTheoretical background: Boundaryless careers and individual agency\nMethod: Due to the paucity of research on this topic, we adopted an interpretive approach using in-depth, personal interviews (Seidman, 2005) with a limited number of Chinese SIRs. 
A small sample (in this case twenty) is considered appropriate given the exploratory nature of the study (Eisenhardt, 1989). Such limited numbers are not unusual in career studies. Indeed, studies based on smaller samples providing in-depth narrative have been instrumental in advancing career theory (e.g. El-Sawad, 2005; Nicholson and Arnold, 1989). The major goal of the present study is not to test existing theories or hypotheses but to develop an initial theoretical understanding grounded in empirical data. Although our sample size limits the generalizability of our results, we proceeded systematically through our data collection and analysis to ensure clear, rigorous findings using the tenets of interpretive research (Gephart, 2004). Therefore, the objective of providing an in-depth understanding of interviewees' experiences of repatriation has been achieved.Personal contacts were used to locate willing participants in China, which is often necessary when undertaking international research (Richardson and Mallon, 2005). The authors used their own personal guanxi or personal relations (e.g. former classmates, colleagues, friends) to identify interviewees for the study. In identifying our sample, we used the definition of SIR presented at the beginning of the paper as individuals who have returned to their home country to pursue careers after spending a significant amount of time elsewhere. Hence the participants in our study include both individuals with Chinese citizenship as well as those who immigrated to foreign countries with their families at a young age and returned to China with foreign citizenship or permanent residency. We included participants from a wide variety of age groups and backgrounds, who are currently working in multiple major cities in China, including Beijing, Shenzhen, Chongqing, Zhuhai, Hong Kong and Macau. Taken together, the interviewees represent a range of educational backgrounds and international experience. 
Interviewees had returned primarily from Western developed countries and are currently working in industries such as banking, education, IT, accounting, tourism, academia, consulting and urban planning. More detailed demographic information of the interviewees can be found in Table I.Hong Kong, Macau and mainland China share the same cultural roots in Confucianism. However, they have distinct historical paths and exposure to external influences that exemplify the diversity of Chinese returnees. Macau has witnessed the long-term presence of Portuguese people and administration, and as such it has been influenced by Southern European values and culture (Simpson, 2008). Hong Kong culture has been deeply influenced by Britain (Ralston et al., 1993). On the other hand, the contemporary history of mainland China has been mostly shaped by its own social, cultural and political developments and, compared to Macau and Hong Kong, has had less exposure to external influences (Ralston et al., 1993).Interviews were conducted between February 2009 and December 2011, and lasted 60-90 minutes each. The interviews were conducted in English or in Chinese depending on the preference of the interviewee. All interviews were done in-person, and to encourage sharing of information, all took place outside of the interviewees' workplace such as in local tea houses, restaurants, participants' homes, or in one interviewer's office. Eighteen of the interviews were recorded and transcribed verbatim. Two interviewees preferred not to have their interviews recorded, so extensive notes were taken during and afterwards. The interviews that took place in Chinese were translated into English for analysis. Back translation techniques were used to ensure the translation accuracy (Brislin, 1970).During the interviews, interviewees were first ensured of confidentiality and then were asked to provide demographic information. 
After that we asked a series of open-ended questions to elicit accounts in which participants could describe and interpret their career experiences in their own words. Specifically, we asked them to describe their reasons and motivations to repatriate, their career purposes and values, their career experiences in the host country, their career practices and experiences prior to and after they returned to China, factors facilitating or impeding their careers, and their desired career outcomes.\nData analysis\nFindings: Individual variations\nDiscussion: Drawing on empirical data gathered from a study of Chinese SIRs, this study addresses the gap in our understanding of SIRs in developing countries. In doing so, it makes three theoretical contributions. First, it contributes to career agency theory by applying the six dimensions of career agency proposed by Tams and Arthur (2010) to SIRs. The reported findings provide initial empirical evidence about the importance of personal control, agency, proactivity, as well as self-determination in developing one's own career when making an international move. They also highlight the impact of context on SIRs' career experiences in the repatriation process. From this perspective, they suggest that both human agency and context are important in shaping career experiences.\nSecond, the study makes a contribution to current literature on SIE/SIR by exploring the experiences of a group of SIRs in a developing country, in this case China. The findings show that despite the perceived abundance of job opportunities in China, SIRs experience significant challenges during repatriation. 
Some of these challenges relate to perceived difficulty in applying their knowledge from the \"host\" country to the local context, whereas others relate to the challenges of re-adjusting to living in China and getting used to Chinese cultural norms.\nLast but not least, the study contributes to the contemporary careers literature by identifying contextual factors at different levels - interpersonal, organizational, and factors in the larger economic, socio-cultural, and political environments in the home country - that facilitate or impede the career experiences after SIR. These findings lend empirical support to the recent argument for acknowledging the continued presence of boundaries in contemporary careers (Inkson et al., 2012). In addition, our study has revealed that SIRs' career activities are not only guided by their personal career preferences, but are also embedded in and hence continuously shaped by social, cultural and economic contexts. We have suggested that contextual factors in the home country can exert great influence on the career experience of Chinese SIRs and may present challenges to their career experiences after repatriation. From this perspective, our study adds value to the current career literature by showing that repatriation is not only impacted by individual free will but also by contextual and social influences.\nWhy and how Chinese returnees engage in self-initiated repatriation\nConclusion: Chinese SIRs or \"sea turtles\" are just one group of many that are initiating their own career movements across borders throughout the course of their careers, and many regional governments around the world are trying to make repatriation attractive to citizens living and working abroad. SIR does not just involve a physical change in job and location, but also multiple complex psychological processes as the returnees readjust to the economic and cultural realities of their home country. 
As this study shows, careers are shaped through individual drive, as well as economic, cultural and organizational boundaries. As the returnees experience the challenges and opportunities of building their careers in China, the actions and career paths they take, in turn, are shaping a new China, forming and reforming new boundaries that could ultimately impact all of us who live within the global economy.\n",
"role": "user"
},
{
"content": "Purpose: - Drawing on career and self-initiated expatriation/repatriation literatures, this paper aims to examine the career experiences of Chinese self-initiated repatriates after their return to China.\nDesign/methodology/approach: - The authors conducted an exploratory, qualitative study involving in-depth interviews with 20 Chinese individuals who returned to China after spending at least three years living, studying and/or working in a range of \"host\" countries.\nFindings: - This study shows that the career agency of Chinese returnees reflects both independent and interdependent factors. It provides specific empirical support for Tams and Arthur's argument that career agency is impacted by both individual and contextual factors.\nResearch limitations/implications: - The findings indicate the central role played by individual proactivity and contextual influences during self-initiated repatriation. The small sample size allows for rich data, but limits the generalizability of the findings.\nPractical implications: - Managerial practices that address the unique career values and expectations of self-initiated repatriates can facilitate the application of skills and knowledge acquired abroad to the local context. Policy makers should provide more institutional support to encourage and facilitate the return of overseas Chinese.\nOriginality/value: - This study is among only a small number to explore the experiences of self-initiated repatriates in developing countries. Recent research has addressed the importance of recognizing and identifying the boundaries that constrain and enable global careers. This study identifies a number of such boundaries and also adds to the understanding of the challenges and difficulties of repatriation.\n",
"role": "assistant"
}
] |
Workplace spirituality, employee wellbeing and intention to stay: A multi-group analysis of teachers' career choice
|
[
"Teachers",
"Workplace spirituality",
"Higher education sector",
"Career choice",
"Intention to stay",
"Well-being at work"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Ever since the Academy of Management launched the special interest group \"Management, Spirituality and Religion\" in 2000, the concept of workplace spirituality has gained the attention of both academicians and practitioners as a salient area of inquiry. Workplace spirituality is essentially an employee's experience of spirituality in the context of the workplace. It is conceptualized at three levels: having a deep sense of meaning and purpose in one's work, experiencing connectedness with co-workers and experiencing a strong sense of alignment between one's personal values and their organization's mission and purpose (Milliman et al., 2003). Experience of spirituality at the workplace has been found to be positively associated with employee work attitudes such as job satisfaction, organizational commitment, OCB and organizational performance (Rego and Cunha, 2008; Crawford et al., 2009; Petchsawang and Duchon, 2012; Houghton et al., 2016). In recent times, employee well-being has emerged as one of the greatest challenges faced by managers (Boddy, 2014) and they are inquiring into strategies aimed at improving the same. Experience of more positive emotions at work has significant influences on both individual outcomes and organizational performance and productivity (Illies et al., 2015). In line with the theory of job demands-resources model (Bakker and Demerouti, 2017, 2014; Schaufeli and Taris, 2014), we conceptualize that workplace spirituality is a psychological job resource that will help employees in realizing enhanced well-being at work. Also, drawing from the Broaden and Build theory of Positive Emotions (Fredrickson, 2013), we propose that when employees experience positive emotions at work, higher will be their intention to stay with the organization. 
Research in these areas is still emerging, and our study attempts to fill this gap in the literature and examines the role of workplace spirituality in influencing well-being at work and intention to stay with the organization.\n2. Literature review: 2.1 Workplace spirituality\n3. Theoretical framework and hypothesis development: One of the basic premises of the Job Demands-Resources model (Bakker et al., 2014; Demerouti et al., 2001) - employed in this study - is that job resources are critical for employee well-being. Job resources are those physical, psychological, social, or organizational aspects of the job that help the person to cope with job demands, increase learning and development as an employee and are useful in accomplishing work-related goals (Bakker et al., 2014; Demerouti et al., 2001). Numerous studies have examined how different job characteristics like supervisory support, autonomy, feedback and task significance act as resources, resulting in improved job outcomes (Crawford et al., 2010). Studies have identified that centrality of work is pivotal to an individual's well-being (Burack, 1999; Krishnakumar and Neck, 2002; Warr, 2003). Employees experience meaningful work through work designs that facilitate enjoyment and purpose in what they do at the workplace. Managers help employees experience meaningfulness by providing the challenges, clarity and authenticity associated with each task, so that employees will enjoy their work and find it connected to the larger good of society. The dimension of meaningful work in workplace spirituality is thus conceptualized as a job resource, as the same is facilitated by managers in the work environment. Sense of community is the togetherness experienced by co-workers, and managers play a great role in facilitating a work climate that nurtures and nourishes team spirit and good personal relationships. 
Managerial practices aimed at improving the social and organizational context have been shown to add value to employee well-being at work (Bakke, 2005; Tehrani et al., 2007). Sense of community is thus conceptualized as an organizationally determined job characteristic and thus a job resource. Accordingly, workplace spirituality is conceptualized as a job resource that will help employees in realizing enhanced well-being at work, and we put forward the following hypothesis:\n4. Research methodology: 4.1 Overview of sample and procedures\n5. Results: Table I reveals the means, standard deviations, and correlations between the different constructs under focus. The mean values were 4.22 (meaningful work), 3.38 (sense of community), 3.52 (alignment with organizational values), 3.66 (positive affect), 2.27 (negative affect) and 3.54 (intention to stay). All variables under the focus of study had significant positive correlations with each other (Table I).\n6. Discussion: In accordance with Mitroff and Denton's (1999) initial assessment of workplace spirituality as a competitive advantage for organizations, the findings of our study document the significant role of workplace spirituality in fostering well-being at work. Consistent with the job demands-resources model (Bakker et al., 2014; Demerouti et al., 2001) and our hypothesized theoretical framework, meaningful work and sense of community were found to be positively related to well-being at work, whereas the hypothesized relationship between alignment with organizational values and well-being was not supported. In line with existing literature (Amabile and Kramer, 2012; Arnold et al., 2007; Meyers and Van Woerkom, 2014), our study emphasized the strong association between meaningful work and well-being at work (H1a supported). 
Experiences of meaningful work, positive emotions and stronger intrinsic motivation at work are strong driving forces that influence employee attitudes and outcomes, which in turn act as a vital source of competitive advantage for organizations (Steger et al., 2012).\n7. Theoretical and practical implications: The findings of our study are illuminative with respect to many theoretical and practical implications. The validation of the workplace spirituality-employee outcomes framework delivers a significant contribution to organizational literature, in its efforts to understand the role of workplace spirituality in organizational settings. Designing a work environment where employees have the opportunity to experience meaning and purpose in their work, sense of community through positive relationships, and alignment of personal and organizational values may be one of the most important managerial tasks of the twenty-first century. Among the dimensions studied, meaningful work had the utmost influence on well-being at work. This finding implies that utmost care should be taken while designing jobs, so as to enable employees to experience meaning and purpose in what they do at the workplace. Promotion of an organizational climate that nurtures interpersonal bonding among department members would help build greater well-being and intention to stay. The present work also signifies that managerial efforts should be focused toward making the workplace a community that enriches employees' social and psychological needs for inclusion and social affiliation. Socialization programs that are designed to foster employee adjustment in the context of the workplace will prove fruitful for both the individuals and the organization alike. 
Since alignment with organizational values did not emerge as a predictor of well-being at work, additional efforts and interventions, such as instilling greater person-environment fit and spiritual leadership styles, need to be taken up by managers so as to translate alignment with organizational values into higher well-being amongst their employees. The focus should be on developing personnel programs, policies and procedures that are responsive to the disparate values of employees, thus enabling their experience of alignment with the organization.\n8. Limitations and recommendations for future research: Our study was conducted among teachers in the private sector, and it is recommended for future researchers to consider testing the conceptual model among a larger and more representative sample of teachers, inclusive of teachers in public sector institutions. Also, the role of various demographic factors in influencing the relationships was not taken into consideration, and future studies may investigate the role of gender, age group, years of teaching, etc., as moderating factors that could further explain intention to stay. Career choice was measured using a single item, which could pose potential limits to generalizability. Studies should attempt to identify the possible moderating and mediating influence of organizational variables in the relationship between alignment with organizational values and well-being at work. Finally, it is recommended for future studies to look into organizational antecedents that would stimulate the experience of workplace spirituality and well-being at work.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the influence of workplace spirituality on employee well-being and intention to stay with the organization among teachers in technical higher education institutions. Furthermore, the study endeavors to test the difference in model estimates across two groups of teachers who differ in their intentional career choice.\nDesign/methodology/approach: This descriptive study was conducted amongst a sample of 523 teachers working in technical educational institutions in India. Self-reporting questionnaires were administered among the respondents, who were selected through purposive sampling method. Structural equation modeling and multi-group analysis were done to test the hypotheses.\nFindings: Analysis revealed that workplace spirituality enhanced employee well-being and intention to stay. Differences were observed on the effects of different dimensions of workplace spirituality on job outcomes. Also, teachers' intentional career choice was found to moderate these relationships.\nOriginality/value: This study is pioneering in conceptualizing and testing a theoretical model linking workplace spirituality, employee well-being and intention to stay, particularly in the context of teachers who differ in their intentional career choice. Implications with regard to the experience of workplace spirituality and job outcomes in the specific context of teaching are elaborated, thus striving to fill a gap in existing literature.\n",
"role": "assistant"
}
] |
Emerging evaluation processes in consortium-based outsourced system development
|
[
"IS evaluation",
"Evaluation process",
"Multi‐organizational IS development",
"Consortium‐based system development",
"Outsourced system development",
"Project evaluation",
"Universities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Most organizations are changing their information system (IS) development practises from greenfield development into different procurement and outsourcing practises. At the same time the IS function has shrunk in size. Consequently, the organizations may not have enough resources to establish comprehensive practices for managing and controlling their IS investments. As the hype around the IS investments as growth generators is gone, business managers call for return on IS investments as well. IS investments have their own special characteristics and, therefore, they are not that easy to evaluate as traditional investments (Smithson and Hirschheim, 1998). The control and evaluation of information technology investments is of great theoretical and practical importance.One way of reducing the costs and risks of development has been to organize development through developing common ISs. However, multi-organizational co-operation is challenging with complex inter-organizational relationships (Ackermann et al., 2005; Huxham, 1991; Martinsuo and Lehtonen, 2007). Outsourcing relationships can involve one or several clients and one or several vendors, each of these settings having their own special characteristics (Gallivan, 1999). In this paper, we investigate a case where a consortium of universities develops a common IS and the actual system development is outsourced to multiple vendors. Our aim is to gain better understanding of the problems of evaluating and controlling a development project in this kind of consortium setting with multiple stakeholders in different roles and with varying agendas. Previous research has acknowledged that to capture the social and organizational aspects of evaluation it should be seen as a process (Farbey et al., 1999). 
The process of evaluation was chosen as the initial focus of our case analysis.\nOur case is a consortium of universities that procures a student registrar system, called Oodi, from several private vendors through a public bidding process (Baskerville, 1999). The system development has been going on for more than ten years; thus, the case provides us with an excellent opportunity to study the evolution of the evaluation process. We used an action case study approach to investigate the development of the evaluation practices in the case organization. As the researchers are part of the case organization, and their involvement in the process can be seen as critical and thus affecting the evaluation process, our research approach can be characterized as action case (Braa and Vidgen, 1999). As an a priori construct we applied the evolutionary social process model of Newman and Robey (1992) to identify changes in the evaluation process.\nWe identified the critical events during the project that have influenced the evaluation process. Furthermore, we described the evolution of the control and evaluation of the system development effort in detail. Based on the empirical data, we identified new concepts for the modes of IS evaluation: reactive, transitional, and proactive. These new concepts were applied in analyzing the case. The case project started with a clearly reactive mode of evaluation. However, during the course of the project the evaluation processes evolved, becoming more proactive by nature. This change did not happen overnight, but gradually, through a series of encounters. Although our results are based on analyzing a single consortium-based IS development project, the concepts presented in the study could be applicable to other IS development projects and their evaluation in similar organizational settings. 
The knowledge about the suitability of different modes of evaluation for different contexts and development situations provides practical guidance for IS development evaluators.\nThe paper is organized as follows: in the next section, we discuss the nature of the IS evaluation process in organizations. It is followed by the introduction of the case and the research methodology. After that, the case is analyzed through our research model. In the last section the findings are discussed and future research directions are outlined.\n2 Evaluation process of ISs: Earlier literature suggests that evaluation should be conducted frequently over the system life cycle (Willcocks and Lester, 1993). Using this process-oriented view of evaluation, a comprehensive definition of IS evaluation is provided by Farbey et al. (1999): \"A process, or group of parallel processes, which take place at different points in time or continuously, for searching and for making explicit, quantitatively or qualitatively, all the impacts of an IT project and the programme and strategy of which it is a part.\"\nFurthermore, IS evaluation can occur on different levels of aggregation (Wehrs, 1999). Farbey et al. (1992) presented a detailed description of the objectives of evaluation. They maintained that evaluation can serve four different objectives: 1. evaluation may be used as a part of the process of justification of a system; 2. evaluation enables an organization to make comparisons between different projects competing for resources; 3. evaluation provides a set of measures which enable the organization to exercise control over the project; and 4. 
evaluation and the subsequent measurement and comparison with actual achievements provide the learning experience which is necessary if the organization is to improve its system evaluation and system building capability.\nThere are different stakeholders inside each organization that might have different views of system development or system features (Farbey et al., 1999; McAulay et al., 2002; Serafeimidis and Smithson, 2000; Ward et al., 1996). The stakeholder dimension has implications also for the evaluation of IS success (Larsen and Myers, 1999). Serafeimidis and Smithson (2000) emphasized the importance of the variety of background, skills and interests of the different stakeholders in the IS evaluation context. Farbey et al. (1999) found that field research should note and analyze different stakeholders and their interests and relationships within the project, but also the relationships outside the context of the system project.\nOn the one hand, evaluation is a way to practice control in the organizational domain (Farbey et al., 1992). On the other hand, control can be seen as a process of measurement and evaluation (Eisenhardt, 1985). As control is a means to exercise power in the organizational context (Choudhury and Sabherwal, 2003; Kirsch, 1996; Tiwana, 2008), it can be distinguished from evaluation. Evaluation itself is not a means to exercise control, whereas evaluation results (if there is an evaluation process in place) can be used to exercise control. Hence, for the purposes of our study we see evaluation as a dynamic process that develops through a series of events through the whole life of the IS (Farbey et al., 1999). We see that the evaluation process evolves during the system development process and learning is a genuine part of the process as well (Nonaka, 1991). 
Evaluation issues may not even be recognized at the beginning of the system development project but they evolve and are emergent (Farbey et al., 1992).\nRemenyi and Sherwood-Smith (1999) categorized evaluation as summative and formative and proposed that the iterative formative approach would provide continuous feedback over the system development process on the system's influence on the organization. Summative evaluations are defined as \"evaluations made as part of the feasibility study and at the end of the project to support the strategic decisions about the validity of the information systems development\". Formative evaluation is defined as \"an iterative evaluation and decision-making process continually influencing decisions about the information systems development process and the resulting information system\" (Remenyi and Sherwood-Smith, 1999). However, they note that moving to this kind of participative continuous evaluation process is not an easy task.\n3 Case description and research methodology: 3.1 Case description (Oodi)\n4 The critical episodes and encounters in the system development project: In this section, we view our case through the social process model of Newman and Robey (1992). We describe the critical episodes and encounters during the system development process and analyze their implications for the evaluation process. We aim to capture the triggers (who, why, when) that led development in a certain direction.\n4.1 Antecedent conditions\n5 Discussion and conclusions: In this section, we synthesize the findings of our study. The evaluation mode of the organization changed rather slowly, through a series of encounters and episodes, from the reactive mode, through the transitional mode, to the proactive mode as shown in Figure 4.\nProactive and reactive evaluation modes do not refer to the evaluation methods used per se, but rather to the dominant way of conducting evaluation and to the \"spirit\" of evaluation.\n5.1 Theoretical findings\n",
"role": "user"
},
{
"content": "Purpose: - In large system development efforts organizational and managerial issues are often more challenging than technical ones. One of the key concerns of managers is the control and evaluation of the overall development effort. The purpose of this paper was to analyze the evaluation of system development process in a setting where there are multiple stakeholders in multiple organizations that develop a common information system (IS). The case is a consortium of universities that has developed a common student registrar system over a period of more than ten years.\nDesign/methodology/approach: - The authors used a social process model to analyze the episodes and encounters in the system development and evaluation process. The research approach can be characterized as action case.\nFindings: - The authors found that in the early stages of system development the evaluation was emergent or even non-existent. As the system was developed further and issues, such as delays and missing features, arose, there was a need for better control of the process. Thus, the evaluation process evolved through a series of critical encounters into a more proactive one.\nResearch limitations/implications: - The authors studied only one case so broad generalizations directly from the case cannot be made. However, the results could hold true for similar settings. The authors believe that the results are interesting to both researchers and practitioners since in outsourcing relationships the consortium kind of arrangement is becoming more common.\nOriginality/value: - The authors analyze the evolution of IS evaluation processes in a setting with multiple client and vendor organizations and reveal the complexities of system evaluation in such settings. Moreover, they introduce three new concepts: reactive, transitional and proactive evaluation processes. 
These concepts can help researchers and practitioners to better understand the nature of the IS evaluation processes and to possibly choose an evaluation approach that has a better \"fit\" with the development situation at hand.\n",
"role": "assistant"
}
] |
To assess and be assessed: Upper secondary school students' narratives of credibility judgements
|
[
"Information literacy",
"Learning",
"Students",
"Schools"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: How can educational activities be organised in order to facilitate students' learning of and learning from information seeking and use? This constantly ongoing and relevant discussion is central to librarians and teachers, as well as to scholars in the field of information literacy and adjacent fields. In the research literature on information literacy, a common theme concerns how information seeking and information use can be turned into objects of teaching (Limberg and Alexandersson, 2010). In school settings, learning to participate in information activities is not tied to one particular subject. Thus, the learning from and of information seeking and use run the risk of being taken for granted and not taught explicitly, or, if made into an object of teaching, become decontextualised and separated from other school activities.\n2. Information activities in school settings: In our empirical case, information activities, and particularly credibility judgements, are made into objects of teaching. As objects of research, credibility judgements have been studied and discussed for several decades, not least in terms of \"relevance\" in the fields of information retrieval and information behaviour (Rieh, 2010, p. 1338). In recent years, credibility assessments, and related notions such as trust, expertise, and authority, have gained increased attention, especially in relation to the internet where traditional cues used for judging credibility, such as identifiable or well-known authors and publishers, become insufficient and sometimes even obsolete (e.g. Gasser et al., 2012; Lankes, 2008; Rieh et al., 2010; Sundin and Francke, 2009).\n3. 
Theoretical points of departure: Our study of students' descriptions of their credibility judgements in a Swedish upper secondary school setting is informed by a neo-Vygotskian sociocultural perspective of learning, particularly the work by James Wertsch (Wertsch, 1998, 2000; Erstad and Wertsch, 2008) and Roger Säljö (2000, 2005). This theoretical perspective, which highlights social and discursive aspects of learning, is relevant as we are interested in how the group of students learns to participate in the social activity of working with an assignment, and also how they negotiate this activity when interacting through their blogs.\n4. Aim and research questions: In this study, we are interested in some of the consequences when information activities, and particularly credibility assessments, become objects of teaching and learning in an upper secondary school setting. The aim of the study is to explore how students, through their blog posts, construct narratives of themselves as information seekers in a school context where their descriptions of their information activities are assessed and graded. Our analysis is guided by the following three questions:\n5. Methods and empirical setting: The blog posts analysed in this study derive from a larger multi-method study of online and offline activities taking place during group work in two classes in two different Swedish upper secondary schools. Thus, our understanding of the blog posts, which will be described in detail below, was enhanced by an understanding of the context in which they were produced.\n6. The construction of narratives: The blogs introduced a means for communication that had not been used by the class for previous assignments. To a large extent, the blogs contain accounts of communicative activities which have happened elsewhere: in classrooms, in libraries, and other places where the students had been working with the assignment. 
But spaces for communication and interaction were also created by the blogs. The work on the assignment took place, in part, through the interplay between the students' blog posts and the readers' comments. In this sense, one could say that the audience co-constructed the students' narratives. In the following, this interplay and co-construction will be analysed and described.\n7. The narratives of selves: Above, the students' texts have been analysed in terms of a process of mastering or appropriating a language valued in connection with information seeking and credibility assessment in Swedish upper secondary schools. In the blogs, the students were given the possibility of redefining their information activities, and their information seeking selves (see Erstad and Wertsch, 2008, p. 34), through a language that appears appropriate in relation to their assignment. A basic premise for narratives is the use of structures that link actions and events in logical order. By using the blogs, the students are encouraged to create coherent and rational accounts of their own and their groups' information activities in relation to how they understand the criteria for the assignment.\n8. Information activities as objects of assessment: In this paper, an analysis is conducted of the interaction taking place between students and educators when credibility judgements become an object of teaching, and moreover, an object of assessment. The analysis suggests that the students are facing a demanding task, not least since their assignment has been constructed so as not to have pre-defined and correct answers. As their work with the assignment is progressing, the students use the interactive function of the blog to get feedback on and negotiate the expectations placed on them.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore how students construct narratives of themselves as information seekers in a school context where their descriptions of their information activities are assessed and graded.\nDesign/methodology/approach: Blog posts on credibility judgements written by 28 students at a Swedish upper secondary school were analysed through a bottom-up coding process based in the sociocultural concept of narratives of selves.\nFindings: Two tensions in the students' accounts are identified. The first tension is that between the description of the individual, independent student and the description of the good group member. The second tension is between describing oneself as an independent information seeker and at the same time as someone who seeks information in ways that are sanctioned within the school setting.\nResearch limitations/implications: The study focuses on a specific social practice and on situated activities, but also illustrates some aspects of information activities that pertain to educational contexts in general. It explores how social norms related to credibility judgements are expressed and negotiated in discursive interaction.\nPractical implications: The study highlights that when information activities become objects of assessment, careful consideration of what aspects are meant to be assessed is necessary.\nOriginality/value: The study is based on the idea of information activities as socially and discursively shaped, and it illustrates some of the consequences when information activities become objects of teaching, learning, and grading.\n",
"role": "assistant"
}
] |
Exploring jealousy and envy in communal relationship revenge-seeking
|
[
"Communal relationships",
"Revenge-seeking",
"Betrayal",
"Envy",
"Communal relationship",
"Jealousy"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Ugh. Why should I be punished for being a loyal [...] customer for 4 years by paying so much more?\"\nIntroduction: While firms often benefit from communal relationships that lead to customer-to-customer (C2C) brand advocacy (de Villiers, 2015), firms also cede some control over the flow and content of information exchanged between customers (Colliander et al., 2016), potentially resulting in negative perceptions of the firm (Adjei et al., 2016). Jealousy can arise when a customer observes an oft-patronized and beloved firm - a communal relational party - directing preferential attention, such as pricing, shipping or special treatment, toward another customer (Ludwig et al., 2017). When C2C comparisons expose actions that are deemed unequal, the perceived deviation from relationship norms can lead to customer perceptions of firm betrayal (Gregoire et al., 2010). By understanding the C2C comparisons that result when customers perceive differences between how they are treated versus others, and the role of equity in retaliation after such a perceived slight, companies can better manage the negative response associated with betrayal-motivated revenge-seeking.\nTheory and hypothesis development: Equity theory\nMethod and results: Measures\nDiscussions and implications: This research contributes to the growing literature on envy, jealousy and betrayal in communal relationships by establishing the role of the constructs in driving the connection between unfairness and revenge behaviors. This study provides insights into the paths that communal customers can take in seeking to tip the scales back to perceived equity after C2C comparisons that uncover inequitable treatment.\nLimitations and future research: A student sample was used in the pretest of the survey items, and an M-Turk sample was used to collect data in the primary study. 
While care was taken to provide a scenario appropriate for these samples, and the methods used best practice for both the student sample (Yoo and Donthu, 2001) and the M-Turk sample (Hulland and Miller, 2018; Hulland et al., 2018), future research might replicate these findings using a different sampling approach. Moreover, the study used a survey research methodology that focused on one scenario to determine the different paths to retaliation for communal customers. The large size of the structural model neared saturation as additional paths were added. Future studies should use experimental design to isolate the variables and treatments further. As jealousy and envy represent two distinct constructs, this experimental approach may explore the effects of envy on the development of jealousy and vice versa in relationships. Additionally, the scenario focused on C2C comparisons after a Christmas purchase. Additional scenarios could be used to triangulate the findings and further differentiate the various customer shopping situations and relationship types that emerge beyond communal customers through more exchange-based means. In addition, distributional proximity to the consumer should be explored in future research to determine if and when envy and jealousy arise when irregularities in shipments are expected.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to identify the ways that customers respond to customer-to-customer comparisons that may drive loyal customers to engage in undesirable behaviors. The research examines the role that jealousy and envy play in restoring equity through revenge-seeking intentions.\nDesign/methodology/approach: The study uses survey research methodology. The measurement model is validated using CFA, and hypotheses are tested using structural equation modeling. The mediated relationships are calculated using the bootstrap method, and moderated mediation is calculated by creating estimands to test the effects.\nFindings: Customers who feel either jealousy or envy may engage in revenge-seeking behaviors, such as vindictive complaining and negative electronic word-of-mouth (eWOM). Customers who perceive that a firm is unfairly favoring other customers develop feelings of jealousy and betrayal, and this tendency is strengthened when the customer has a high level of prior trust. Customers typically develop envy when their attention and perceptions of inequity center on another customer, rather than on the firm's actions and anger drives this effect.\nPractical implications: Customers can pursue revenge-seeking actions when unfair actions influence the formation of envy and jealousy through the development of perceived betrayal. Companies can focus on the comparisons that customers make to address revenge-seeking and better manage online relationships preemptively.\nOriginality/value: The paths that customers take to revenge through jealousy and envy are conceptualized in a communal relationship setting and further developed. Further distinctions of jealousy and envy are made, and the role of prior trust in enhancing revenge-seeking is found.\n",
"role": "assistant"
}
] |
Performance management in Canadian public organizations: findings of a multi-case study
|
[
"Performance measurement",
"Contextual factors",
"Success factors",
"Public sector organizations",
"Challenges and barriers",
"Performance management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Despite the widespread use of performance management (PM) in public sector organizations worldwide, there has been criticism of its efficacy in fostering performance improvement. In a recent review of 30 years of public sector PM literature, Jackson (2011) suggests that we still do not have clear answers to important questions like does performance measurement result in better decisions and improved outcomes? In a similar vein, Sanger (2008), in a study of local and state governments in the USA, suggests that there are still obstacles to the effective implementation of PM such as suppressing and manipulating negative data leading to the perception that there is a lack of transparency in the public reporting of performance. This is not surprising as increasing pressures for accountability are forcing public sector managers to gather metrics to justify their programs. However, as Thomas (2007) has observed, there are few incentives to recognize problems, learn from mistakes, and experiment with new strategies for improvement. A study by Radnor and McGuire (2004) illustrates this problem in the UK. The authors concluded that despite the implementation of a PM system, including balanced scorecards, there was lack of ownership and accountability for the system and most managers were \"working the system\" to meet externally imposed requirements for reporting on performance.\nBackground literature: Since 1995, the Canadian Government has required federal departments and agencies to develop strategic objectives and goals and to develop plans to measure and report on results. On an annual basis, departments and agencies are obliged to provide a Report on Plans and Priorities (RPP) with estimates and justification of their spending plans. 
In addition, they must later provide Departmental Performance Reports, which focus on performance measures, both financial and non-financial, as they relate to the commitments made in the RPPs. The Canada Treasury Board Secretariat (TBS) (2007) is the central agency that oversees this process; it provides direction to departments and agencies through policy directives and guidelines. This PM process is also practised at the provincial and municipal levels in those cases where provincial departments and municipalities have to provide annual performance reports. Again, legislation has provided guidelines on the reporting standards for these performance reports (Schatteman, 2010). There is, therefore, a significant amount of PM activity routinely performed in Canadian public sector organizations. This is no surprise, as the Canadian public sector is not immune to the \"New Public Management\" focus of governments and the growing concern with accountability and results-based management (Kaboolian, 2009; Borins, 1995).\nScope and purpose of study: The objective of this study is to examine in more depth how public sector organizations in Canada are implementing PM. As discussed in the literature review, previous research on public sector PM in Canada has been mostly at the municipal levels of government. These studies at the municipal level have been mostly quantitative surveys of senior public sector managers and/or politicians (Schatteman, 2010; Pollanen, 2005), and single case studies or observations of PM practices. The focus has been through the lens of performance measurement experts, and/or organizational decision-makers (drivers of such practices in the organization), mostly at high levels (McDavid and Huse, 2011). 
In our view, research on this issue should focus on all levels of the organization, including the employees who have to develop, gather and report on performance measures related to their work.\nMethodology: Sample\nFindings: The findings of our cross-case analysis will be presented in three parts. The first will discuss some of the perceived challenges and barriers that interview participants articulated. Second, a number of success factors are identified from the data, suggesting positive outcomes that these organizations have experienced through implementing PM in their organizations. Third, we will discuss some contextual factors identified in the research.\nDiscussion and practical implications: The findings of this study suggest that public sector organizations continue to face significant challenges and barriers to implementing PM in their organizations. But despite these challenges, the findings have also identified some success factors that can improve the effective implementation of PM. Furthermore, an important implication that emerged from this study is the issue of context as an important factor to be considered in understanding PM in the public sector. Previous empirical research and discussions have focussed on only finding a universal set of effective PM practices in comparing private vs public sector organizations. This approach ignores the \"within-context\" issues in public sector organizations which may be just as important.\nConclusions and future research: Our study of five Canadian public sector organizations did not find a consistently strong message that PM is meant to improve performance. Part of the problem is the desire for central agencies to demand performance reports from departments as a routine activity across all departments on a regular annual cycle. This ignores the different role of each department in delivering very different services or in monitoring and ensuring compliance. 
The required uniformity and enforced templates for performance measurement and reporting is a root cause of the problem. The contextual analysis of this study suggests that governments need to take a more context-sensitive approach when imposing PM frameworks on public sector organizations. It is critical that they are cognizant that contextual factors such as size and geographic dispersion of a public sector organization and the complexity of their operating environment can mitigate the success of an imposed rigid one size fits all PM framework. However, this bureaucratic mentality may be hard to change (Behn, 2003; Radin, 2006).\n",
"role": "user"
},
{
"content": "Purpose: Performance management (PM) is now clearly a well-established practice in public sector organizations. However, increasingly scholars have been questioning its efficacy in improving organizational performance. Research has shown that the presumed benefits remain questionable and that there are many barriers, challenges and problems in implementing PM. The purpose of this paper is to report and discuss the findings of a multi-case study that examines in more depth how five Canadian public sector organizations are implementing PM.\nDesign/methodology/approach: A qualitative multi-case study approach was used in this study. Structured interviews were carried out in each public sector organization and the interview data were analyzed using NVivo8. Individual case profiles were also written. A cross-case analysis was carried out using data from these five cases.\nFindings: The cross-case analysis of the data focussed on the major themes emerging from the data with respect to challenges and barriers, success factors, context and implications for practice for PM in public sector organizations. Three contextual factors are identified and discussed in explaining some of the findings. Conclusions are drawn for making PM more effective in achieving performance improvement in public sector organizations and future directions for research.\nResearch limitations/implications: The research findings and implications for practice are based on five Canadian public sector organizations so may limit its generalizability to public sector organizations in other countries.\nPractical implications: Some practical implications are discussed with respect to implementing PM more successfully in public sector organizations. 
This included the better integration of PM to corporate strategy, leadership in developing a positive PM culture and employee buy-in and commitment to the process.\nOriginality/value: This qualitative multi-case study of PM in Canadian public sector organizations has not previously been done. This approach allows for a more close-up look at PM in public sector organizations especially how it is implemented and the experiences of organizational members. The paper also presents new insights on context as an important variable in explaining the findings from the cross-case analysis and points to future new directions for research and in developing a contingency theory approach to PM.\n",
"role": "assistant"
}
] |