| title | keywords | messages |
|---|---|---|
Gender differences in salary in a female-dominated profession
|
[
"Pay",
"Pay differentials",
"Negotiating",
"Gender",
"Psychology",
"Universities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: During the last three decades, women have made progress in narrowing the gender pay gap, which is a statistical indicator of women's earnings as compared to that of men (AAUW, 2005; Brunner, 2007). A report from the Institute for Women's Policy Research (2009) describes much of this progress occurring during the 1980s, with growth slowing considerably after 1990. In 2008, the ratio of women's to men's median weekly earnings for full-time employees was 79.9-100.0, down slightly each year from a high of 81.0 for women in 2005. Despite this trend, the American Association of University Women (AAUW, 2007) found that just one year out of college, women working full time already earn less than their male colleagues. At this point, women earn approximately 95 percent of the salary of their male counterparts in the field of education. Ten years after college graduation, the pay gap between men and women widens, particularly in educational occupations, with women earning 13 percent less than their male colleagues (AAUW, 2007).Factors explaining gender-based differences in pay\nMethod: Study 1\nResults: Study 1\nDiscussion: Despite numerous advancements in education, training and degree status, women performing either or both the practice and science roles in the field of school psychology earn less money than similarly qualified men. That is, whether in an academic (e.g. university) or practitioner (e.g. primary or secondary school) setting, women consistently receive less compensation for comparable work activities. This study investigated the salary differences for men and women school psychology trainers and practitioners. Furthermore, it identified the negotiation practices, degree of job satisfaction and the impact of gender on these variables across employment settings. Results show that salary equity in the workplace remains a challenge for women in school psychology.Differences in salary earned by males and females\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to examine the salary and promotion negotiation practices of female and male school psychology practitioners and university instructors of school psychology practitioners in order to determine whether salary differences exist between male and female employees in the field of school psychology, which has become a female-dominated profession.\nDesign/methodology/approach: - A total of 191 female and 115 male faculty members and 148 female and 56 male school psychologists completed a survey regarding salary, negotiation practices, and job satisfaction.\nFindings: - Results suggest that females earn less than male colleagues, controlling for years of experience and degree attainment. No gender differences were found regarding faculty participants' willingness to negotiate for increased salary; however, males were more likely to negotiate for promotion. Likewise, no gender differences were evident in practitioners' salary and promotion negotiation attempts, although none were expected, given the salary schedule constraints unique to occupations in the field of education.\nResearch limitations/implications: - The paper is limited to one profession, albeit both university faculty and school psychology practitioners, and was conducted in the USA, so the findings may have limited generalizability to other professions and/or in other countries.\nPractical implications: - The paper demonstrates that gender pay differences exist despite no differences in males' and females' willingness to negotiate for salary. Consequently, it is likely that pay differences between men and women are due to reasons other than individuals' education levels, years in position, and negotiation practices.\nOriginality/value: - This is the first paper that tracks salaries and the negotiating practices of school psychologist trainers and practitioners. It also finds that male/female salary differences carry over into a female-dominated profession.\n",
"role": "assistant"
}
] |
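The abstract above turns on the phrase "controlling for years of experience and degree attainment". As a minimal illustration of what such a control means in practice, the sketch below fits an OLS regression of salary on a gender indicator plus the control covariates; the data file and column names (salary, female, years_exp, degree_level) are hypothetical, and this is not the authors' own analysis code.

```python
# Minimal sketch of "females earn less, controlling for experience and degree
# attainment": OLS with a gender indicator plus control covariates.
# File and column names are hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical survey export

# C(degree_level) enters degree attainment as categorical dummies;
# the coefficient on 'female' is then the adjusted pay gap.
model = smf.ols("salary ~ female + years_exp + C(degree_level)", data=df).fit()
print(model.summary())
```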
Intercultural household food tensions: a relational dialectics analysis
|
[
"Food consumption",
"Intercultural families",
"Relational dialectics"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Significant shifts in global migration over recent decades have led to the increased formation of intercultural families, forged through the blending of cultures, and resulting in new family formations facing challenges as they settle in their cultural context (Lindridge et al., 2016; Castles and Miller, 2009). Around 3.3 per cent of the world's population is settled outside their country of birth, and in many Western countries, the figure is 14 per cent and rising (United Nations, 2017). Several studies have indicated that immigration, globalisation and acculturation are important macro-factors influencing the likelihood of meeting, interacting with and marrying someone from another culture (Berry, 1997; McFadden and Moore, 2001; Qian and Lichter, 2007). Thus, it has become increasingly appropriate to speak of transnational or intercultural couples and families. There have been several calls for researchers to account for the dynamic changes in culture that result from the increasingly frequent and complex interactions between the people, products and practices from several cultures (Luedicke, 2015, 2011; Yaprak, 2008). Yet, as a cultural and social phenomenon, intercultural families represent an under-researched family or household form (Bystydzienski, 2011; Cross and Gilly, 2013).\n2. Literature review: 2.1 Relational dialectics\n3. Context and methods: The study context is Ireland, a distinctive regional space of migration and mobility, with recent data revealing that almost 12 per cent of the population was born outside Ireland (compared to 6 per cent in 2002). The largest group of \"East-West (European) free movers\" (Favell, 2008) in the immigration boom since 2004 originates from Poland. In this research, we focus on Polish-Irish families, providing a two-sided empirical account of the relational culture of intercultural families.\n4. Results: Our analysis identifies a range of relational dialectic tensions around food, illustrating the strategies used to address these tensions as they relate to wider relational development. Analysis is framed around key tensions related to food practices that our intercultural families face.\n5. Discussion: Our study explored a range of intercultural tensions, and how the family relational culture develops in the creation of family practices around food (Silva et al., 2012). The use of relational dialectics theory facilitates a close examination of intercultural dialectic tensions, ostensibly relating to food, but indicative of some tension in the relational culture, which is negotiated and managed within the intercultural household. This research is important as recent understandings of the family in consumer research have shifted towards considering families as comprising a unique relational culture, wherein consumption takes place (Lindridge et al., 2016; Cross and Gilly, 2014). Within the intercultural context, understanding of the unique relational culture is complicated by the broader cultural context, emphasising the importance of acculturation and enculturation theory (Penaloza, 1989; Davies and Fitchett, 2010) in this setting.\n",
"role": "user"
},
{
"content": "Purpose: Recent global migration trends have led to an increased prevalence, and new patterning, of intercultural family configurations. This paper is about intercultural couples and how they manage tensions associated with change as they settle in their new cultural context. The focus is specifically on the role food plays in navigating these tensions and the effects on the couples' relational cultures.\nDesign/methodology/approach: A qualitative relational-dialectic approach is taken for studying Polish-Irish intercultural couples. Engagement with relevant communities provided multiple points of access to informants.\nFindings: Intercultural tensions arise as the couples jointly transition, and food consumption represents implicit tensions in the household's relational culture. Such tensions are sometimes resolved, but sometimes not, leading to enduring tensions. Dialectical movement causes change, which has developmental consequences for the couples' relational cultures.\nResearch limitations/implications: This study shows how the ways that tensions are addressed are fundamental to the formation of a relational family identity.\nPractical implications: Recommendations emphasise the importance of understanding how the family relational culture develops in the creation of family food practices. Marketers can look at the ways of supporting the intercultural couple retain tradition, while smoothly navigating their new cultural context. Social policy analysts may reflect on the ways that the couples develop an intercultural identity rooted in each other's culture, and the range of strategies to demonstrate they can synthesise and successfully negotiate the challenges they face.\nOriginality/value: Dealing simultaneously and separately with a variety of dialectical oppositions around food, intercultural couples weave together elements from each other's cultures and simultaneously facilitate both relational and social change. Within the relationship, stability-change dialectic is experienced and negotiated, while at the relationship's nexus with the couple's social ecology, negotiating conventionality-uniqueness dialectic enables them reproduce or depart from societal conventions, and thus facilitate social change.\n",
"role": "assistant"
}
] |
Innovative employee behaviour: The moderating effects of mental involvement and job satisfaction on contextual variables
|
[
"Innovative work behaviour",
"Innovation management",
"Job satisfaction",
"Mental involvement and innovation trust",
"Denmark"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Innovation can be seen as an overall organizational learning orientation in which success is achieved by paying less attention to specific innovation projects and paying more attention to a general innovation orientation that produces innovative capabilities (Siguaw et al., 2006). Scholars agree that organizational innovativeness is a key to competitive advantages and strategic renewal (Zhang and Bartol, 2010). In this respect employees are important for organizational innovative capabilities as they are responsible for developing, reacting to and modifying ideas (Scott and Bruce, 1994), also known as innovative work behaviour (IWB). In the eyes of many employees IWB is seen as an extraordinary and risky behavioural effort (Lee, 2008). In order to become innovative, organizations must manage and foster an inner environment that supports the innovative behavioural traits among employees (Dobni, 2010; Alpkan et al., 2010). In this article the inner environment is composed of job autonomy and innovation trust. Job autonomy provides employees with the necessary decision latitude and empowerment for being innovative (Hennessey and Amabile, 2010; Shalley et al., 2000). At the same time, job autonomy increases the possibility of sub-optimal behaviour because of monitoring difficulties (Langfred, 2004). Innovation trust is important as this reduces the perceived riskiness and negative reactions caused by IWB (Unsworth and Clegg, 2010). An inner environment characterized by innovation trust will allow employees to more freely introduce new ideas, knowing that their co-workers will respond positively to this.Scholars have called for more research investigating contextual variables for IWB (Shih and Susanto, 2011; Janssen, 2003). This paper investigates two such variables by addressing the question: how does job satisfaction and mental involvement moderate the inner environment supporting IWB?The context of this study is the financial sector as this sector is forced to become more responsive to innovative demands, because technological advancements have caused an increasing digitalization in the financial sector. Traditionally, the banking sector has been characterized by a close relationship between the financial advisor and the client. This relationship is changing because of the increased use of, for example, e-banking and smart phone apps, which are both examples of the increased demand for innovation in the banking sector (Vermeulen and Dankbaar, 2002). The innovation pressure in the financial sector also arises because of increased competition and downsizing tendencies in the banking sector. Employees in the financial sector therefore have to develop new routines and respond to new demands to ensure organizational effectiveness and competitiveness.The contribution of the paper falls in two parts; first it addresses central aspects of dealing with the negative effects of IWB. Employees with a high degree of job satisfaction engaged in an environment characterized by innovation trust will demonstrate higher levels of IWB. Innovation trust allows employees to cope with the negative effects of innovation. In this respect, this paper addresses the concerns of increased turnover intentions and conflicts with co-workers, thus questioning the benefits of IWB (Shih and Susanto, 2011). 
From a managerial perspective it is important to create an environment characterized by innovation trust as this can help to align IWB and job satisfaction. Thus, under the right conditions, IWB can benefit both innovation and job satisfaction. The second contribution is found in relation to the effectiveness of job autonomy. Mental involvement is found to be a positive moderator for the relationship between job autonomy and IWB. Employees who are mentally involved in their jobs will be more effective and focused when they have the freedom to innovate. By looking at mental involvement, we can clarify under which constellations job autonomy promotes high performance and in which situations autonomy is less effective. Such insights have been identified as central for job performance in situations with low opportunity for monitoring employees (Langfred, 2004; Janssen, 2003). Managers should understand the relationship between mental involvement and job autonomy as drivers of innovative work performance. Managers can apply this insight in the recruitment process of new employees, as mental involvement ensures higher efficiency in situations with high job autonomy.The data are collected through a survey among employees in a pension, banking and insurance company in the Danish financial sector. It forms part of a larger employee satisfaction survey performed by Ennova A/S. The sample consists of 294 usable responses, which gives a response rate of 93 per cent.The paper is organized as follows: first, the theoretical background will be outlined and the conceptual framework developed. Second, the methodological considerations and choices will be addressed. Then the results will be presented and the hypothesis examined. This will be followed by a discussion and conclusion.\n2. Theory and conceptual model: The influence of job autonomy and innovation trust on IWB\n3. Methodology: Data and sample\n4. Analysis and results: The analysis was performed as a combination of a confirmatory factor analysis (CFA) and a hierarchical regression analysis including moderating effects (Baron and Kenny, 1986). A CFA analysis was conducted in order to test for discriminate validity. The respective fit measures in relation to the CFA analysis are (CFI 0.955; RMSEA 0.074). These measures show a good discriminate validity (Tabachnick and Fidell, 2007). The variance inflation factor also confirms that multicollinearity was not a problem (interval 1.02-2.35). The internal consistency of the scales was measured by Chronbach's a statistics showing an appropriate internal consistency (0.879-0.909) (Table II).As expected the results presented in Table III show that the inner environment of innovation has an effect on IWB. Further, the mental states included in this study also have a significant impact on the effectiveness of the inner environment. In the following section the results of Table III will be presented.Control effects\n5. Discussion and conclusion: IWB is argued to be central for innovation to emerge in the financial sector. The study further extends the research stream on IWB by focusing on two central negative aspects of IWB. The first problem is a decrease in efficiency because of monitoring problems induced by a high degree of autonomy. The second problem arises because of the negative consequences of IWB induced by a high level of IWB among employees.The influence of job autonomy and innovation trust on IWB\n",
"role": "user"
},
{
"content": "Purpose: - Organizational performance is positively affected by employees' innovative behaviour, but recent studies indicate that innovative work behaviour may have negative performance consequences. Negative tensions may arise due to employees' different views on innovation, and efficiency may be reduced because of high job autonomy in innovative job settings. This study aims to examine how job satisfaction and mental involvement via moderation effects increase the effectiveness of innovative work behaviour.\nDesign/methodology/approach: - A theoretical model was developed and tested on panel sample data representing 294 employees in a Danish financial company. The response rate on the survey was 93 per cent. The data was analysed via confirmatory factor analysis and hierarchical regression models.\nFindings: - This study shows that job satisfaction and innovative work behaviour can be aligned in environments characterized by innovation trust. Further mental involvement is identified as a component increasing innovative work behaviour in situations without the opportunity for monitoring.\nPractical implications: - From a managerial perspective: the alignment of job satisfaction and innovative behaviour is central, as this joint optimization affects organizational performance positively.\nOriginality/value: - This study reports one of a very limited number of studies focusing on the negative effects of innovative work behaviour. This study demonstrates that innovation trust aligns job satisfaction and innovative work behaviour, and further that mental involvement increases the efficiency of job autonomy.\n",
"role": "assistant"
}
] |
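The analysis described above combines CFA with hierarchical regression including moderating effects. A minimal sketch of the moderation step only, with hypothetical variable names and mean-centered predictors (the CFA stage and the paper's actual scales are omitted):

```python
# Sketch of the moderation logic: mental involvement moderating the
# job-autonomy -> IWB relationship, job satisfaction moderating innovation
# trust. Variable and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("iwb_survey.csv")  # hypothetical data file

# Mean-center predictors so the interaction terms are interpretable.
for col in ["job_autonomy", "mental_involvement",
            "innovation_trust", "job_satisfaction"]:
    df[col + "_c"] = df[col] - df[col].mean()

# Step 1: main effects only; Step 2: add the moderation terms.
step1 = smf.ols("iwb ~ job_autonomy_c + innovation_trust_c", data=df).fit()
step2 = smf.ols(
    "iwb ~ job_autonomy_c * mental_involvement_c"
    " + innovation_trust_c * job_satisfaction_c",
    data=df,
).fit()
print(step1.rsquared, step2.rsquared)  # R^2 change across the two steps
print(step2.params)                    # interaction coefficients
```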
The impact of web site structure on link analysis
|
[
"Information retrieval",
"Internet",
"Worldwide web"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Web mining tries to extract new knowledge from the web. Web mining can be used to monitor human information behavior and web pages on a large scale. So far, web-mining research within information retrieval has focused mainly on the link structure of the Internet. Links have become an important source of knowledge for information retrieval systems on the web. Search engines exploit the link structure of the web in order to improve search results. The number of links toward a web-page are considered to be an indicator for the quality of that page (Henzinger et al., 2002). This measure of quality or desirability is used in the ranking of the result documents of many web search engines.Link analysis has gained great importance despite the fact that the intentions for why and when authors of web pages set links to other pages are not yet fully understood. The reasons for a single link can be manifold and may be based on a variety of considerations, which may include quality, topic, and/or age of the target page. It is obvious, for example, that navigational links should not be interpreted as quality indicators. The reasons for humans to point to other pages from their own pages need to be further investigated.This study identifies structural aspects of sites, which exhibit strong correlations to the number of links to a page. That means, that web authors are more likely to set links to pages with these features. We will discuss the consequences for the design of link analysis algorithms. These algorithms take the number of in-links as one feature in the ranking of their search results. If the number of in-links are biased by the structure of a web site, the developers need to be aware of that. They need to ask whether this positive bias toward homepages is desired. For many navigational search tasks, this might be welcome. On the other hand, homepages are unlikely to be good results for informational search tasks. In this case, the bias should be neutralized within link analysis algorithms. An alternative approach for exploiting the site structure by aggregating the pages to the site level is presented by Xue et al. (2005).Throughout the rest of the paper and in concordance with the terminology in web link analysis, we will refer to links, which point to a web page as the in-links or back-links of that page. Links in a page pointing to other pages are called the out-links of that page. Links which point to other hosts is called external out-links. We will speak of hierarchy in terms of an inverted tree. That means, high levels will refer to pages close to the root or homepage whereas low or deep levels will refer to pages close to the leaves.The paper is organized as follows. The next section reviews previous work in link analysis in more depth. The following section describes our experiments and the results. A discussion and interpretation of the results follows. Within this discussion, we also present a method for alleviating the structure bias within web retrieval. The closing section reveals some directions for further study.\nLink analysis research: The Internet poses several new challenges for information retrieval and, at the same time, offers additional knowledge sources. One of the main challenges for search engines is the heterogeneous content quality of Internet pages (Henzinger, 2000). The common approach to automatically measure the quality of a page has been link analysis. 
However, link analysis is a heuristic method. The number of links pointing to a page, are considered as main quality indicator (Brin and Page, 1998). A large number of algorithms for link analysis has been developed (Henzinger, 2000). The most well known are probably the PageRank algorithm and its variants (Jeh and Widom, 2003; Lempel and Moran, 2000; Haveliwala, 2002).The basic assumption of PageRank and similar approaches is that the number of in-links of a web page can be used as a measure for the popularity and consequently for the authoritative quality of a page (Henzinger, 2000; Brin and Page, 1998). PageRank assigns an authority value to each web page, which is primarily a function of its back links. Additionally, it assumes that links from pages with high authority should be weighed higher and should result in a higher authority for the receiving page. To account for the different values each page has to distribute, the algorithm is carried out iteratively until the result converges. PageRank may also be interpreted as an iterative matrix operation (Meghabghab, 2002). The PageRank value gives the probability for a random surfer to reach a page. A random surfer selects a link at random and jumps to a random page after a certain number of interactions.Similar measures have been used for decades in bibliometrics for the evaluation of scientific literature (Choo et al., 2000). Meanwhile, the availability of many papers online and some technical advancement have made bibliometric systems for scientific literature available on the Internet (Lawrence et al., 1999). The availability of such measures will eventually lead to an even greater importance and impact of quantitative evaluation. Research is also already investigating the relationship between the scientific quality of universities and the in-links of the corresponding university pages (Thelwall and Harries, 2003).The Kleinberg algorithm (Kleinberg, 1998) is a predecessor of PageRank and it works similarly. However, it assigns two types of values to the pages. Apart from the authority value, it introduces a so-called hub value. The hub value represents the authority as an information intermediate. The Kleinberg algorithm assumes that there are two types of pages, content pages and link pages. The hub value or intermediate quality is high when the page refers to many pages with high authority. Accordingly, the topical authority is increased when a page receives links from highly rated hubs (Kleinberg, 1998). Unlike PageRank, which is intended to work for all pages encountered by a web spider of a search engine, the Kleinberg algorithm is originally designed for the results of a topical search. This result set is expanded to include pages with links pointing to pages in the set and pages to which pages in the result set point.However, link analysis has several serious shortcomings. The assignment of links is a social process leading to remarkable global patterns. The number of in-links for a web page follows a power law distribution (Adamic and Huberman, 2001; Dill et al., 2001). In such a distribution, the median value is much lower than the average. That means many pages have few in-links while few pages have an extremely high number of in-links. This finding can be explained by assuming that web page authors act according to economic principles and invest as little time as possible for their selection. Like other social actors in networks, they rely on the preferences of other actors (Barabasi, 2002). 
If some people already set links to a page, others are more likely to link to the same pages than to other pages. Consequently, pages with a high in-link degree are more likely to receive further in-links than other pages (Pennock et al., 2002). Another reason for setting links is thematic similarity. A co-citation matrix between web pages of different topics shows that most links are based on topical similarity (Chakrabarti et al., 2002). Definitely, quality assessment cannot be the only reason for setting a link.The quality or authority value derived from the link-structure needs to be aggregated with the retrieval ranking (Yang, 2000). The power law distribution of authority values makes the values of some pages very dominant while other have little influence. A study of university sites questions the assumption that quality is associated with high in-link counts. It was shown that the links from university sites do not even lead to pages with scientific material in most cases. They rather refer the user to link collections and subject specific resources (Thelwall, 2002).An experiment carried out by Amento et al., suggests, that the human perception of the quality of web pages can be predicted equally well by four formal features. These four include link analysis measures like the PageRank value and the total number of in-links. However, simple measures like the number of pages at a site and the number of graphics on a page correlated also with the human judgements (Amento et al., 2001). Quantity might to be an indicator for quality as well. Another experiment showed that a notion of quality learned from examples can lead to better retrieval results (Mandl, 2006).Large-scale evaluation of web information retrieval is carried out within TREC (Text Retrieval Conference, http://trec.nist.gov) (Voorhees and Harman, 2000; Voorhees and Harman, 2001; Voorhees and Harman, 2000). TREC provides a test bed for information retrieval experiments. The annual event is organized by the National Institute for Standards and Technology (NIST) which maintains a large collection of documents. Research groups apply their retrieval engines to this corpus and optimize them with results from previous years. They send their results to NIST, where the relevance of the retrieved documents is assessed. A few years ago, a web track has been introduced, where a large snapshot of the web forms the document collection (Hawking, 2000, Hawking and Craswell, 2001; Hawking and Craswell, 2002). In this context, link based measures have been compared with standard retrieval rankings. The results show that the consideration of link structure does not lead to better retrieval performance for informational tasks (\"find information about X\"). This result was observed in the web track at TREC 2001 and TREC 2002 where different corpora were used for experimentation. Only for navigational search tasks like home page finding (\"find the homepage of X\"), link based authority measures like the PageRank algorithm have led to an improvement (Hawking, 2000; Hawking and Craswell, 2001, Hawking and Craswell, 2002; Savoy and Rasolofo, 2000). 
As a consequence, further refinement of link-based algorithms is necessary.Our analysis demonstrates that several other structural factors need to be taken into account when exploiting link information.\nExperiments: In our experiment, we were especially interested in the structure of sites and pages and their influence on the in-links.In order to determine the hierarchical structure of a site, we applied two methods.1. Yahoo set: A large site, which allows the automatic determination of the hierarchical level of a page, was selected. We chose the German version of Yahoo and focused on the health branch. We extracted some 10,000 pages.2. Search engine set: The second method relies on a heuristic determination of hierarchy. The order in which a page is crawled from a homepage is defined to be the hierarchical level. This measure can be determined automatically. However, it may not always coincide with human perception of hierarchical organization. Nevertheless, for the purpose of this study, we accepted the crawl order on a site as a sufficiently reliable indicator of the hierarchical organization. We chose common keywords from the above set of Yahoo pages in order to get topically similar pages. These keywords were sent to a search engine and the result pages were parsed. For all result pages, the respective host was fully crawled. In order to achieve that, each hit was stripped of the file part of the URL to reach a homepage. From that homepage, we started a crawl limited to the site and two steps. Some 15,000 pages were downloaded and analyzed.For all pages, a document object model (DOM) was derived to analyze the internal structure. This allows us to count the number of the HTML tags.In addition, the number of in-links to each of the pages was determined by querying Altavista.Hierarchical site structure\nDiscussion: This study shows that humans are much more likely to set a link to a page which is positioned at a higher level of the hierarchy of a web site. Links are more likely to point toward more general topics then toward special topics low in the hierarchy.In-links at different hierarchical levels\nOutlook: Link analysis should not be based solely on an algorithmic analysis. Taking human factors into account might potentially improve the web search results. The behavior of web authors can be easily extracted from the link structure of the web. This knowledge source allows for a refinement of link analysis algorithms.We demonstrated a structure bias in the number of in-links of a web page. This structure bias can be found for the site structure as well as for the internal page structure. PageRank and other link-based metrics only consider the fact that a link has been set, but not why it has been set. The research presented here provides some hints for the reasons for setting links. We intend to investigate not only the target of a link as we did in this paper but also its origin with respect to structure. In future work, we intend to establish a cross level link map, which shows from where links point to where with respect to the hierarchical position. Also language is an important factor, which determines the targets of web links. Links tend to exist between pages of the same language (Kralisch and Mandl, 2006).\n",
"role": "user"
},
{
"content": "Purpose: - This web mining paper aims at analyzing whether the number of links pointing to a web page is biased by the structure of web sites.\nDesign/methodology/approach: - By web-design mining methods, two collections of web pages are extracted and the in-links counts are determined by querying web search engines.\nFindings: - The paper finds that the structure bias and pages on a higher hierarchical level are likely to receive more links than other pages.\nResearch limitations/implications: - The links are set by web page authors whose behaviour is not yet fully understood and which requires further research.\nPractical implications: - The paper shows that the structure bias of in-links should be considered by link analysis measures used in search engines. Potential consequences are presented.\nOriginality/value: - The number of links toward a web page are considered to be an indicator for the quality of that page. This measure is biased and cannot be solely trusted as a knowledge source for the quality of a page.\n",
"role": "assistant"
}
] |
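The row above summarizes PageRank's iterative logic: each page's authority is a function of its in-links, recomputed until convergence, with a random-jump component modeling the random surfer. A toy power-iteration sketch of that idea (illustrative only, not the paper's code):

```python
# Power-iteration PageRank over a toy link graph, following the "random
# surfer" description above. Damping factor d and tolerance are conventional.
import numpy as np

def pagerank(links, d=0.85, tol=1e-8):
    """links[i] = list of pages that page i points to (its out-links)."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    while True:
        new = np.full(n, (1.0 - d) / n)  # random-jump component
        for i, outs in enumerate(links):
            if outs:  # distribute rank evenly along out-links
                new[np.array(outs)] += d * rank[i] / len(outs)
            else:     # dangling page: spread its rank over all pages
                new += d * rank[i] / n
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# Toy site: page 0 plays the "homepage" role and receives most in-links.
print(pagerank([[1, 2], [0], [0], [0, 1]]))
```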
Interpreting interrelations across multiple levels in HGLM models: An application in international marketing research
|
[
"Multilevel marketing",
"Marketing models",
"International marketing"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: International marketing research often involves models with variables that belong to different units of analysis, which themselves may form a hierarchical structure. For example, managers may be nested within companies, or consumers within countries.Research studies, which influenced international marketing/international business researchers the most (i.e. cited the most), propose models/theories that are indeed suitable for multi-level modeling. A quick glance at the seven JIBS articles, which have been cited more than 200 times[1], illustrates situations where multi-level research would be beneficial (see Table I).Hierarchical Linear Modeling (HLM, also known as multi-level modeling) was originally developed to deal with hierarchical (nested) data. A more generalized version of HLM called \"Hierarchical Generalized Linear Modeling (HGLM)\" (see Goldstein, 1991; Wong and Mason, 1985) is employed when the dependent variable is dichotomous, which is not seldom in marketing research. HGLM is ideally suited for research on international marketing, but has not been applied much in this literature.In this research, we:(1) provide a brief theoretical overview of HLM and HGLM;(2) illustrate the application of the method using the domains of consumers within countries and a dichotomous dependent variable; and(3) demonstrate the calculation and the interpretation of log-odds results.The empirical tests are conducted using data from across 31 countries from more than 31,000 consumers.\nOverview of HLM: Assume that we want to model consumer data from different countries. Two common traditional methods to deal with such nested data structures have been disaggregation and aggregation. The problem with the first approach is that the consumers in the same country will have the same value on each of the country variables. Therefore, the \"independence of observations\" assumption, which is basic for classical statistical techniques, does not hold (Raudenbush and Bryk, 2002). The second approach aggregates consumer characteristics over countries, and does a higher-level analysis. However, all the within-group information is lost, the relations between aggregated variables often seem to be much stronger than what they actually are, and the relations between the aggregate variables can be very different from the relations between the non-aggregate variables. Waste of information and distortion of interpretation are the downsides of this second approach (Raudenbush and Bryk, 2002).Individuals in the same group are often closer or more similar than individuals in different groups. Consumers in different countries can be independent, but consumers in the same country share the same value on certain variables. If unaccounted for, these unobserved variables go into the error term of the linear model and cause correlation between disturbances. The disturbances have a group and an individual component. Group components are correlated within groups and independent between groups, whereas individual components are independent. In addition, some groups may be more homogeneous than other groups, thus the variance of the group components can differ (Raudenbush and Bryk, 2002). 
Therefore, combining all variables that belong to different levels of analysis into one regression equation undermines two basic assumptions of traditional linear model analysis: homoscedasticity and independence.One approach to solve these problems is to include an effect in the model that corresponds to the grouping of the lower-level units, thus employing ANOVA or ANCOVA. However, there are a number of problems with this approach (Luke, 2004). As the number of groups increase, there are more parameters to estimate, and the model has less power and greater complexity. The treatment of group parameters as fixed effects ignores the random variability of the group characteristics. Furthermore, ANOVA methods are not very flexible in handling missing data or unbalanced designs (Luke, 2004).HLM was developed to deal specifically with hierarchical (nested) data in education research (Bryk and Raudenbush, 1992; Raudenbush and Bryk, 2002). This method avoids the weaknesses outlined above. Each of the levels in the data structure has its own sub-model, which captures the relationships among variables within a given level and specifies how variables at one level influence relations occurring at another (Raudenbush and Bryk, 2002). Disciplines such as sociology, biometrics, econometrics, and statistics have all contributed to the development of such models for nested data structure. Although not very common, studies in organizational behavior (Hofmann, 1997; Hofmann and Gavin, 1998; Klein et al., 1994) and strategic management research (Song et al., 2002) also utilized this technique. Other names used in different literatures include multi-level linear models, mixed-effects models, random-effects models, random-coefficient regression models, covariance components models, etc. In marketing, multi-level modeling started to become popular as well (Bijmolt et al., 2004).A simple example of HLM follows for a two-level model, where there are individual- (level-1) and group-level (level-2) variables. To first isolate and then account for the effects of group-level variables, the individual-level variables are modeled as having a separate regression equation for each group. The parameters of these regression equations are then regressed on the group-level variables. This procedure lets group-level variables be used to explain variation in the individual-level parameters and allows testing for main effects, and interactions within and between levels.Assume there are two level-1 variables (kh1 and kh2) and one level-2 variable (o1). The value of the dependent variable (Yij) can be predicted from the values of level-1 independent variables.The regression equation is: (Equation 1) where \"i\" refers to the person number and \"j\" refers to the group number.Each group will have a separate regression equation, and the coefficients b0, b1, and b2 will be allowed to change from group to group. Further analyses can explain their variability. Thus, level-2 regression equations are formed to predict the value of the level-1 parameters using values of the level-2 independent variable: (Equation 2) (Equation 3) (Equation 4) Note that there is a separate equation for each parameter in Equation (1).If the level-2 Equations (2)-(4) are substituted into the level-1's Equation (1), the combined model is: (Equation 5) Coefficients that are allowed to vary from group to group, e.g. b0j, b1j, and b2j, are referred to as \"random\" coefficients. 
Coefficients \"g\" are not assumed to vary across groups (and hence they lack the subscript j); therefore, they are referred to as \"fixed\" coefficients (Hox, 1995). Standard OLS cannot be used to estimate this equation. As reviewed above, one necessary condition to conduct OLS is that the random errors are independent, normally distributed, and have constant variance. Note that the random error in Equation (5), which is [u0j+(u1j*kh1ij)+(u2j*kh2ij)+rij], is not independent across groups since the components u0j, u1j, and u2j are common to every individual within group j. The errors do not have equal variances either, since u0j, u1j, and u2j vary across groups and kh1ij and kh2ij vary across individuals. Although standard regression analysis is inappropriate, iterative maximum likelihood procedures can be used to estimate such models (Raudenbush and Bryk, 2002).Note that if u0j, u1j, and u2j were null for every j, Equation (5) would be equivalent to an OLS regression model.\nOverview of hierarchical generalized linear models: When the dependent variable is dichotomous, linear regression methods should not be employed for two reasons (Snijders and Bosker, 1999). First, ordinary linear regression might give a fitted value that is outside the permitted range. For example, a dichotomous outcome variable Global Brand Ownership (GBO) may be 0 (failure; the consumer does not own the brand) or 1 (success; the consumer owns the brand). A fitted value of \"0.80\" can be interpreted as an 80 percent probability of owning the brand. A fitted value of \"1.1\" is meaningless, however.The second reason against linear regression methods is that the mean and variance of a Bernoulli distribution are related (Snijders and Bosker, 1999). For a dichotomous variable Y with probability p for outcome 1 (success), the probability for outcome 0 (failure) is 1-p. Then: (Equation 6) The variance is not a free parameter because it depends on p (Raudenbush and Bryk, 2002).Furthermore, ordinary HLM is inadequate because of two assumptions:(1) linear relationships between the predictors and the dependent variable; and(2) the normality of the random effects (Raudenbush and Bryk, 2002).These assumptions are violated when the dependent variable is dichotomous. The level-1 random effect cannot be normally distributed, since it can take only two values. In a multi-level setting, the dichotomous outcome, Yij for level-1 unit i in group j, can be represented as the sum of the probability (average proportion of successes) in group j plus a residual for the individual i (assuming constant probability of success per group): (Equation 7) The residual Rij has a mean zero; however, it can have only the values Pj and 1-Pj, since Yij must be 0 or 1. Given the value of the probability Pj, the variance of the residual is: (Equation 8)The linearity assumption does not hold either. GBO (our dependent variable) must lie in the (0, 1) interval, requiring a nonlinear transformation of the predicted value.The analysis of nonlinear structural models and non-normally distributed errors is accomplished by HGLMs, described below. The level-1 model consists of three parts: a sampling model, a link function, and a structural model.Level-1 sampling model\nAn application of HGLM in international marketing: In order to illustrate the unique interpretations available from the results of HGLM, we chose global branding as the context. Global brands are among the most important intangible assets a company can have. 
According to the Interbrand/BusinessWeek Study (2009), the financial value of the top 100 global brands exceeds $1 trillion. Many researchers have investigated different aspects of global branding, and prestige and quality are two associations commonly linked to global brands (Holt et al., 2004a, b; Steenkamp et al., 2003). However, research to date has not identified how these associations are related to overall global brand equity (GBE).The model we test (Figure 1) rests on the contingency view, which states that the impact of a particular factor depends on other factors (Zeithaml et al., 1988). We begin with the premise that, as identified by previous researchers, perceived brand quality and brand prestige drive \"GBO\". We also propose that the strengths of these relationships are contingent on overall GBE. Thus, the first important question concerns the impact of quality versus prestige on GBO, controlling for GBE. Second, are the strengths of the relationships between either quality or prestige and GBO contingent on GBE? If so, how do these interactions with GBE affect GBO?In summary, our contingency model encompasses:(1) the core relationships of equity, quality, and prestige to ownership (across 31 countries);(2) the interaction effects, and also examines;(3) the control variables of age and income; and(4) country-level GDP per capita.We examine whether the relationships among ownership, equity, quality, and prestige depend on age, income, and/or country-level GDP per capita. Figure 1's multi-level model enables us to scrutinize multiple core relationships because the interdependence of observations from consumers who are from the same country is explicitly incorporated into the analysis.Model variables and their descriptions\nThe results and their interpretations: Random effects\nOther practical considerations when designing and analyzing international multi-level studies: There are certain issues needed to be taken into account while conducting multi-level studies. Although these issues are very broad and cannot be explained thoroughly in this article, below, we try to provide the main sources to consult when designing and analyzing multi-level studies.Level-1 and level-2 sample size\nSummary and research implications: The interplay of variables that belong to different units of analysis usually presents the most interesting research questions for international marketing researchers. Examples include the interactions of: country economic development level and consumer behavior; country culture and firm behavior; headquarters market orientation, subsidiary country culture and employee behavior. Usually \"context\" interacts with the \"actors\" and analyzing this interaction properly helps us to fine-tune our theories. In order to contribute empirical generalizations in marketing, we need to take into account the nested data structures while conducting our analyses. If we neglect the dependence of data and treat those as unrelated, empirical conclusions from statistical analyses will not be reliable. If, on the other hand, we confine ourselves only to the study of relationships within a single level, the conceptual development of more comprehensive models will be restricted.In this paper, we illustrate a contingency framework model, which includes a dichotomous dependent variable. By using data collected from more than 31,000 consumers across 31 countries, we demonstrate how to analyze, calculate, and interpret results in an HGLM model. 
We also include and cite a rich set of scholarly resources in multi-level modeling research for further information on specific modeling issues. Although it seems complicated at first, the basic idea of multi-level modeling is simple - we have different regression equations at each level of the data and we look at how variables at different levels influence the relationships occurring at other levels. We hope to have provided enough inspiration and encouragement, as well as a practical guideline, for international marketing researchers in their future research.\n",
"role": "user"
},
{
"content": "Purpose: - Although the use of data from different levels is very common in international marketing research, the practice of employing multi-level analysis techniques is relatively new. The paper aims to provide an application of a specific case of multi-level modelling - where the dependent variable is dichotomous, which is often the case in marketing research (e.g. whether a consumer buys the brand or not, whether he/she is aware of the brand or not, etc.)\nDesign/methodology/approach: - A hierarchical generalized linear model is employed.\nFindings: - Since this is a technical paper, the authors would like to emphasize the process rather than the empirical findings. In summary, the paper: provides a brief theoretical overview of Hierarchical Linear Modeling and Hierarchical Generalized Linear Modeling; illustrates the application of the method using the domains of consumers within countries and a dichotomous dependent variable; focuses on interpretation of log-odds results; and concludes with practical issues and research implications.\nOriginality/value: - The main value of this research is to demonstrate how to employ multi-level models when the dependent variable is dichotomous. Multi-level techniques are quite new in international marketing research, although nested data structures are relatively common in our field. This is a technical paper that guides the researchers as to how to apply and interpret the results when modeling such data with a dichotomous dependent variable.\n",
"role": "assistant"
}
] |
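The abstract above notes that the HGLM level-1 model "consists of three parts: a sampling model, a link function, and a structural model" without spelling them out. For a dichotomous outcome these take the standard Bernoulli-logit form below; this is the textbook formulation with generic predictors x1 and x2, not an excerpt from the paper:

```latex
\begin{align*}
  Y_{ij}\mid p_{ij} &\sim \mathrm{Bernoulli}(p_{ij}) && \text{(sampling model)}\\
  \eta_{ij} &= \log\frac{p_{ij}}{1-p_{ij}} && \text{(logit link function)}\\
  \eta_{ij} &= b_{0j} + b_{1j}x_{1ij} + b_{2j}x_{2ij} && \text{(structural model)}
\end{align*}
```

Exponentiating a fixed coefficient then gives the odds ratio used in the paper's log-odds interpretations.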
Entrepreneurial self-efficacy and intentions: Outcome expectations as mediator and subjective norms as moderator
|
[
"Subjective norms",
"Entrepreneurial intentions",
"Entrepreneurial self-efficacy",
"Entrepreneurial outcome expectations"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Entrepreneurial intentions signal the willingness and desire to consider the creation of a new venture as a career option (e.g. Bird, 1992, 1988; Bird and Schjoedt, 2009) and as such are a precursory step in the process of launching a venture (Lee and Wong, 2004; Fayolle et al., 2006; Linan and Fayolle, 2015). Based on a diverse and complex assemble of personal and social perceptions towards entrepreneurship, individuals develop positive or negative entrepreneurial intentions (e.g. Linan and Chen, 2009) which are strongly related to actual startup behaviour years later as a career choice (Delanoe-Gueguen and Linan, 2018).\nTheory development: Entrepreneurial outcome expectations as a mediator between entrepreneurial self-efficacy and intentions\nMethod: Data collection and sample\nResults: Table I reports the means, standard deviations and correlations for all variables. Entrepreneurial intentions have a positive and significant correlation with entrepreneurial self-efficacy (r=0.26***), entrepreneurial outcome expectations (r=0.22***), gender (r=0.11***), and prior family business exposure (r=0.09**). Men had significantly higher entrepreneurial intentions than women (F(1; 1,025)=13.29***), women had significantly higher subjective norms than men (F(1; 1,025)=5.24*) and men had significantly higher entrepreneurial self-efficacy than women (F(1; 1,025)=7.74***).\nThe mediation role of entrepreneurial outcome expectations: Following the recommendations of Aguinis and et al. (2018), the mediation of entrepreneurial outcome expectations in the relationship between entrepreneurial self-efficacy and entrepreneurial intentions (H1) was tested following the bootstrapping method developed by Preacher and Hayes (Preacher et al., 2007). This study used PROCESS version 3 for SPSS (Hayes, 2017) with gender, prior business exposure and general self-efficacy as control variables, mediation model (Model 4), with 5,000 bootstrapping samples to obtain 95% bias-corrected confidence intervals (BCs CIs).\nThe moderated mediation role of subjective norms: Building on the mediation model tested previously, the effect of subjective norms as a moderator was added to the relationship between entrepreneurial self-efficacy and entrepreneurial outcome expectations, i.e. the proposed first stage moderated mediation (Hayes, 2017). Model 7 on the PROCESS macro version 3 for SPSS (Hayes, 2017), controlling for gender, prior family business exposure and general entrepreneurial self-efficacy, with 5,000 bootstrapping samples to obtain 95% BCs CIs was used to test the moderation role of subjective norms on the effect of entrepreneurial self-efficacy on entrepreneurial outcomes expectations (H2a and H2b) and on the indirect effect entrepreneurial self-efficacy on intentions via outcome expectations (H3).\nDiscussion: This study aimed to expand our knowledge and understanding of the relationship between domain-specific self-efficacy and entrepreneurial intentions. Building on social career cognitive theory, this study tested part of the Liguori's et al. (2018) framework on entrepreneurial intentions by exploring entrepreneurial outcome expectations as a mediator and subjective norms as a moderator on the relationship between entrepreneurial self-efficacy and intentions. 
Using a sample of 1,026 US-based students from public and private universities, this study found that entrepreneurial outcome expectations partially mediate the relationship between entrepreneurial self-efficacy and entrepreneurial intentions. In addition, and in line with the hypotheses, the effect between entrepreneurial self-efficacy and outcome expectations is positive and significant for individuals with more favourable subjective norms comparing with individuals with a less favourable subjective norms towards entrepreneurship. The interaction effect of subjective norms and entrepreneurial self-efficacy on outcome expectations is still significant when considering the indirect effect on entrepreneurial intentions. Below the theoretical and practical implications of these results are discussed.\n",
"role": "user"
},
{
"content": "Purpose: Building on social career cognitive theory, the purpose of this paper is to investigate outcome expectations as a mediator and subjective norms as a moderator in the relationship between entrepreneurial self-efficacy and intentions.\nDesign/methodology/approach: Using a sample of 1,026 students from US public and private universities retrieved from the Entrepreneurship Education Project, this study tests a first-stage moderated mediation model in a two-step process.\nFindings: Results show that entrepreneurial self-efficacy is positively related to entrepreneurial intentions through the partial mediating effect of entrepreneurial outcome expectations, and that this relationship is consistently significant and positive for individuals with lower, average and higher subjective norms towards entrepreneurship.\nResearch limitations/implications: These findings contribute to the literature on entrepreneurial intentions by providing a comprehensive overlook on the mechanisms and boundary conditions relevant for intentions.\nPractical implications: These results reinforce the need for educators and policy makers to ensure programs manage outcome expectations and recognize the role of peer, parent and mentor role models on the construction of these expectations and, consequently, on entrepreneurial intentions.\nOriginality/value: Exploring the combined effect of entrepreneurial outcome expectations as a mechanism and subjective norms as boundary conditions on the relationship between entrepreneurial self-efficacy and entrepreneurial intentions is an unexplored issue to date, and helps to understand how and why entrepreneurial intentions emerge.\n",
"role": "assistant"
}
] |
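The paper above tests mediation with PROCESS Model 4 and 5,000 bootstrap samples. A bare-bones sketch of the same bootstrapped indirect effect (a x b) follows, with hypothetical variable names, percentile rather than bias-corrected intervals, and the control variables and Model 7 moderation stage omitted for brevity:

```python
# Bootstrapped indirect effect: self_efficacy -> outcome_expect -> intentions.
# a = effect of X on the mediator; b = effect of the mediator on Y given X.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical sample

def indirect(sample):
    a = smf.ols("outcome_expect ~ self_efficacy",
                data=sample).fit().params["self_efficacy"]
    b = smf.ols("intentions ~ outcome_expect + self_efficacy",
                data=sample).fit().params["outcome_expect"]
    return a * b

rng = np.random.default_rng(0)
boot = [indirect(df.sample(n=len(df), replace=True, random_state=rng))
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for a*b: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 => mediation
```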
Factors influencing SRI fund performance
|
[
"Factor models",
"SRI funds",
"Responsible investment",
"G10",
"G11",
"G12",
"G19"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The socially responsible investment (SRI) fund performance has received the attention of both academicians and practitioners. The awareness for the social issues influencing living conditions has increased the popularity of social and ethical investments during the last few decades. The total net assets of SRI funds reached $12 trillion in 2018 (www.ussif.org).\n2. Literature review: Empirical studies on SRI fund performance focus on the returns of SRI indices and SRI funds. Earlier studies (Sauer, 1997; Guerard, 1997; Konar and Cohen, 2001; Statman, 2006; Adler and Kritzman, 2008, Mallett and Michelson, 2010, among others) cannot provide convincing evidence about the advantage of socially responsible investing. Among the studies investigating SRI index performance, Grossman and Sharpe (1986), Hutton et al. (1998), Luck and Pilotte (1993), Diltz (1995) and Heyes (2005) report that SRI indices perform better than conventional indices. Lyn and Zychowicz (2010), using the Sharpe and Treynor measures, report that faith-based funds perform better than other socially responsible funds. Schroder (2004) also supports these studies showing that most of the German, Swiss and US SRI funds provide at least comparable returns relative to their benchmarks. Among international studies, Luther et al. (1992) and Mallin et al. (1995) also support the view that on a risk-adjusted basis ethical trusts outperform the other trusts.\n3. Data and methodology: 3.1 Data\n4. Empirical results: 4.1 Performance of SRI funds\n5. Conclusions: This study analyzes SRI funds during January 1995 and May 2015 period using a sample of 152 SRI funds. On average, SRI funds appear to be providing comparable returns relative to benchmark market returns, leading us to conclude mixed evidence regarding their performance.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine socially responsible investment (SRI) fund performance and investigate the factors influencing fund performance.\nDesign/methodology/approach: The study uses return data from the Morningstar database for 152 SRI funds from January 1995 to May 2015. The initial analysis includes the use of various risk-adjusted performance measures, including Sharpe ratio, Treynor ratio, Information ratio, Sortino ratio and M2. The study also uses four factor models, including Jensen single-factor model, Fama-French three-factor model, Carhart four-factor model and Fama-French five-factor model to explain SRI fund returns. Finally, a cross-sectional regression analysis is applied to investigate the determinants of SRI fund returns.\nFindings: The results show that, on average, the SRI funds provide comparable risk-adjusted returns relative to various benchmark market indices. Market factor is significant in explaining SRI fund returns. Examining each factor model, the results do not support Fama-French's three-factor model as neither size nor value factor is significant. The author finds weak support for Carhart's momentum factor along with the market factor. Finally, the Fama-French five-factor model shows market, size and operating profit factors explain SRI fund returns. The study also finds the fund performance is stronger for funds with the higher turnover ratio, the larger fund size and more managerial experience and lower for funds with higher expense ratio. Also, funds formed with negative screening perform better than positive or mixed screened funds.\nOriginality/value: SRI funds have received considerable attention from investors. This study contributes to the literature by examining SRI fund performance and investigating factors influencing their performance using multiple factor models and cross-sectional regression analysis. The findings are relevant for investors who demand responsible investment opportunities without sacrificing returns for nonfinancial screenings. Findings also suggest that investors should consider fund characteristics when selecting SRI funds.\n",
"role": "assistant"
}
] |
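For reference, the factor models named in the record above are nested time-series regressions of excess fund returns on common risk factors. The sketch below shows the Carhart four-factor specification in its textbook form, not the paper's estimates; the Jensen model keeps only the market term, the Fama-French three-factor model drops MOM, and the five-factor model instead adds profitability (RMW) and investment (CMA) factors.

```latex
% Carhart four-factor model for fund i in month t:
\[
  R_{i,t} - R_{f,t} = \alpha_i + \beta_i\,(R_{m,t} - R_{f,t})
    + s_i\,\mathrm{SMB}_t + h_i\,\mathrm{HML}_t + m_i\,\mathrm{MOM}_t
    + \varepsilon_{i,t}
\]
% alpha_i is the fund's risk-adjusted performance relative to the factors.
```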
Semantic technologies for aligning heterogeneous business process models
|
[
"Business process management",
"Semantic technologies",
"Business process modelling",
"Ontology matching",
"Process model matching",
"Semantic alignment",
"Natural language processing",
"Semantic analysis",
"Model linkage",
"Model integration",
"Semantic modelling"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: One of the core management tasks in enterprises is the design and continuous adaptation of business operations according to current needs and reasonable allocation of the resources available. The basis for decisions regarding the shaping or reshaping of business operations in and between enterprises is the availability of a complete and correct description of the underlying business processes. For this purpose the use of models has become an accepted means. Over the past decades, a number of different methods, techniques, languages and supporting tools for modeling conceptual business processes have been developed. However, mostly they do not include precise rules or explicit guidelines for the application of the natural language for labeling the model elements. Furthermore, even though sometimes practical guides or naming conventions exist, they are informal or not specific enough (Mendling et al., 2010, p. 468). Instead, the choice of words and phrasing of expressions for designating the domain facts is often left up to the modelers' choice. Thus, models can differ not only due to the use of different modeling languages, but even more so, they can differ considerably due to dissimilarly applied natural language. As a solution for preventing such semantic differences, the use of agreed-upon sets of terms or reference standards for creating labels has been suggested (Thomas and Fellmann, 2007). However, the creation of a universal vocabulary cannot solve the problem of semantic interoperability completely. Even in case construction rules and readily available vocabularies, glossaries or thesauri are in place, nonetheless the phrasing of element labels may vary also already between team members (Becker et al., 2010a). Furthermore, the adherence to naming rules upon designing models supports only their creation.\n2. Research method: The research carried out is based on a design-science-oriented approach. Design-science research systematizes design as a science in information systems research and provides for the construction of novel, innovative and viable artifacts, such as languages, symbols or models, usable as abstractions or representations, practical instantiations or methods in order to increase knowledge for solving organizational problems of general interest (Hevner et al., 2004; March and Smith, 1995). Thereby, a method is to be understood as a planned, result-oriented approach for systematically solving a task (Sarshar, 2008). Method engineering provides for new principles for systematically reaching a goal (Gutzwiller, 1994). It includes the design, construction and adaptation of methods, techniques and tools (Brinkkemper, 1996). The steps undertaken for developing the novel method presented here followed the suggestions as outlined by Peffers et al. (2007).\n3. Related work: The question of how to compare models and identify matching model elements is not new and has been approached in different ways. In the following an overview of the relevant literature is given with regard to the specific subject aspects addressed.\n4. Automating semantic model matching: In effect, since the introduction of business process management, process modeling produces independent models. For the specification and representation of business processes various different business process modeling languages are available, while the model elements are labeled in natural language as per individual decision. 
As a result, when models are to be compared, semantic heterogeneity is encountered. In case of organizational restructuring or mergers and acquisitions business process model collections from autonomous sources need to be matched. Thereby, typical questions for semantic analysis may be:\n5. Semantic model alignment: The method presented here is called Semantic Model Alignment. Its application facilitates automated establishment of meaning-oriented relations between business process models through automated disambiguation and linkage. These relations express semantic correspondences and are here used to describe semantic similarity. Thus it eases the identification of occurrences, similarities and potential consolidation points and enables the coordination and harmonization of semantically heterogeneous models independently of the modeling and domain languages. The core idea of the method is the reuse of the knowledge contained in business process models and the combination of ontology matching with information linguistics techniques into a heuristic method for matching element labels without the need for preparatory efforts. In the following the various subsequent method steps are presented.\n6. Language-aware Semantic Matching: In order to provide for automation as described, the LaSMat ontology matcher has been designed and implemented for supporting the application of Semantic Model Alignment. The matcher is implemented in Java and can be used through a Java-API or its graphical user interface. For matching, the model ontologies to be compared are selected and the phrases contained in the ontology labels are taken. The matching process realized foresees several steps as illustrated in Figure 4 and detailed in the following.\n7. Proof of concept and experimental evaluation: For evaluation and proof of concept of the method presented here first experiments were conducted after the target format descriptions for the conversion and the supporting ontologies have been developed and the procedure of Language-aware Semantic Matching was prototypically implemented.\n8. Conclusion: This paper presents a novel approach for automating semantic model matching by way of Semantic Model Alignment.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to propose a solution for automating the task of matching business process models and search for correspondences with regard to the model semantics, thus improving the efficiency of such works.\nDesign/methodology/approach: A method is proposed based on combining several semantic technologies. The research follows a design-science-oriented approach in that a method together with its supporting artifacts has been engineered. It application allows for reusing legacy models and automatedly determining semantic similarity.\nFindings: The method has been applied and the first findings suggest the effectiveness of the approach. The results of applying the method show its feasibility and significance. The suggested heuristic computing of semantic correspondences between semantically heterogeneous business process models is flexible and can support domain users.\nResearch limitations/implications: Even though a solution can be offered that is directly usable, so far the full complexity of the natural language as given in model element labels is not yet completely resolvable. Here further research could contribute to the potential optimizations and refinement of automatic matching and linguistic procedures. However, an open research question could be solved.\nPractical implications: The method presented is aimed at adding to the methods in the field of business process management and could extend the possibilities of automating support for business analysis.\nOriginality/value: The suggested combination of semantic technologies is innovative and addresses the aspect of semantic heterogeneity in a holistic, which is novel to the field.\n",
"role": "assistant"
}
] |
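Since the record above turns on heuristic matching of natural-language element labels, a rough sketch may help readers picture the idea. This is a minimal stand-in using only string similarity, not the paper's LaSMat matcher (which combines ontology matching with richer linguistic analysis); the stopword list, threshold and example labels are assumptions for illustration.

```python
# Hypothetical sketch of heuristic label matching between two process models.
import re
from difflib import SequenceMatcher
from itertools import product

STOPWORDS = {"the", "a", "an", "of", "for", "to", "and"}  # assumed list

def normalize(label: str) -> list[str]:
    """Lower-case a model element label and split it into content tokens."""
    tokens = re.findall(r"[a-z0-9]+", label.lower())
    return [t for t in tokens if t not in STOPWORDS]

def label_similarity(a: str, b: str) -> float:
    """Heuristic similarity between two element labels, in [0, 1]."""
    ta, tb = normalize(a), normalize(b)
    if not ta or not tb:
        return 0.0
    # Best pairwise token similarity, averaged over the shorter label.
    short, long_ = (ta, tb) if len(ta) <= len(tb) else (tb, ta)
    scores = [max(SequenceMatcher(None, s, l).ratio() for l in long_) for s in short]
    return sum(scores) / len(scores)

def align(model_a: list[str], model_b: list[str], threshold: float = 0.8):
    """Return candidate semantic correspondences between two label sets."""
    matches = []
    for a, b in product(model_a, model_b):
        score = label_similarity(a, b)
        if score >= threshold:
            matches.append((a, b, round(score, 2)))
    return matches

if __name__ == "__main__":
    # Dissimilarly phrased labels for the same activity still match.
    print(align(["Check customer order"], ["Verify the order of the customer"], 0.6))
```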
Leveraging emerging markets for commercial success
|
[
"Innovation",
"Emerging markets",
"Business development"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Global manufacturers are focused intently on the opportunities to develop and sell their products in emerging markets. But long-term success will take far more than simply making minor adjustments to existing products, lowering prices, and developing new sales channels.\nDelivering different products at dramatically lower price points: Serving the majority of emerging market customers will require manufacturers to design products that are priced far below their offerings in developed markets. This is not simply a matter of shaving 10-15 percent off costs. Instead, new products need to be designed at prices that are orders of magnitude lower than in developed markets.Nokia and Motorola now offer cell phone handsets for less than $50, and Royal Philips Electronics NV has a project to drive production costs below $15 by 2008 (BusinessWeek, 2005a; Dow Jones International News, 2005). Made in Romania, Renault's Logan is a $6,000 car that was designed for the Eastern European market, but has proven to be popular around the world, including in Western Europe (BusinessWeek, 2005b).Lower cost does not necessarily mean less sophisticated. Passive RFID chips developed by Philips enables an efficient transport payment system used in Shanghai and other major Chinese cities. Global manufacturers can be certain of one thing: if they don't develop low-cost products, local firms will surely do so. Haier Group, the Chinese appliance manufacturer, has designed a tiny washing machine for the rural market that sells for just $38 (BusinessWeek, 2005c).\nAcquiring deeper customer knowledge: Companies need to invest the resources required to gain a deep understanding of the requirements of customers in emerging markets. Each country is different, and the needs within a single country can vary widely.Procter & Gamble now spends 30 percent of its $1.9 billion R&D budget on developing products for low-income consumers, with laboratories that recreate the temperature and humidity in Mexico and China (Financial Times, 2005a, b). Not content with traditional methods, the company's researchers spend time in people's homes to understand how they live and actually use products. The result has been a series of innovations including new versions of Tide and diapers designed to meet the special needs of low-income households.\nGlobalizing R&D: Many major manufacturers have now located R&D facilities in emerging markets not only to reduce expenses, often benefiting from tax credits and other government incentives, but just as important to better incorporate local needs and expertise into product design. Rather than being confined to a few industry leaders, 40 percent of the executives surveyed at companies selling new products in emerging markets said these were designed locally. Due to their proximity to customers and key suppliers, Emerson turned to its engineering teams in The Philippines and other Asian countries when designing lower-cost power supplies.While the benefits can be substantial, a key challenge is to effectively integrate research and development teams around the world with each other and with commercial managers. 
In addition, an explicit strategy to protect a company's intellectual property assets is essential given the weaker patent and intellectual property protections in many countries.\nMastering the complexity of global value chains: For companies to deliver commercially viable products at dramatically lower prices requires an efficient global value chain encompassing governance, business processes, and management know-how. Deloitte Member Firms' research into the operations of the largest and most complex companies in its database of more than 800 global manufacturers have found that only 15 percent have successfully mastered the growing complexity of their value chains, but these companies are up to 50 percent more profitable.Most importantly, as they expand sales in emerging markets, companies will need to strike a balance between efficient global value chains and the responsiveness that more local autonomy can provide. One promising approach is to create \"micro\" operations that can flexibly customize products to meet the needs of local markets or customer segments, while basing them on the efficiency and expertise provided by a global platform.\nAdapting marketing and sales: Manufacturers need to consider carefully their approach to marketing and sales. Companies with fairly standardized products often rely on brokers, agents, and alliances to quickly create distribution channels. Some companies with more complex offerings instead develop their own sales forces or seek partnerships to reduce risk and lower the costs of after-sales service.In India, Cisco invested in a large sales operation due to fewer technology distributors and integrators that serve as important sales channels elsewhere (BusinessWeek, 2005d). Cell phone makers in China predominantly sell their products through retail stores and local distributors, rather than service providers as is common in the USA (Wall Street Journal Asia, 2004).But global manufacturers can also find opportunities that local players have overlooked. Dell recognized that more than 90 million people in China had internet access either in their homes or at work. Defying conventional wisdom, Dell's online PC sales have been growing rapidly (Wall Street Journal Europe, 2005).\nTailoring talent management: Manufacturing executives will need to rethink how they recruit, develop, deploy, and connect the skilled employees on which they rely. Once seen as an inexhaustible supply of low-cost labor, many emerging markets are now facing the same shortages of skilled labor that are all too familiar in developed countries. In addition to offering bonuses or other perks, strategies have included expanding recruiting outside the largest cities and providing additional training to less skilled hires.Providing opportunities for professional growth and advancement is critical. Philips encourages its researchers to publish papers and apply for patents, and offers global prizes and recognition for innovative work. The Indian automotive manufacturer Mahindra & Mahindra Ltd. has created a separate engineering services division, Mahindra Engineering Services, that allows its key engineers to expand their skills by working on a wide range of projects.\nConclusion: Emerging markets are becoming the catalyst for new product and service innovation. 
But tapping the talent and growth potential of these rising economies will require manufacturers to shed many of their assumptions about customer needs, product design, and innovation strategies that they have relied upon in the developed economies. Companies will need to look beyond traditional strategies in order to meet the needs of markets with significantly lower per capita GDP, while still leveraging the efficiency and expertise provided by their global networks. Those that do so successfully will thrive as the rules of global competition are rewritten in the years ahead.\n",
"role": "user"
},
{
"content": "Purpose: - Emerging markets are becoming the catalysts for new product and service innovation. Long-term success, however, will take far more than simply making minor adjustments to existing products, lowering prices, and developing new sales channels, according to a new report, \"Laboratories of innovation: leveraging emerging markets for commercial success\" by the Member Firms of Deloitte Touche Tohmatsu (Deloitte).\nDesign/methodology/approach: - Study by Deloitte Touche Tohmatsu.\nFindings: - The Deloitte study shows that companies will need to acquire a new set of competencies and organizational structures to generate a continuing stream of innovative products tailored to the needs of consumers and industrial buyers in emerging markets.\nPractical implications: - Companies will need to acquire a new set of competencies and organizational structures to generate a continuing stream of innovative products tailored to the needs of consumers and industrial buyers in emerging markets. They will need to allow for local autonomy, for example local R&D and local commercial operations, while taking advantage of their parent company's governance, business processes, and management expertise to offer these products at dramatically lower prices that match the lower purchasing power of most buyers in emerging markets.\nOriginality/value: - Successful companies are looking beyond traditional strategies to meet the needs of markets with significantly lower per capita GDP.\n",
"role": "assistant"
}
] |
Contextual analysis and newspaper archives in management history research
|
[
"Methodology",
"Mixed-methods",
"Research methods",
"Case study research",
"Trade publications"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Historians are currently engaged in a philosophical debate about the uses of methodology in management history and the relationship between theoretical/empirical perspectives and post-modern critiques of history writing. One aspect of these debates is the role of contextual analysis, which incorporates external situational influences in management research (Galvin, 2014). Another aspect is the role of newspaper archives as source material, especially in case studies, which require multiple sources to support data collection and analysis (Batiz-Lazo, 2016). This article explores an often neglected data source in management history research - newspaper[1] archives - and investigates, how newspaper data can provide relevant contemporary perspectives to support contextual analysis in management history research.\nLiterature review: Contextual analysis\nData and method: A survey of research in management history was undertaken to establish the extent to which management historians use newspaper archives as a data collection method and to explore how management historians can adopt contextual analysis as an element in their research methodologies.\nThe use of newspaper archives in the recent management history literature: The textual analysis of those articles, which used substantial data from newspaper archives provides evidence to support the proposition that newspaper archives can facilitate elements of contextual analysis in historical research, even though the authors did not formally research contextuality. The key newspaper archival sources used in these articles were broadsheets such as the Guardian and New York Times (Riad, 2014) and the Chicago Tribune (McLarty and Rosen, 2014); financial and business media such as Het Financieele Dagblad (de Jong et al., 2017); local newspapers such as the Yorkshire Post and York Evening Press Tennent (2017); specialist technical print media such as The Horological Journal (Proctor, 2013); and professional organisations' publications such as Industria Portuguesa and Pessoal (Matos, 2016).\nDiscussion and conclusion: The focus of this research is to examine \"how, if at all, can newspaper archives be used for the purposes of contextualisation in management history research?\" The evidence from the survey of management history articles suggests that newspaper archives can be used to research contextuality in management history research. The contextualisation theory discussed in the literature review provides several justifications as to why management historians might adopt contextual analysis in their research strategies. Such justifications include the fact that management organisations are open systems, which are influenced by external factors (Tsui, 2006); that a contextual environment fluctuates between different temporal periods (Zahra et al., 2014), which consequently influences changing contexts in management history research; that the \"meanings\" of different organisational phenomena change through time (Bamberger, 2008); and finally, contextual analysis engages in rich, qualitative research (Johns, 2006), which is also a feature of much management history research. This is a convincing argument for management historians to incorporate contextualisation, where appropriate, into their research strategies - and one data collection tool, which is available to help facilitate contemporary contextual analysis is newspaper archives.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to reveal, how newspaper archives can support contextualisation in management history research by providing quantitative and/or qualitative, accurate, contemporary and cost-effective, data which is not always available elsewhere.\nDesign/methodology/approach: The paper comprises a literature review, which summarises research into contextual analysis and newspaper archive theory; combined with content and textual analysis of articles published in the Journal of Management History and Management and Organizational History (2013-2017).\nFindings: The findings reveal that the concept of contextualisation is absent from recent management history articles and that few management historians use newspaper archival sources as a data collection strategy.\nResearch limitations/implications: There is compelling evidence to suggest that contextual analysis can - perhaps should - be incorporated into management historians' research strategies because managerial organisations operate in open systems, which are influenced by external factors.\nOriginality/value: This paper juxtaposes two neglected aspects of management history research, contextuality and newspaper archives, and proposes that a key source for historic contextual analysis is newspaper data.\n",
"role": "assistant"
}
] |
Assessing the internalization of the mission
|
[
"Leadership",
"Mission statement",
"Co-workers’ engagement",
"Mission internalization",
"Organizational alignment"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Mission statements, which are commonly defined under headings such as mission, vision, values, credo, and philosophy (Abrahams, 1999; Leuthesser and Kohli, 1997; Blair-Loy et al., 2011), capture the \"reason of being\" and identity of an organization (Wang, 2011). These statements are of great relevance to a company because they facilitate consistency in the definition and implementation of the company's strategy (Drucker, 1974; Pearce, 1982; McGowan 1986; Ireland and Hitt, 1992; E Sa et al., 2011), they serve as a source of motivation and emotional security for employees (Kanter, 2009), and they have a significant impact on company performance (Bart et al., 2001; Desmidt et al., 2011).\n2. Literature review about mission internalization: Internalization is phenomenon in which people accept a certain external influence and integrate it into their personal beliefs and values (Kelman, 1974). Internalization goes beyond acceptance or identification and occurs when ideas or practices presented to an individual are intrinsically satisfactory and congruent with the individual's own value system.\n3. Dimensions of mission internalization: A lack of mission internalization is not due to inadequate communication. Often, the actions carried out to communicate the mission - posters, videos, speeches, etc. - can be extremely helpful in encouraging employees to endorse the company's mission. Rather, the underlying issue is that the communication campaigns often fail to incorporate the necessary conditions to generate of internalization among employees.\n4. Methodology: The data collection was conducted through two phases. The first one was addressed to the two companies which are working in mission statement process. The first company is a market leader in spare auto part distribution that was established in Spain and that operates in Europe. The second company operates internationally in engineering construction activities, and its headquarters are located in South America. The sample provided by these two companies was used to conduct the entire analysis that resulted with the internalization scale. The survey was launched in January 2015, and 156 questionnaires were collected, from which only 132 were retained for our analysis (61 from the first company and 71 from the second company).\n5. Results: This section is composed of two subsections. The first proceeds with the exploratory analysis of the 38 items that measure internalization using the sample of 132 respondents from the two aforementioned companies and concludes with the definition of the five dimensions proposed for the scale. Next, the second subsection is devoted to reliability and validity analyses of the scales. Here, the analysis is expanded using a second, larger sample (n=400).\n6. Conclusions and contribution: The first finding in this paper is that IM is a multifactor construct. It is measured using a scale of 18 items arranged in five dimensions. The five dimensions are leadership, co-workers' engagement, importance, knowledge, and implication. Findings suggest that all of the dimensions must be met to achieve the IM, not just a few of them. Omitting one or several of these dimensions is could be the cause of failure in mission internalization.\n",
"role": "user"
},
{
"content": "Purpose: Many companies have a mission statement that they disseminate through corporate communication to stakeholders and particularly to employees. However, the communication action alone does not ensure that employees take true \"ownership\" of the mission. Having a mission and internalizing that mission are quite different. The purpose of this paper is to provide a scale to assess the internalization of the mission (IM). Additionally, the authors explore the relationship between IM and organizational alignment.\nDesign/methodology/approach: Based on previous research on mission internalization, the authors test the conditions necessary for reaching true mission internalization. A first sample of 132 managers from two companies was used for an exploratory analysis: thereafter, a second universal sample of 400 people was used to confirm the scale. Structural Equation Modeling was used to analyze the dimensions deployed in the latent IM construct. This construct has been examined as a second-order factor. A multi-group analysis across these two companies provides nomological validation of the IM scale.\nFindings: A scale of 18 items gathered under five dimensions is proposed. Accordingly, the findings are that IM comprises five dimensions: leadership, importance, knowledge, co-workers' engagement and implication. The five dimensions count equally for the IM.\nPractical implications: This study provides a useful measure to assess the IM. To achieve a good degree of internalization across employees, the employees must feel that the mission is worthy of engagement and that it is aligned with their personal values.\nOriginality/value: The paper addresses gaps in the current literature on mission statements regarding the effective implementation of the corporate mission. The results can serve as criteria for managers to obtain better IM for all employees.\n",
"role": "assistant"
}
] |
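The reliability analysis mentioned in the record above typically starts from Cronbach's alpha, computed as alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals). A minimal sketch with simulated data follows; the sample size, item count and noise level are made-up assumptions, not the authors' data, and their full analysis layered SEM and a second-order factor model on top of this step.

```python
# Illustrative computation of Cronbach's alpha for a multi-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                      # shared factor
    scores = latent + rng.normal(scale=0.8, size=(200, 4))  # 4 noisy items
    print(f"alpha = {cronbach_alpha(scores):.2f}")          # typically > 0.7
```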
Textual construction of comparative space: How analyst corporate governance reports redefine and create "best practice"
|
[
"Comparative space",
"Corporate governance reports",
"Editing",
"Inscriptions",
"Sell-side financial analysts",
"Textual reality construction"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Sell-side financial analysts (analysts hereafter) work for brokerage firms, analyse publicly traded companies, write equity research reports and seek to offer investors relevant information to assist them in making investment decisions (Fogarty and Rogers, 2005; Imam and Spence, 2016). In the existing corporate governance (CG) literature, analysts are often viewed as important constituents of \"external governance mechanisms\" and \"institutional actors\" of corporations (Aguilera et al., 2015; Hambrick et al., 2008). Both theoretical and empirical research highlights the capacity of analysts to monitor company activities, scrutinise corporate conduct and detect managerial misbehaviour (Healy and Palepu, 2001; Jensen and Meckling, 1976; Wiersema and Zhang, 2011; Yu, 2008; Zuckerman, 1999). In the past two decades, analysts based in the USA and UK have generated additional work products pertaining specifically to CG, namely, analyst CG reports. While institutional investors have expressed mixed views on the reports, and companies have responded to them by adjusting their governance practices (Gullapalli, 2004; Sweeney, 2004), academic research on them has been scant (but see Tan, 2014). This paper seeks to fill this gap and considers investigating analyst CG reports to be essential, as they may transform the way in which CG is assessed and suggest an extended CG role for analysts. Specifically, this paper focusses on analysing how analyst CG reports constitute a comparative space in which the governance systems of companies are evaluated and compared, and \"best practices\" are redefined and created.\n2. Analysing analyst CG reports: relevant literature and theoretical consideration: This section first outlines the relevant literature on CG that this paper seeks to build upon and extend. It then unpacks the theoretical lenses that inform the empirical analysis.\n3. Data and methods: This study draws on the CG reports produced by analysts as empirical data for three reasons. First, textual documents are used intensively as empirical material in social science research (Bryman, 2008). Prior sociological research in accounting and finance has attended closely to analyst reports (Beunza and Garud, 2007; Fogarty and Rogers, 2005; Tan, 2014). This paper follows this tradition. Second, while sociological accounting research addressing the constitutive potential of accounting inscriptions largely adopts a field-based approach, the contents and forms of the inscriptions are indeed examined (e.g. Qu and Cooper, 2011). More importantly, research demonstrating the construction of textual reality focusses on analysing the texts under investigation almost exclusively (Bloomfield and Vurdubakis, 1994, 1997; Young, 2006). This paper examines the \"reality\" constructed textually in analyst CG reports, instead of the use of these reports by financial market participants. An in-depth analysis of analyst CG reports is hence essential. Third, according to the Scandinavian institutional formulation of \"travel of ideas\" discussed earlier, as ideas circulate they are often represented in the form of written texts (Sahlin-Andersson and Engwall, 2002). Analyst CG reports provide a basis for examining how notions of \"best practice\" are reinterpreted and how governance information is modified.\n4. 
Textual construction of comparative space: This section demonstrates the textual construction of a comparative space in analyst CG reports, which includes four dimensions. First, the space is constructed in a highly edited and simplified form for its users. Second, the construction of this space requires the selection of certain \"building materials\", i.e., those governance issues included in the space that render companies amenable to evaluation and comparison. Third, by linking the range of governance issues chosen to certain formal regulations, companies are rendered governable and regulatory requirements are reinterpreted. Lastly, using different types of inscriptions, such as narratives and tables, the space highlights \"winners\", i.e., those companies which do better than others, and constructs their governance procedures as \"best practices\".\n5. Discussion and conclusion: It is only in the past decade or so that research on CG has pushed its frontiers forwards. In this regard, the role of sell-side financial analysts in CG, as important constituents of external governance mechanisms and institutional actors of corporations, has been explored. The present study has made a further contribution to this line of enquiry by examining the CG reports produced by analysts based in the USA and UK. Specifically, this paper has demonstrated that analyst CG reports construct a comparative space in which the governance systems of companies are evaluated and \"best practices\" created. It has suggested an extended CG role for analysts, namely, constructors of this comparative space, which goes beyond those roles suggested in previous studies.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to contribute to scholarly work on the role of sell-side financial analysts in corporate governance (CG). It examines the more recent work products pertaining specifically to CG that analysts based in the USA and UK have generated in the past two decades, namely, their CGCG reports. Specifically, this paper focusses on analysing how analyst CG reports constitute a comparative space in which the governance procedures of companies are evaluated and \"best practices\" are created.\nDesign/methodology/approach: This study involves a social constructivist textual analysis of 48 CG reports produced by analysts based in the USA and UK between 1998 and 2009.\nFindings: Analyst CG reports textually construct a comparative space comprising four dimensions. First, the space is constructed for some carefully edited users to evaluate the governance of companies. Second, the construction of this space requires the selection of \"building materials\", i.e., governance issues included in the space that render companies amenable to evaluation and comparison. Third, by linking the range of governance issues chosen to formal regulations, firms are rendered governable and regulatory requirements reinterpreted. Lastly, by using different types of inscriptions, such as narratives and tables, the space highlights \"winners\", i.e., those companies which do better than others, and constructs their governance procedures as \"best practices\".\nResearch limitations/implications: This research provides a first step towards an in-depth understanding of analyst CG reports. The insights from this paper generate a range of areas for future research, including how these reports are produced and used.\nOriginality/value: This paper adds to the existing literature focussing on the role of analysts in CG. It extends previous studies by examining the more recent and debatable work products generated by analysts, namely, their CG reports, and suggests an extended CG role for them. Theoretically, analyst CG reports are conceptualised as \"inscriptions\" that construct \"documentary reality\". The notion of \"editing\" is also drawn upon, to analyse a particular way in which documentary reality is constructed. Accordingly, this paper broadens the theoretical perspectives used in CG research.\n",
"role": "assistant"
}
] |
Withdrawing a declaration of will: Towards a framework for digital signature revocation
|
[
"Data security",
"Digital signatures",
"Extensible Markup Language"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Digital signatures are increasingly gaining momentum in our everyday electronic transactions. Signed documents in digital format have become commonplace for a wide range of information types, such as transaction records, books, scientific work, contracts and even governmental decrees. Digital signatures preserve basic security characteristics of a digital document, such as integrity and authenticity, while it is the principal expression of an author's intent. European law, the electronic signatures directive (European Union, 1999), further increases the value of electronic signatures by granting them legal validity equivalent to traditional hand-written signatures. Similar legislation exist in the US, Japan and the United Nations, rendering the digital signatures legally binding (Broderick et al., 2001).Various electronic signature schemes have been proposed. The main procedure is based on public key cryptography, where the signer encrypts (signs) a sequence of data using his/her private key and the verifier of the signature ensures the originality of the data by decrypting the signature using the public key of the signer and obtaining the original data (Rivest et al., 1978). From the first steps of public key cryptography until today, many value-added characteristics are enhanced, by integrating new technologies in the digital signature process. Hashing algorithms gave a solution to the computational efficiency of the signatures; digital certificates (Kohnfelder, 1978) and self-certified keys (Girault, 1991) provided the means for effective identification of the signer; the public key infrastructure (PKI) architectures build the necessary trust relationships and finally the time-stamping (Adams et al., 2001) and notarisation (Ansper et al., 2001) techniques providing additional proof that add value and longevity (Lekkas and Gritzalis, 2004) to a digital signature.The creation of a digital signature cannot be denied as an action, since it can be algorithmically proved, using cryptographic techniques. However, there are many weak points in the procedure of digitally signing data. The questions arising relate to who is using the signature-creation-data, whether he/she performs a wilful act and whether the software and hardware used for this action can be trusted. The objective of this paper is to investigate the legal and technical reasons why a declaration of will, denoted by a digital signature, can be cancelled and how this cancellation can be technically achieved. In section 2 we identify the weaknesses of the signature creation process and in section 3 we justify the necessity for a digital signature revocation mechanism from a legal and a technical view. In section 4 we examine previous works that relate to sorts of signature cancellation. In section 5 we propose technical solutions for establishing a signature revocation mechanism and for disseminating the relevant signature status information (SSI) to the relying parties. For a relying party, a digital signature is considered as revoked (and thus not valid) if revocation information exists and this information is properly validated. We finally present, in section 6, a comparison between the alternative proposals for signature revocation.\n2. Weaknesses and risks on the signature creation process: 2.1 Can we trust the signature-creation-devices?\n3. The emerging need for digital signature revocation: 3.1 A legal view\n4. 
Related work: 4.1 Technology related to signature revocation\n5. A proposed solution framework: 5.1 Alternatives for enabling signature revocation\n6. Technical comparison of signature revocation alternatives: A comparative summary of the proposed signature revocation framework against other alternatives, discussed in the previous sections, is presented in Table I. The alternatives taken into consideration are the generic signature revocation mechanisms described in section 5.1. The comparison includes:* the trivial case of signature deletion;* the signature revocation tokens in their stand-alone detached form;* the case of a \"revocation authority\" (see section 5.1);* the conditional signatures; and* the expiring signatures (see section 4.1).The comparison continues further in Table II, focusing on the SRT solution (labelled \"b\" in Table I) with the SSI distribution alternatives presented in section 5.3. The four variations of SSI taken into consideration are: (b1) the attached SRTs (see section 5.3), (b2) the existence of a central repository of SRTs (see section 5.3), (b3) the online SSI responder (see section 5.7.1) and finally (b4) the protocol for mutual signature revocation (see section 5.7.2).The applicability of the revocation scheme refers to the basic type of signed data for which the scheme is more suitable. The criteria for the applicability are the criticality of the data, its scope, its lifespan and the number of addressees.Some revocation schemes such as the \"expiring signature\" are addressed to short-lived signed data (e.g. buying a ticket) while the rest of the schemes are addressed to data preserved for long periods (e.g. a governmental decree).Another diversifying characteristic of the revocation schemes is the requirement for an online directory or service in order to obtain the signature status information. The SSI distribution efficiency depends on the ability of an arbitrary relying party to easily and quickly determine the status of a signature. Similarly, the characteristic of scalability refers to the limitations on the distribution of the SSI to a large number of relying parties.Some of the schemes require the establishment of additional protocols (further than a simple signature structure and its verification algorithm) either for the interpretation of a special data structure (e.g. attached signature revocation) or for the communication with an online request-respond service (e.g. online SSI protocol).Finally, some schemes require the existence of a trusted \"revocation authority\" either for executing the revocation procedure itself or for the provision of trusted signature status information.Based on the above technical comparison of the various alternatives, we may now draw up a conclusion that a scheme based on the proposed \"signature revocation tokens\" with a central repository for their archival and distribution (labelled \"b2\" in Table II) is the most efficient solution for an average digital signature application. This variation is superior in terms of data longevity, SSI distribution efficiency, scalability and additionally it does not require the existence of trusted authorities or the establishment of special protocols.\n7. Conclusions: Although the creation of a digital signature is deemed non-repudiable there are questions arising on whether the signer performs a conscious and wilful act, since the signature-creation-devices cannot always be trusted even if they are based on tamper-proof devices such as the PKI-enabled smart cards. 
There are several cases where a signer has to withdraw the declaration of will he/she made when creating a signature, such as delusion or fraud during the signature creation process, mutual cancellation of a contract or cease of a trust relationship. From the legal perspective, the revocation of a declaration of will and the cancellation of a legal act are identified in the civil law. The need for a signature revocation mechanism is emerging and technology must provide the necessary solutions.Related research has not provided any efficient solution to the problem as a whole. The proposed solution framework for digital signature revocation is based on the creation of a secure declaration by the signer that withdraws his/her initial intent, namely the signature revocation token. The key issue of this solution is the establishment of a proper mechanism for the dissemination of the signature status information to the relying parties, enabling them to reach a deterministic decision on whether a specific signature is valid or revoked.XML signatures may be extended to support the proposed solution, by incorporating into the signature data structure a referral to SSI. An online repository of signature revocation tokens or a relevant responder is deemed necessary. A relying party has to check the existence and the validity of a signature revocation, further to the validation of the digital signature itself, in order to conclude on the status of the signature. A solution based on the \"signature revocation tokens\" with a central public repository for their archival and distribution proved to be the scheme with the most comparative advantages against other alternatives. As a future work, a case study on the large-scale implementation of this solution will be performed and the relevant experimental results will be presented.\n",
"role": "user"
},
{
"content": "Purpose: - The objective of this paper is to investigate the legal and technical reasons why a declaration of will, denoted by a digital signature, can be cancelled and how this cancellation can be technically achieved.\nDesign/methodology/approach: - Proposes a technical framework for establishing a signature revocation mechanism based on special data structures, the signature revocation tokens (SRT), and investigates the alternatives for disseminating the signature status information (SSI) to the relying parties.\nFindings: - A relying party has to take into consideration the possible existence of a signature revocation, in order to decide on the validity of a digital signature. A scheme based on a central public repository for the archival and distribution of signature revocation tokens exhibits significant advantages against other alternatives.\nOriginality/value: - Identifies various intrinsic problems of the digital signature creation process that raise several questions on whether the signer performs a conscious and wilful act, although he/she is held liable for this action. The law faces the eventual right of the signer to claim a revocation of a previously made declaration of will, especially in cases of an error, fraud or duress.\n",
"role": "assistant"
}
] |
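To make the record above concrete, the following is a minimal sketch of the "signature revocation token plus central repository" scheme the paper favours. The token format, repository layout and function names are illustrative assumptions, not the paper's concrete XML-based proposal; a production scheme would also verify the token's own signature and timestamp. The sketch assumes the third-party Python 'cryptography' package.

```python
# Illustrative sketch: sign a declaration, later revoke it, and let a
# relying party check both the signature and the revocation repository.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the central public repository of revocation tokens:
# maps a signature's digest to the signed withdrawal (the SRT).
REVOCATION_REPOSITORY: dict[str, bytes] = {}

def sig_id(signature: bytes) -> str:
    """Identify a signature by its digest, so tokens can reference it."""
    return hashlib.sha256(signature).hexdigest()

def revoke(signer: Ed25519PrivateKey, signature: bytes) -> None:
    """Signer publishes a signed withdrawal of the earlier declaration."""
    token = signer.sign(b"REVOKE:" + bytes.fromhex(sig_id(signature)))
    REVOCATION_REPOSITORY[sig_id(signature)] = token

def verify(public_key, document: bytes, signature: bytes) -> str:
    """Relying party: validate the signature, then consult the repository."""
    try:
        public_key.verify(signature, document)
    except InvalidSignature:
        return "invalid"
    if sig_id(signature) in REVOCATION_REPOSITORY:
        return "revoked"  # an SRT exists: the declaration of will is withdrawn
    return "valid"

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    doc = b"I agree to the terms of this contract."
    sig = key.sign(doc)
    print(verify(key.public_key(), doc, sig))  # valid
    revoke(key, sig)
    print(verify(key.public_key(), doc, sig))  # revoked
```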
Government legislation reduces sickness absence in Slovakia: The 2003 Act, cutting the amount of compensation for ill workers, has seen large reductions in absences
|
[
"Legislation",
"Slovakia",
"Compensation",
"Absence",
"Reduction",
"Sickness"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: A university study has shown that the 2003 legislation introduced by the Slovakian Government dramatically reduced the amount of sickness absence. The Act number 462/2003 brought in a reduction of illness benefits in Slovakia for insured persons, suggesting that the decreased sums available made employees more reluctant to call in sick. However, the Trnava University researchers cautioned against a simplistic interpretation of the results. They said the legislation could also have a negative effect if more employees went to work when genuinely ill.\nOther factors: The level of compensation was just one of the many factors, however. The authors cited studies showing how the rates of sickness absence were much lower among higher earners. Men in the lowest grades of earners are six times more likely to go off sick than those in the highest grades, according to one study. The lowly paid cannot afford to be off work if their rates of pay are further cut, but this depends also on the levels of compensation available in a particular country. Meanwhile, economic downturns with high levels of unemployment see much less sickness absence than periods of high employment because workers are more afraid of losing their job and being unable to find another one. The phenomenon is known as \"presenteeism\", when employees report for work even when ill.\nLarge reductions: The most common illnesses were respiratory diseases. In 1997, 43,661 out of every 100,000 insured persons who went off sick had a respiratory illness. But this had fallen to 9,927 out of 100,000 in 2012. The drop had begun as soon as the Act came into force - between 2003 and 2004, the number of cases fell by 50 per cent and it has continued to fall. The next most common illnesses were musculoskeletal complains and these dropped from 16,177 out of 100,000 in 1997 to 5,638 out of 100,000 in 2007. Again, the effect of the legislation was instant with a drop of 40 per cent from 2003 to 2004. Injuries and accidents outside the workplace came next and they fell from 5,910 out of 100,000 in 2000 to 2,219 in 2013. They also fell significantly between 2003 and 2004 by 27 per cent.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to review the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies.\nDesign/methodology/approach: This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.\nFindings: The authors found that sickness absences had reduced significantly since the 2003 Act was introduced in Slovakia, cutting the amount of compensation for ill workers.\nPractical implications: The paper provides strategic insights and practical thinking that have influenced some of the world's leading organizations.\nOriginality/value: The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.\n",
"role": "assistant"
}
] |
An investigation of mothers' and tween daughters' clothing preferences and purchase intentions toward a prosocial clothing company
|
[
"Clothing",
"Consumer socialization",
"Tweens",
"Daughters",
"Mothers",
"Prosocial messaging"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Identified as a distinct target market as early as 1987 (Hall, 1987), tween consumers represent \"youth's middle tier\" and demonstrate a desire to stay abreast of the latest product and brand trends as well as the capacity to \"wield influence over the family purse\" (Cook and Kaiser, 2004, p. 217). Although the age range associated with the tween market varies, authors frequently define this market as spanning ages seven or eight to 13 or 14 years (e.g. Cotton Incorporated, 2012; EPM Communications Inc, 2008). The growing importance of the tween market to consumer goods companies is evidenced by the fact this market accounts for over $40 billion in spending annually (EPM Communications Inc, 2010). Tween girls, in particular, are of special interest to marketers, as they outspend their male counterparts, exercising their purchasing power in product categories such as clothing and accessories, make-up, music, games, and phones (Leung, 2009; Tweens and the Retail Market, 2009). As tween girls engage in such purchases, their decisions may be shaped by a variety of factors, including their perceptions about a company's socially responsible business practices as well as their interactions with others, particularly their mothers (Grant and Stephen, 2005; Tweens, Teens and Social Responsibility, 2011).\nLiterature review: Tween girls' clothing preferences and purchasing behaviors\nEducational engagement: Educational engagement has been conceptualized in terms of one's academic investment, motivation, and commitment to learning (London et al., 2007). Because educational engagement involves the articulation of personal goals for the self (i.e. possible future selves; Markus and Nurius, 1986), it is linked to the development of identity. As noted within the discussion of the proximity of clothing to self, individuals use dress to construct and develop identity (see also Elliott and Wattanasuwan, 1998). For adolescents, in particular, clothing consumption is closely tied to self-concept and can be used as a means of self-expression as well as to enhance confidence (Piacentini and Mailer, 2004). Although unexplored to date, it therefore follows that some adolescents may use clothing to express educational engagement and/or to enhance feelings of academic aptitude; this is the premise upon which Pretty Brainy is founded. The present work examines the role of tween girls' educational engagement in shaping purchase intentions toward the Pretty Brainy brand.\nTheoretical framework: consumer socialization: Consumer socialization refers to the \"processes by which young people acquire skills, knowledge, and attitudes relevant to their functioning as consumers in the marketplace\" (Ward, 1974, p. 2). Central to the concept of consumer socialization are the kinds of attitudes, knowledge, and skills that children and adolescents acquire; the means by which these attitudes, knowledge, and skills are acquired; and the ways in which these attitudes, knowledge, and skills may change over time. In particular, through consumer socialization, children and adolescents gain product and brand knowledge, shopping knowledge and skills, purchase decision-making skills, and awareness of and attitudes toward advertising (see John, 1999; Ward, 1974). The development of these attitudes, knowledge, and skills may occur through formal or informal learning (e.g. 
observation, modeling, purposive teaching) as a result of exposure to and interaction with a variety of socialization agents, such as parents, siblings, peers, media, and school (Grant and Stephen, 2005; Ward, 1974).\nMethod: A two-part online survey was administered to a national, convenience sample of mother-daughter pairs. The sample for this research was provided by a market research firm, which recruited tween girls aged 11-14 years and their mothers for participation in this study. Potential participants were recruited through multiple web sites, and the survey was administered online via a link that was e-mailed to those who agreed to participate in the study.\nResults: In all, 106 mother-daughter pairs participated in the online survey. The mean age for the mothers was 42.47 years (range: 26-64 years), and the mean age for the daughters was 12.53 years (range: 11-14 years). Approximately 35 percent of the mothers were high school graduates; 55 percent had earned a college degree; and almost 10 percent had earned a graduate degree. The mean scores for daughters' educational engagement were 5.03 for math and 5.25 for science. The annual household incomes reported by the mothers were <$25,000 (19 percent); $25,001-$50,000 (29 percent); $50,001-$75,000 (25 percent); $75,001-$100,000 (13 percent); and >$100,000 (14 percent). Approximately 69 percent of the participants were Caucasian, and approximately 30 percent were non-Caucasian.\nDiscussion and conclusions: Findings provide implications for both researchers and practitioners. Results contribute to the literature by enriching understanding about the influence of mothers upon tween girls' clothing consumption behaviors, helping to clarify inconsistencies in prior work as well as lending new insights into the role of mothers in socializing their daughters with respect to socially responsible clothing consumption. In particular, results demonstrated that mothers and their daughters assigned similar degrees of importance to several clothing characteristics examined in this study - including aesthetics/design, comfort, fit, use of socially responsible construction/production methods, and use of high performance technology. Further, both mothers and daughters identified the same characteristic - fit - as most important in shaping their clothing purchase decisions. As such, findings provide support for work by Brock et al. (2010) and Sancheti (2009), which suggests that fit may be an especially important attribute in shaping clothing purchase decisions, and address prior inconsistencies regarding the relative importance of aesthetics and comfort to both mothers and daughters. Taken together, these findings also provide support for the consumer socialization perspective, and in particular, the notion that parents may transmit product knowledge and purchase decision-making skills to their children (John, 1999; Ward, 1974). Additionally, mothers' and daughters' responses to the socially responsible clothing company, Pretty Brainy, were similar with respect to their evaluations of the company's clothing, web site, and brand, providing additional support for the premise that mothers may socialize their daughters to adopt specific product and brand attitudes (John, 1999; Ward, 1974). As such, in various ways, findings suggest that the consumer socialization process may encompass consideration for socially responsible business practices. 
Thus, findings provide an implication for marketers of tween clothing brands, who may benefit from invoking a dual marketing strategy in which they communicate the socially responsible aspects of their brand to tween girls as well as to their mothers (who may reinforce the importance of this message to their daughters).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to invoke a consumer socialization approach to compare mothers and tween daughters on variables that may shape their clothing preferences and consumption behaviors. Additionally, this study explored the variables that predict how mothers and daughters respond to Pretty Brainy, an online clothing company that incorporates prosocial messaging into its product design and marketing.\nDesign/methodology/approach: Data were collected from 106 mother-daughter pairs using an online survey. In completing the survey, participants were instructed to view and evaluate the Pretty Brainy brand web site and clothing.\nFindings: Mothers and daughters assigned similar degrees of importance to several clothing characteristics - including aesthetics/design, comfort, fit, use of socially responsible construction/production methods, and use of high performance technology - providing support for the consumer socialization model. Mothers and daughters were not similar on all clothing constructs explored, however, differing on the importance of some clothing characteristics examined (notably, brand name), as well as past socially responsible clothing purchase behavior, clothing involvement, and proximity of clothing to self. These differences may be explained in part by the mothers' and daughters' respective life stages. Among both mothers and daughters, attitude toward brand was the strongest predictor of purchase intention toward Pretty Brainy, which is consistent with established models of the attitude-intention relationship.\nOriginality/value: This study provides understanding about the influence of mothers upon tween girls' clothing consumption behaviors, helping to clarify inconsistencies in prior work as well as lending new insights into the role of mothers in socializing their daughters with respect to socially responsible clothing consumption.\n",
"role": "assistant"
}
] |
Smart spare parts management systems in semiconductor manufacturing
|
[
"Industry 4.0",
"Demand forecast",
"Smart factory",
"Spare parts management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Hannover Messe 2015 showcased the next steps of Germany to secure its manufacturing footprint in the high labor cost areas, and enhance the competitive position of its local manufacturing companies. The vision of a smart factory specified in Industry 4.0 pointed out the direction of a future production system. To realize the vision, a cyber-physical system plays a key role and has to be achieved through the new technologies, such as Internet of Things (IoT) and big data analytics. In order to reduce the labor and inventory costs, a semiconductor company in Singapore has requested us to develop a smart spare parts management system for them in the Industry 4.0 environment.\n2. Literature review: Industry 4.0 receives more and more attention from practitioners and academics (Lee et al., 2014, 2015; Shrouf et al., 2014; Tan and Chua, 2016) since it was announced as one of the key initiatives of the high-tech strategy in German in 2011 (Kagermann et al., 2013). According to the survey by Deloitte (2014), most Swiss manufacturing companies believe that their global competiveness will be boosted by Industry 4.0. Singapore Government is also looking to foster Industry 4.0 revolution, which can improve the productivity of manufacturing companies and address its stringent labor issues (e.g. high cost and aging workforce) (Tan and Chua, 2016). Lee et al. (2014) described two trends that accompany Industry 4.0, i.e. manufacturing service transformation and industrial big data, and proposed a framework for self-aware and self-maintained machines. Shrouf et al. (2014) presented the architecture for IoT-based smart factories in Industry 4.0 and the main characteristics of the factories. He also provided a method to support the IoT-based paradigm in energy management. Lee et al. (2015) provided a five-level architecture as a guideline for cyber-physical systems, which are referred to as the transformative technologies that manages interconnected systems between physical processes and computation.\n3. Smart spare parts management system: In a semiconductor fab, it is a challenge to prepare the right quantity of spare parts for the right machine at the right time due to the demand and supply uncertainties. The current structures to manage the two types of spare parts are illustrated in Figure 1. While the demand of consumable spare parts can be triggered by usage-based or time-based PM, the demand of contingent spare parts is triggered by machine breakdowns. As shown in Figure 1, a fab often sources the consumable spare parts from multiple suppliers and the contingent spare parts from a particular supplier. Due to their distinct natures, the inventory management systems for the consumable and contingent spare parts are developed, respectively.\n4. Conclusion: In this study, we propose a smart inventory management system, which aims to prepare spare parts for the right machine at the right time with the right quantity through the new technologies, such as IoT and big data analytics, in a semiconductor company. To achieve the goal, the smart use of information plays a critical role. Recent advances in data acquisition capability and predictive technologies have paved way for taking advantages of the information from machines, shop floors and external suppliers.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to propose a smart spare parts inventory management system for a semiconductor manufacturing company.\nDesign/methodology/approach: With the development of the Internet of Things and big data analytics, more information can be obtained and shared between fabs and suppliers.\nFindings: On the basis of the characteristics of spare parts, the authors classify the spare parts into two types, the consumable and contingent parts, and manage them through a cyber-physical inventory management system.\nOriginality/value: In this new business model, the real time information from machines, shop floors, spare parts database and suppliers are used to make better decisions and establish transparency and flexibility between fabs and suppliers.\n",
"role": "assistant"
}
] |
A hybrid risk management model: a case study of the textile industry
|
[
"Hong Kong",
"Textile industry",
"Risk management",
"Hierarchical holographic model",
"Enterprise‐wide risk management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: In today's business environment, many uncertainties are affecting the business operational environment of an organisation. Many of them are unpredictable and would occur suddenly, they include financial crisis, terrorist attacks and natural disasters. These risks can have significant impact on both the short- and long-term performances of a business organisation (Tang, 2006) and organisations are likely to suffer loss if they do not actively manage these risks.Risk management is a pro-active approach to identify, analyse, and manage all potential risks faced by a company. To sustain a business in today's environment, it is necessary to assess the risks faced by the organisation and develop contingency plans to mitigate consequences and assure continuation of critical processes (Pai et al., 2003). A great variety of tools and techniques were introduced in the past decade to deal with the risk management (Beasley et al., 2005; Jiang et al., 2009). Hierarchical holographic modelling (HHM) is a holistic and comprehensive analytical method the aim of which is to capture and represent the essence of inherent diverse characteristics and attributes of a system. On the other hand, enterprise-wise risk management (ERM) provides a framework that includes methods and processes used by organisations to manage risks that directly related to achievement of their objectives. Risk filtering, ranking, and management (RFRM) assesses all critical risks systematically. To develop a comprehensive and systematic approach in risk management, HHM, ERM and RFRM could be combined to form a hybrid risk management model, which bridges the gap between industrial level risk management and enterprise level risk management. The hybrid model extracts the potential risk scenarios from holographic view and filters them for particular case organisation.The aims of the hybrid risk management model are:1. to identify potential risk scenarios in the business environment of an industry;2. to filter potential risk scenarios in business operations of a company;3. to rate risk scenarios in risk matrix; and4. to develop and select proactive activities that might minimise, or even prevent any negative impact from adverse risks.This paper first provides a review on risk management, HHM, and ERM. Second, a hybrid risk management model is illustrated using a case study. The findings are discussed and conclusions are drawn from the application of the proposed methodology.\n2 Literature review: 2.1 Risk management\n3 Hybrid risk management model: Years of experience have taught us that problem identification is more important than problem solving (Corbett, 2004). However, risks identification is not an easy task and many risk managers perform their duties at the front-line level. They consult line managers in the design and implementation of risk management programs (Corbett, 2004). Thus, line managers can only identify the risks at the front-line level and raise issues of how to improve risk management practice.Risk scenarios should be identified with a holistic view. The potential risks can be identified in the industry level with the use of HHM approach. It includes all possible risks that can happen in the entire industry which are relevant to the organisation's business operations and objectives. However, not all the potential risks identified by HHM approach have the same effect on different enterprises in the same industry. 
Therefore, the ERM approach is applied to identify the risks that are faced by the enterprise. It is always rather abstract to say how important the risk is and thus, it is necessary to convert the level of seriousness of the risks from subjective to quantitative measures. RFRM is presented to assess the risk rating, in terms of the consequence of hazard and likelihood of occurrence. The proposed hybrid risk management model integrates the approaches of HHM, ERM and RFRM is shown in Figure 1.In the proposed methodology, a quantitative definition of risk R, defined by Kaplan and Garrick (1981), is suggested to be used in terms of the ideas of a \"set of triples\": Equation 1 Where Si denotes the ith \"risk scenario\" identification, Li denotes the likelihood of that scenario, and Xi denotes the resulting consequences of the scenario, i.e. the measure of damage. Once the risk scenarios are identified, the likelihood of the risk scenario refers to how likely is that the risk scenario will happen. What the consequence will be if it does happen are then determined and the risk rating is calculated with equation (1). Based on the risk level assessment result, risks are classified into categories and further actions are then determined.\n4 Research method: A case study was conducted to show how the hybrid risk management model can be applied in real-life business context settings. Data was collected through semi-structured interviews of four key personnel working in the case company, within a risk management workshop at the beginning of year 2008. Semi-structured interviews with open-ended questions encouraged the interviewee to expand and interviewee is likely to discuss more on their feelings and attitudes as well as facts (Jennerich and Jennerich, 1997). Semi-structured interviews are appropriate to gain in-depth exploration into ideas and relationships not previously considered (Kerlinger, 1986). The research question of the study is \"how to apply the hybrid risk management model to identify, analyse, and control the potential risk scenarios in a textile enterprise?\"Based on the industry and company background, the case study objectives are:1. to analyse the applied technique and strategy of risk management in the organisation;2. to identify the main risks and crisis within the entire industry;3. to enable the management and staff in the company to better understanding the risk and to appreciate the importance of risk management; and4. to identify the potential risks and to be aware of where those risks might originate, and to know what action should be taken to prevent them from happening.\n5 Case study: ABC (name disguised to protect identity as agreed), a glove manufacturer which belongs to the textile industry, is used to demonstrate how the hybrid risk management model is applied. Through this case, various common risks and particular risks will be identified; risk and crisis will be shown to occur under certain structures; and some reasonable solutions be recommended.5.1 Industry background\n6 Discussion: After conducting the case study, it is clear that the textile industry has faced lots of risks. They include economic, reputation, resources, operations, environmental, market, policy, managerial and financial risks. If the company is well-prepared before the risks are materialised, they can turn risks into opportunities. The proposed hybrid risk management model enables the company to have a better planning and gain competitive advantage over its competitors. 
Once the company faces the potential risks, actions must be taken immediately to reduce potential loss.Beside the risks identified by the company through the HHM and ERM approach, there are several points the company should be aware of. The company should work out an action plan to transfer or mitigate following risks:* Global warming. It seems that the GHE will continue and it will have an adverse impact on the sales volume of those gloves the major function of which is to keep the hands of the wearer warm. In order to transfer this GHE risk, it is suggested that the company should invest more in R&D. The company can focus on designing gloves with various novel functions.* Product safety and quality issues remain a concern in export. Product recall is a very serious problem in Asia, especially in Mainland China. Products are often rejected because of poor quality and reduced safely standard. ABC can take the advantages of using well-established QC systems for quality control and assurance.* Relying so much on exports (focusing only on the US and Europe markets) is risky for the company. ABC should expand its market and enter other markets to acquire more market share. It is well known that China is a potential market for consumer goods. ABC should work out a business plan to capture opportunities in local market.Through the case study, the hybrid risk management model has been presented to provide a clear direction and guidelines for risk management. Normally, the risk management model is performed by the top management team. One of the key factors in managing risks and being successful in reducing risks is for the organisation to provide a well-established internal culture of risk management. This requires staff to have an in-depth understanding of the organisation's business environment and the operating situation within the entire supply chain. Another critical issue for the hybrid risk management model is the qualitative assessment method. Since there is a lot of subjectivity in scores given, a different person making the same assessment may come up with a completely different conclusion. The success is a matter of judgment dependent on the expertise of the risk management team. In order to address this issue, appropriate risk management training is important. On the other hand, multiple factors may be employed to evaluate risks in order to increase the degree of objectivity. Different evaluation criteria such as costs and resources required may be added to facilitate team members to assess the risks. Providing clarifying detail to score can help team members less subjective the scoring. Furthermore, risk management team should standardise its team rules to reach consensus from all team members. However, the presented hybrid risk management model does not consider any strategies to promote the risk management concept to the staff. They may not know what to do when risks come. Thus, the staff should obtain more understanding on the actions planned for the organisation. The management team should promote desired risk culture, frame risks in the context of strategy and activities.\n7 Conclusion: The hybrid risk management model provides a comprehensive risk analysis tool using a top-down approach that helps an enterprise to gain a better understanding of its risks and have a better chance of success. By applying the HHM and ERM tools to identify the risks, the company can prioritise risks so that options to mitigate the risk can be appropriate considered and addressed. 
Most of the uncertainties and risks that the glove manufacturing industry is facing have been identified by HHM, the most likely and high level risks in the industry are identified. ERM method was applied to identify the enterprise's risks, triggers, and current mitigation strategies. Furthermore, some additional outstanding risks are provided for the case company to either avoid or transfer those risks. The proposed model not only provides theoretical merits to the literature, but also applicable to different industries for risk management practices.Today, customer has higher expectations in terms of speed of delivery and service quality. It is challenging for a company to sustain efficient product flow and on time delivery simultaneously. In order to ensure the flexibility of the supply chain, a company should have internal competencies to manage its external partners. This is important in the areas of customer relationship management and supplier relationship management, where enterprises need to establish a trusting and long term relationship with those external partners. Therefore, further research is required to combine the risk management model with the concept of customer relationship management and supplier relationship management.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to propose a hybrid risk management model, focusing on identification and evaluation of potential risk scenarios in industry/enterprise level, which assists in preventing negative impacts from adverse risks.\nDesign/methodology/approach: - The proposed hybrid risk management model embraces the concept of hierarchical holographic modelling (HHM), enterprise-wise risk management (ERM) and risk filtering, ranking, and management (RFRM) that could be applied in real commercial settings. A case study is conducted in order to validate this comprehensive theoretical model.\nFindings: - This study shows the potential risks that may be faced by the textile industry in Hong Kong. Corresponding responses are suggested for the risks in different levels, which provide a systematic approach in managing the risks.\nResearch limitations/implications: - The use of a single case study may limit the generalizability of the findings.\nPractical implications: - The risks suffered by the textile industry are identified through the case study, which provide an insight for better planning and preparation, so as to gain a better chance of success than that of competitors.\nOriginality/value: - The proposed model does not only provide theoretical merits to the literature but can also be applied to different industries for risk management practices.\n",
"role": "assistant"
}
] |
Apprenticeship bullying in the building and construction industry
|
[
"Workplace",
"Bullying",
"Victims",
"Perpetrators",
"Apprenticeship",
"Apprentice"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: In 2014, the number of Australian apprentices and trainees in training was 316,400 (National Centre for Vocational Education Research; NCVER, 2015), with the majority of apprentices commencing their studies being male and aged 19 years or less. Most apprentices were enroled in training related to the building and construction industry (e.g. carpentry, electrical trades). Yet despite the large number of individuals commencing their apprenticeship, many individuals do not complete it. In fact, approximately half of all Australian apprentices do not complete their apprenticeship (Bednarz, 2014). Whilst non-completion can be attributed to negative workplace experiences and personal reasons (Bednarz, 2014; Smith et al., 2005; Snell and Hart, 2008), one specific aspect of negative workplace experiences that has received very little attention by researchers is apprenticeship bullying, despite the fact that \"difficulties with bosses and other employees\" is one of the main contributors to non-completion.\nMethod: Participants\nResults and discussion: Based on the lack of research on apprenticeship bullying as well as several measurement issues associated with design of quantitative tools to operationalise bullying (see Cowie et al., 2002), we argue that an exploratory qualitative design is required to provide further insight into the definition and prevalence of apprenticeship bullying, as well as the antecedents, consequences and coping behaviours associated with apprenticeship bullying.\nImplications: The findings from the current study have important implications for apprentices, other employees, bosses, employers and organisations, the building and construction industry and researchers.\nLimitations and future research: Although this study has provided an important contribution to the literature on apprenticeship bullying, the study was not without its limitations. First, we did not ask participants in this sample whether they were perpetrators, bystanders or targets of apprenticeship bullying themselves. It is reasonable to assume that personal experiences of bullying will influence their interpretation of events, and that it is recommended that future studies include data from bullying perpetrators, targets and non-targets in order to compare differences in perceptions and observations. Given the debate around subjective vs objective assessment of bullying (see Einarsen et al., 2011), the inclusion of other qualified employees and bosses may provide some insight into the perspective of perpetrators and bystanders. Indeed, this would potentially be another way of validating some of the findings in past research, as well as raise some important questions about whether or not bullying has occurred, and whether or not someone is a victim of bullying. For example, it may be worthwhile to explore whether bosses and bystanders have different interpretations of whether a behaviour is banter or bullying.\nConclusion: In conclusion, this study has provided clarification on the characteristics and behaviours associated with apprenticeship bullying from the perspective of past apprentices. Moreover, it has highlighted the difficulties in interpreting behaviours that may or may not be considered bullying. 
This study also identified apprenticeship bullying as a common problem in the building and construction industry, and that specific personal attributes of apprentices and the culture and leadership style within an organisational setting exacerbate the likelihood of being targeted. The consequences of apprenticeship bullying affect not only the individual in terms of their psychological health and work-related experience and outcomes, but the also the organisation where a bullying culture impacts on others in the workplace. Apprentices tend to use avoidance-based coping to deal with being targeted, though it is recommended by past apprentices that they approach the problem in order to deal with it. To help them do this, however, more education and training is required both for apprentices, but also for employers and others in the workplace about clarifying what is apprenticeship bullying, how to report it, and the preferred approach to stop it or prevent it from happening. Much larger research studies should also be undertaken given the importance of this topic.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore the definition, prevalence, antecedents, consequences and coping behaviours associated with apprenticeship bullying in the building and construction industry.\nDesign/methodology/approach: The sample comprises 13 participants aged between 22 and 27 (M=23.8, SD=1.26) who were all men who had completed their apprenticeship within the past five years. Participants completed an individual semi-structured interview in which several themes and sub-themes were identified.\nFindings: Four characteristics (intention to harm, imbalance in power, repeated behaviours, target interpretation) and two types (personal, work-related) were identified by participants as fundamental to the definition of apprenticeship bullying. Being the target of bullying was found to be commonplace during an apprenticeship. Several antecedents were identified at the individual level (age, personality, physical appearance, work ethic, work performance) along with factors at the organisation level (culture and leadership style) that determined whether bullying would occur. The consequences of apprenticeship bullying included poor individual (personal and work-related) and organisational outcomes. Targets of apprenticeship bullying also tended to use avoidance coping.\nOriginality/value: This study has provided clarification on the characteristics and behaviours associated with apprenticeship bullying, as well as identified the antecedents and consequences of apprenticeship bullying from the perspective of past apprentices in the building and construction industry.\n",
"role": "assistant"
}
] |
High performance work systems and organization attraction: The moderating effects of vocational interests
|
[
"Recruiting",
"High-performance work systems",
"Organization attraction",
"Vocational interest"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Attracting and recruiting qualified applicants is a critical component of organizational success (Carroll et al., 1999; Chapman et al., 2005). Organization attraction, which refers to how a prospective employee assesses the desirability to work for a particular company (Hannon, 1996), can be an important factor in recruitment. Organizational attraction has also been referred to as company employment image (Highhouse et al., 1999). Over the last several decades, researchers have identified numerous predictors for organization attraction, such as corporate social performance, employer brand, diversity practices, advancement opportunities, organizational culture, familiarity, job characteristics, and location (e.g. Amber and Barrow, 1996; Turban and Greening, 1997; Highhouse et al., 1999; Lemmink et al., 2003; Lievens et al., 2005; Lievens and Highhouse, 2003; Williamson et al., 2008).\nTheory and hypotheses: Organization attraction\nMethod: Sample and procedure\nResults: Confirmatory factor analysis was conducted using the AMOS 21.0 software to assess the discriminant validity of the six vocational interests in the RIASEC framework. To reduce the number of indicators, we used three-item parcels for those measures with more than three items to achieve a better item-to-sample size ratio (our ratio is 4.3, which is relatively low (Hair et al., 1995; Hogarty et al., 2005). Parceling can be particularly effective when the items from a unidimensional scale are parceled, and if a research focusses on the relationships among latent variables instead of understanding the relationships among items (Little et al., 2002), which is the case for our study. We evaluated the model fit by using kh2 statistics, kh2 to degrees of freedom ratio, comparative fit index (CFI), Tucker-Lewis index (TLI), and root-mean square error of approximation (RMSEA). The six-factor model provided a generally favorable fit to the data (kh2 (120, 171)=222.36, p\nDiscussion: Organization attraction is a popular topic of interest for HR researchers because it is presumed to have an important role in recruiting highly qualified job applicants and in enhancing organizational performance. The current paper contributes to this area of study by showing that HPWS is positively associated with organization attraction and that social vocational interests moderate the relationship. Our findings offer several important theoretical implications for studies on HPWS and organization attraction, as well as suggest additional opportunities for future research.\nConclusion: This study contributes to our knowledge of the effects of HPWS on organization attraction. In addition, job applicants' social vocational interest plays an important role in strengthening the relationship between HPWS and organization attraction. We call for future research to develop a more responsive theory to shed light on how specific HR practices or bundles interactively affect organization attraction. It would be also interesting to examine how job applicants' other characteristics beyond vocational interests such as socio-demographic characteristics and personality play its roles in responding to organizations' HR practices or bundles during the job search processes.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to test the effect of human resource systems on organization attraction. Furthermore, the authors theorize and test how the vocational interests of prospective employees can serve as boundary conditions that affect the relationship between high-performance work systems (HPWS) and organization attraction.\nDesign/methodology/approach: To achieve these ends, this study conducts a scenario-based experiment with prospective employees to examine the effects of HPWS and vocational interests on organization attraction.\nFindings: The authors demonstrated that HPWS is an important feature for organization attraction. Despite the generally positive linkage between HPWS and organization attraction, the most important implication of the findings is that job applicants also have an important role in responding to the features being used by a firm to attract applicants through HPWS. For example, potential job applicants with higher (rather than lower) social vocational interests are more likely to be attracted to the HPWS of firms.\nResearch limitations/implications: This study has limitations that must be considered. In particular, the authors treated HPWS as a unidimensional construct. Given the study design, it is unclear whether the attraction effects are driven by HPWS as a whole or whether they are being driven by any single or multiple component(s) of the system. Future research needs to consider examining how specific practices are matched with specific vocational interests by using multiple scenarios where they bundle different high-performance work practices. Doing so would further the understanding of which specific practices affect attraction and for whom.\nOriginality/value: This study contributes to the authors' knowledge of the effects of HPWS on organization attraction. In addition, job applicants' social vocational interest plays an important role in strengthening the relationship between HPWS and organization attraction.\n",
"role": "assistant"
}
] |
Potential pitfalls of process modeling: part A
|
[
"Organizational processes",
"Process management",
"Modelling"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Process modeling is a widely-used approach to achieve the required visibility for existing processes and future process scenarios as part of business process improvement projects. The intellectual challenges related to process modeling keep many academics entertained and a plethora of tools, methodologies and educational material in the form of publications and seminars is available. However, process modeling has also strong opponents. It is criticized for being over-engineered, time-consuming, costly and without (sufficient) value. Thus, the challenge is to find the right level of modeling for the underlying purpose. I am strong supporter of process modeling, but this paper is not about the advantages of modeling. It is dedicated to the typical pitfalls of process modeling. Awareness of the main challenges is often a better secret of success than blindly following recommendations why we should do it. In general, it has been proposed to increasingly learn from failures in order to derive a list of those factors that characterize true successful practice (Denrell, 2005).\n1 Process modeling in the twenty-first century: Flowcharting and process mapping as a means to visualize a business process have been around \"forever\". One of the first systematic approaches was flow diagrams as developed by Goldstine and von Neumann in 1946, and flowcharts have been a part of software development since the beginning of programming. Flowcharts in all types of sizes and shapes have been popular in organization management. They are part of policies, procedures and organizational handbooks.The current generation of business process analysts prefers the term \"process modeling\" rather than flowcharting or mapping. Process modeling claims a more disciplined, standardized, consistent and overall more mature and scientific approach. It facilitates process visibility and has to satisfy an increasingly heterogeneous group of stakeholders (from the CXO to the end-user) and modeling purposes. It has to be scalable, configurable and usually be able to provide a bridge between IT capabilities and business requirements.A second change over time has been an increased focus on business process modeling. The prefix \"business\" encourages the community of business representatives, end-users, and, most of all, potential process owners not only to understand process models, but also to more, and more actively, model their own business processes. The assumption is that it is easier to pick up the concepts and techniques of process modeling than to articulate the complexities of a certain business domain to a business process analyst. As a response upper-CASE tools are available, which support the business modeling community, but have only limited intentions to convert the outcomes into executable or implementable process specifications.Another observation is related to the increased size of process modeling initiatives. A number of organizations conduct process modeling with an enterprise-wide scope and even globally. As a consequence, the investment related to tools, methodologies, training and modeling activities reached the point, where process modeling increasingly faces the \"Where is the return on investment?\" question.Such a development can be appreciated from an academic viewpoint as it provides opportunities for countless research projects. 
However, this situation is also exposed to the danger of over-engineered techniques, tools, modeling conventions, etc. resulting in projects which finally fail.Like business process management, knowledge management and customer relationship management, nobody seriously questions the need for process modeling, but the million-dollar question is how much modeling is actually required?A wide variety of responses to this question can be found in current projects all over the globe. At one extreme, some companies consciously minimize their efforts related to process modeling as they do not believe in \"fat methodologies\". In these cases, the blueprinting of future business processes is conducted with the simplest drawing tool, even if the modeling project is followed by a multi-million dollar ERP implementation. Such a behavior is in sharp contrast to the actual impact on the costs of business processes. The early phases of business process design might not be the most expensive ones, but they tend to have the highest impact on the benefits and costs of the implemented business processes.At the other extreme, companies invest a significant amount of time and money to select the most appropriate modeling tool, write literally hundreds of pages of guidelines attempting to precisely standardize the layout of a process model. They spend weeks customizing the tool, comprehensively modeling their processes in terms of scope and depth. This approach is then rolled out all over the world with the aim of developing a consistent template for execution.Thus, the actual practice of modeling sees all variations and can range from brown (butcher) paper to the use of sophisticated modeling techniques with high expressive power leading to executable process specifications. However, elaborated modeling techniques often come with the price of limited understandability.A process model has two complexity drivers. One driver is the way process modeling is approached, i.e. modeling complexity. How difficult is it to design a model within the provided modeling environment (tool, techniques, guidelines, etc.)? How complex does the model look which is derived? Or, as a representative from an American financial service provider phrased it: \"Does the model still fit on one page? If not, it is not a good model.\" The other complexity driver is the complexity of the process itself, i.e. process complexity. A process model is like a mirror; it reflects. But unlike a mirror, it also allows deeper focus on the elements of interest. Unfortunately, process modeling is sometimes seen as being very complex and modeling complexity is blamed for this. However, it is possible to reduce and manage modeling complexity to a large extent, which allows concentrating on process complexity. Giving up on modeling means giving up on comprehending business processes and escaping from dealing with the process complexity of an organization.Process modeling is an area where artists (heavy right brain utilization) meet scientists (heavy left brain utilization), internal knowledge workers meet external knowledge owners, business meets IT. It is not only about the final artifacts (the models), which represent the outcome of these modeling session, but it is the process of modeling itself and its impact on subsequent activities and projects, which deserves attention.This brief introduction shows that process modeling is (back) on the radar screen and gets attention. 
Consequently, I thought it might be worthwhile to increase the awareness for common traps. The following list provides such an overview of typical characteristics of unsuccessful process modeling. This first part of the paper covers pitfalls related to strategy and governance (1-3), and the involved stakeholders (4-6). The second part will cover tools and related requirements (7-10), the practice of modeling (11-16), the way we design to-be models (17-19), and how we deal with success of modeling and maintenance issues (19-21). I like to stress, that these pitfalls are about process modeling and not about business process management.1.1 Lack of strategic connections\n",
"role": "user"
},
{
"content": "Purpose: - This paper summarizes typical pitfalls as they can be observed in larger process modeling projects.\nDesign/methodology/approach: - The identified pitfalls have been derived from a series of focus groups and semi-structured interviews with business process analysts and managers of process management and modeling projects.\nFindings: - The paper provides a list of typical characteristics of unsuccessful process modeling. It covers six pitfalls related to strategy and governance (1-3) and the involved stakeholders (4-6). Further issues related to tools and related requirements (7-10), the practice of modeling (11-16), the way we design to-be models (17-19), and how we deal with success of modeling and maintenance issues (19-21) will be discussed in the second part of this paper.\nResearch limitations/implications: - This paper is a personal viewpoint, and does not report on the outcomes of a structured qualitative research project.\nPractical implications: - The provided list of total 22 pitfalls increases the awareness for the main challenges related to process modeling and helps to identify common mistakes.\nOriginality/value: - This paper is one of the very few contributions in the area of challenges related to process modeling.\n",
"role": "assistant"
}
] |
The Taliban: a study of book collections on the Taliban in academic, public and West Point libraries
|
[
"Taliban",
"Afghanistan",
"Academic libraries",
"Public libraries",
"West Point",
"Collections management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: From October 2001 to date (March 20, 2011) US troop fatalities were 1,505 in Afghanistan and 4,440 in Iraq, with injury rates close to nine times the death rate (see www.icasualties.org). Non-fatal injuries can be catastrophic from a combination of devastating ordinance and higher survivability rate due to rapid medical battlefield intervention. This results in longer and more costly recuperation and higher disability payments, which funnel into the estimated three trillion dollar cost of the war of the war (Stiglitz and Bilmes, 2008).A study by Stanford University and the Naval Postgraduate School indicates that post-traumatic stress disorder (PTSD) among Iraq veterans could be as high as 35 percent (Atkinson et al., 2009). Concussive brain injury from repeated exposure to multiple roadside bombs causes functional deficits that are often hard to diagnose and treat. These two brain assaults - one physical and one mental - cause major social and societal fallout.Despite the enormous financial and human cost, just 4 percent of the US public put the war/fear of war as their top concern[1]. This could be because news coverage is scant in the print and other media. In October 2010 the new edition of the online game \"Medal of Honor\" allowed players to be Taliban and \"kill\" US troops, which reflects the general disconnect of the wider population from the war.I wanted to see whether this disconnect was reflected in our libraries as well. Or would our libraries reflect the severity of the human and economic cost of the war by taking the lead and building a comprehensive collection of books on the war that would provide their respective constituencies with the tools for research and reflection? I examined the book holdings on the Taliban (as an index term of the wider war) in public and private colleges in New York and New Jersey, in New York Public Library, the Brooklyn Public Library and the United States Military Academy at West Point.\nMethodology: Books on the Taliban were chosen as an index of the Afghanistan war since the Taliban are the major defining element of the Afghanistan war. WorldCat was searched (see www.worldcat.org) using the following criteria: Taliban as keyword, then restricted to English language, physical book material, no fiction and no juvenile works, i.e. \"kw: taliban\" > \"English\" > \"Non-Fiction\" > \"Non-Juvenile\", resulting in 1,668 titles.Three major subgroups were identified within this set:1. \"kw: taliban\" > \"English\" > \"Non-Fiction\" > \"Non-Juvenile\" > \"History & Auxiliary Sciences\" (355 titles);2. \"kw: taliban\" > \"English\" > \"Non-Fiction\" > \"Non-Juvenile\" > \"Sociology\" (171 titles); and3. \"kw: taliban\" > \"English\" > \"Non-Fiction\" > \"Non-Juvenile\" > \"Political Science\" (77 titles).Duplicates were eliminated, as were theses, government publications, ephemera and other items. Closely related titles were amalgamated. Only printed books were considered. Only works that had \"Taliban\" in the title or subtitle were chosen from the cumulative groups, above resulting in a total of 83 titles.This was done to provide reasonable numbers to analyze, but it also insured that every title had \"Taliban\" as a keyword as well as in the title. The presence of a word in the title of a book does not guarantee the book will reflect the topic in a cogent or scholarly manner. 
However, the three non-fiction subject categories provided some confidence that the books would be close to target. The methodology was regarded as safe and reasonable as evidence in the discussion section will verify. For samples of excluded titles, see Appendix 1.Data for title, author, publisher, pagination and number of libraries in WorldCat were recorded on an Excel file for the libraries below. Internal counter keys were used to help with summations. The data is available from the author.The holdings of the 83 titles in the libraries were determined from WorldCat. The individual library catalogs were not searched. Any errors from this method were assumed to be evenly distributed across the libraries. No study of the consistency of WorldCat holdings versus local catalogs has been published to date in the literature.Collections in the following libraries were analyzed:* City University of New York (CUNY) (the largest urban university in the USA);* Columbia University (COL) (private university);* New York University (NYU) (private university);* Princeton University (PRI) (New Jersey) (private university);* Rutgers University (RUT) (New Jersey) (state university; public);* New York Public Library (NYPL);* Brooklyn Public Library (BPL); and* US Military Academy at West Point (USMA).\nFindings: Only NYPL and Columbia had over 50 percent of the 83 titles. Twenty seven percent of the 83 titles (n=22) were not held by any library and just seven books were in all eight libraries.New York Public Library scored better than all the academic libraries with 49 percent of the 83 titles, followed closely by Columbia University with 46 percent. There was an eight-point drop to 38 percent for New York University, which modestly outpointed the 35 percent for Princeton (see Tables I and II and Figure 1).Close behind were the two public universities: 33 percent for the City University of New York and 31 percent for Rutgers. Brooklyn Public Library was seven points behind the next at 24 percent and West Point ranked last with 20 percent of the 83 titles in their collection 29 percentage points adrift of NYPL.To summarize: NYPL>COL[?]NYU>PRIN> CUNY>RUT[?] BPL>USMA.Although Columbia and Princeton rank second and fourth in the 2011 US News & World Report, NYPL outdid both of them (as well all the other academic libraries). The rankings were unreliable indicators based on the libraries investigated here for instance NYU (33) scored higher than Princeton (2) and Rutgers (29). A follow-up study analyzing holdings for the top 30 ranked colleges is in progress.In case the selection criteria of \"Taliban\" as keyword plus \"Taliban\" in the title were prone to unintentional errors by excluding valid titles, various searches were run to boost the reliability of the choice.The sequence of holdings were identical for both the search of Taliban as keyword or Taliban in the title (English language, books including e-books, all types of documents, not juvenile, not fiction). CUNY was excluded because there was no realistic way to check holdings without checking 13 individual college holdings and then eliminating duplicates.The ranking sequence for the other libraries closely parallels that found in the current study (Table III). 
Although this study revealed tepid holdings for print books, the holdings are dramatically worse when all book formats are included and theses and government documents are included (it is not possible to exclude these from the WorldCat version available at CUNY, which allows one the advantage of using the advanced search option to limit by WorldCat library).\nConclusion: Holdings on the Taliban are modest in all the libraries in this study. All of the libraries can do more. Academic libraries in particular need to build collections for research to retain their relevance as academically viable entities. The number of titles in the West Point Library should be the highest rather the lowest. It seems reasonable that every publication on the Taliban should be in the collection of a military academy library.Many of the books not held by the libraries are published outside the USA and may not be as easy to buy as mainstream titles published in the USA and available through Coutts and other vendors. But the inertia in acquiring such material is outweighed by the importance of having the most extensive collection possible on this critical contemporary crisis.A follow-up study is under way to examine holdings in the two remaining Armed Forces Academy, libraries since it seems critical that these three institutions at least would have adequate coverage of the Taliban.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this study is to document and compare the extent of book collections on the topic of the Taliban in selected academic libraries in New York and New Jersey, the New York Public Library and the military academy at West Point.\nDesign/methodology/approach: - Books on the Taliban were chosen as an index of the Afghanistan war since the Taliban are its major defining element. WorldCat was searched (www.worldcat.org) using the following criteria: \"kw: taliban\" > \"English\" > \"Non-Fiction\" > \"Non-Juvenile\". In total 1,668 titles were found. Duplicates were eliminated, as were theses, government publications, ephemera and other items. Closely related titles were amalgamated. Only works that had \"Taliban\" in the title or subtitle were chosen from the cumulative groups above, resulting in a total of 83 titles.\nFindings: - Only two libraries crossed 50 percent of the \"core\" collection of 83 titles. A total of 27 percent of the 83 titles (n=22) were not held by any library, and only seven books were in all libraries. New York Public Library topped the modest rankings and scored above the academic libraries and West Point. West Point ranked last by a big margin (29 points). New York Public Library scored better than all the academic libraries with 49 percent of the 83 titles, followed closely by Columbia University with 46 percent. There was an eight-point drop to 38 percent for New York University, which modestly outpointed the 35 percent for Princeton.\nResearch limitations/implications: - A possible research limitation is that the degree of library holdings may not reflect the low level of engagement by the public, academia and military in the ongoing wars.\nSocial implications: - The research may indicate a low priority by collection development librarians in public libraries and in the libraries of academic and military colleges to develop a comprehensive collection of material on the Afghanistan and Iraq wars. The paper argues that more extensive collections should be built despite the media and the general population's antipathy to the war. A comprehensive collection for students, scholars, the public and the next generation of officers about a contemporary war which has profound financial, political and military sequelae should be a priority for collection-building librarians.\nOriginality/value: - The low holdings of books on the Taliban, and hence the war in Afghanistan, in public and academic libraries and the library of West Point may reflect the low level of interest of the US population in the wars in Afghanistan, Iraq (and Libya).\n",
"role": "assistant"
}
] |
Impact of founder CEO and CEO ownership on entrepreneurial orientation, moderating role of CEO narcissism
|
[
"Founder CEO",
"CEO ownership",
"Entrepreneurial Orientation",
"CEO Narcissism"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Entrepreneurial behaviors and entrepreneurial conducts are very famous among the current organizational researchers (Lumpkin and Dess, 2001). Until now, we have little knowledge about the impact of these behaviors on entrepreneurial aspects which are very important for organizational success (Stam and Elfring, 2008). In routine work circumstances, the role of a leader or a chief executive officer and its impact on employees' behavior is an undeniable fact. Therefore, it is very appealing to investigate how distinct leadership status and attitudes support the entrepreneurial orientation (EO) (Murad, 2014). The perspective of the current study is the relation between funder leader and EO and the moderating role of narcissism of a leader.\n2. Literature review and hypotheses: 2.1 Founder CEO\n3. Research methodology: 3.1 Sampling and measurement\n4. Results and discussion: The data was examined through SPSS by carrying on statistical analysis of demographics, construct, and hierarchical regression analysis.\n5. Conclusion: As it has been established from that EO may give inconsistent results when tested in empirical studies and so is the case of narcissism (Wiklund and Shepherd, 2011). Similarly, the personality of CEO is also vital which is impacted by its ownership and shares. However, the force represented by CEO's personality is not sufficient to thoroughly explain the phenomena of changes in performance of the firm. For CEO ambitions and attributes to exert influence upon organizational outcomes, it is a must that those ambitions be translated into the behavior of specific firm-level strategy. On the other hand, EO is considered the valid concept, due to its ability to act as a tool for evidence gathering for actions of entrepreneurs and decision-making by them across multiple geographic and organizational contexts. Such features make it interesting for a comparative study like this. Based on these concepts five hypotheses have been developed which have been discussed in detail below.\n",
"role": "user"
},
{
"content": "Purpose: This study aims to explore the moderating impact of narcissism overload on the relation between founder CEO and entrepreneurial orientation (EO) in registered private schools of Pakistan.\nDesign/methodology/approach: Data were collected through a stratified random sampling method with the help of previously validated questionnaires. A sample of 121 replies was gathered for analysis. SPSS has been used to find the results.\nFindings: Results depict that CEO narcissism moderates the relation between founder CEO and EO and does not moderate the relationship between and CEO ownership and EO.\nOriginality/value: Many studies focused on the founder personality characteristics (such as generalized self-efficacy or locus of control) are not directly observed, but rather inferred their effect indirectly. The study contributes to examine how the founder CEO variable interacts with CEO personality to influence EO. This study will propose a practical approach to investigate whether and how the narcissism constructs moderate the founder CEO-EO relationship. Direct association between stock ownership and EO will also be examined.\n",
"role": "assistant"
}
] |
The effects of fatalism and denial on earthquake preparedness levels
|
[
"Middle East",
"Educational institutions",
"Denial beliefs",
"Fatalistic beliefs",
"Risk preparedness",
"Earthquake risk reduction"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Earthquakes kill tens of thousands every year, and approximately 90 per cent of these fatalities occur in developing countries (OECD, 2008), mostly due to a lack of awareness and preparedness and poor construction design and practices caused by corruption (Escaleras et al., 2007). For instance, the tragic 7.0-magnitude Haiti earthquake in 2010 claimed the lives of more than 200,000 people, while the 8.0-magnitude Sichuan Earthquake in 2008 killed more than 80,000 people and destroyed thousands of buildings, including schools and hospitals, in China. As well, the 6.3-magnitude in Bam, Iran, in 2003 resulted in the loss of 30,000 human lives, while the 1999 Marmara earthquake in Turkey killed more than 19,000 (Anbarci et al., 2005). Thus, earthquake-vulnerable communities must be properly and adequately educated and aware of the involved seismic risks in order to cautiously prepare to respond to this hazard and to take the necessary safety measures to reduce the damaging effects of such disasters.\nPurpose and importance of the study: Lebanon has insufficient infrastructure to resist an earthquake hazard, and disaster planning is limited at the government and local community levels. Moreover, the Lebanese community's perceptions of earthquakes risks are not well understood. The purpose of this study is to evaluate Lebanese youths' perceptions of earthquake risk. Literature on this subject is scarce, and no scholars have attempted to evaluate the Lebanese public's perceptions of earthquake hazards. This research investigates the preparedness level of Lebanese college students for earthquake disasters based on a questionnaire that assesses their level of awareness, fatalism and denial regarding disasters.\nTheoretical framework: Researchers have discussed the factors that affect individuals' preparedness for natural hazards. Based on previous models for disaster preparedness (DP), Paton (2003) proposed the social cognitive model which consists of three phases: the factors that might motivate people to be prepared, these motivational factors linked with intentions to preparedness, and these intentions linked with preparedness. This study explores the first phase of Paton's (2003) social cognitive model to identify factors that might motivate or de-motivate people to be prepared for disasters. Specifically, this study examines college students' DP and mainly the effects of hazard and vulnerability awareness (HVA), denial beliefs (DB) and fatalistic beliefs (FB) on college students' behaviour towards earthquake disasters:\nMethodology: Research on hazard perception defines perceived risks in terms of one's expectations of the personal impacts (death, injuries, disruption to daily activities, etc.) from a natural hazard. It was shown that prior beliefs about a natural hazard, information, and social warnings have an effect on personal risk perception (Lindell and Prater, 2000; Mileti and Peek, 2000; Mileti and Sorensen, 1987). Individuals' level of knowledge about disasters influences their ability to handle disaster information and their awareness and behaviours of disaster prevention and reduction.\nFindings: The majority of respondents were female (54 per cent). Participants were distributed among the following majors: engineering and sciences (36 per cent), humanities and social sciences (32 per cent) medicine and nursing (14 per cent), and business (18 per cent). 
As shown in Table I, 41 per cent of participants were sophomores, 37 per cent juniors and 22 per cent seniors.\nDiscussion: The above average level of the HVA (M\nLimitations: This research is a case study that included college students from one country in the Middle-East region. The dearth of available information about DP in general, and factors that influence preparedness activities in particular, in the Arab World was one of the main constraints of this study. Therefore, there is a strong need for more studies in the region providing data and analysis assessing the status of disaster planning in the Middle-East region. The use of the Red Cross measures of preparedness, which are not specific to earthquakes, is another limitation of this study and has narrowed the definition of preparedness. Future investigations should explore preparation scales dealing with essential dimensions of earthquake DP activities that include actions for survival, preparedness planning and hazard mitigation. Also, surveys are a convenient way of gathering information from a large number of participants, but are not enough to gain a complete understanding of students' beliefs regarding natural hazards. Interviews may confirm the findings and may be appropriate to gain a more complete picture of participants' awareness, denial and FB towards earthquake disasters.\nConclusion: This study sheds light on the factors impacting the preparedness level of college students in Lebanon for earthquake disasters. The findings indicate an above average level of earthquake HVA but a poor level of DP. Although participants do not exhibit high DB towards earthquake hazards, they possess some inclinations to FB. The statistical analysis showed a significant negative correlation between DB and DP as well as a significant negative correlation between FB and DP. Moreover, a significant positive correlation was found between HVA and DP. Multiple regression confirmed that earthquake HVA has a positive effect on college students' preparedness while denial and FB play a negative role in their earthquake preparedness level.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate the effects of awareness and beliefs on college students' preparedness for earthquake risks in Lebanon.\nDesign/methodology/approach: A questionnaire was used to investigate Lebanese college students' (n=901) perceptions and preparedness for earthquake hazards. Three factors were assessed to determine students' disaster preparedness (DP): hazard and vulnerability awareness (HVA), fatalistic beliefs (FB) and denial beliefs (DB).\nFindings: The findings indicate an above average level of earthquake HVA among college students. Although the results do not reflect a high tendency towards DB, participants showed some inclinations to FB. Multiple regression showed that earthquake HVA plays the most important role in determining earthquake preparedness while denial and FB have a negative effect on DP. Fatalism is embedded in Middle Eastern societies, so higher education institutions are urged to increase the preparation knowledge of college students.\nOriginality/value: This study offers a discussion of the interaction of awareness and belief factors and their effects on vulnerable communities.\n",
"role": "assistant"
}
] |
Content and context: "fair" values in China
|
[
"China",
"IFRS",
"IASB",
"Transaction cost economics",
"Relational contracting",
"Guānxì"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: For last year's words belong to last year's language, And next year's words await another voice (Eliot, 1943/1959, Little Gidding).\nIntroduction: Accounting is embedded in socio-economic environments and may be viewed as a consequence of underlying political systems and power distribution (Arnold, 2009). It evolved over time to meet heterogeneous users' needs shaped by firm- and country-specific requirements, and therefore developed into nationally divergent accounting systems. The International Accounting Standards Board (IASB) has the difficult task of developing \"globally accepted\" accounting and financial reporting standards in a globalised but not yet harmonised world. Socio-economic and cultural backgrounds still differ even for listed global corporations, but especially for more regionally oriented private firms.\nThe challenge in addressing cultural norms and ethnicity: The research question in this study arose as a response to the increasing body of research which observes that the impact of IFRS in capital markets worldwide depends on local demands, local regulations, local enforcement and monitoring (inter alia Taplin et al., 2002; Ball et al., 2003; Ding et al., 2005). The Institute of Chartered Accountants of Scotland (ICAS, 2010) note that \"the [Chinese] government was unwilling to move to full principles based standards in areas where judgments were difficult to verify, such as fair values\" (p. 8). Additionally, the operation of international capital markets requires translation of accounting standards into local languages. As would be expected, translation has therefore been identified as a major issue challenging equivalent implementation of IFRS (Nobes, 2006, 2013; Zeff, 2007; ICAS, 2010). Different languages are infused by different ways of thinking, and currently IFRS are translated into 48 different languages. But is the content of the IFRS also culturally neutral? Accounting textbook writers from 23 different European Union countries, who relied on translation of IFRS into their native language did not think so (Baskerville and Evans, 2011).\nPart one: IFRS and fair value research - financial reporting in China: The term \"fair value\" was first introduced into the IFRS in 1982 when IAS 16 \"Property, Plant and Equipment\" was issued (Cairns, 2006). IAS 16.6 (1982) defined it as an \"amount for which an asset could be exchanged between a knowledgeable, willing buyer and a knowledgeable, willing seller in an arm's length transaction\". The term and similar market-based definitions have been used in several subsequent standards but IAS 39 \"Financial Instruments: Recognition and Measurement\" in 1998 was a major breakthrough in terms of the diffusion, use and application of the fair value measurements.\nAccounting-related research on guanxi: Outside of financial reporting research there have been long-standing linkages with cultural norms in research focussing on guanxi, ethical reasoning (Trevino, 1986) and accounting or auditing. Au and Wong (2000) observe that ethical reasoning in relation to the assessment and reporting of audit adjustments, and the conflicts these engender, are influenced by the level of guanxi in relation to the audit client and its officers. The higher the levels of ethical reasoning, the lower are the impact of guanxi and the perceived manipulative influence on audit adjustments. 
Some management accounting research noted the impact of guanxi: Abdallah (1992) observes how the manager of a firm describes that control over all subordinates is a necessity, in order to prevent the operations of the guanxi system, which tends to depend on the use of connections to achieve results. More recently, Merchant et al. (2011) observe that \"Chinese management practices appear to differ, for example, in the importance placed on personal relationships (guanxi), rather than rules and contracts\" (p. 663). Generally, it is observed: \"A good guanxi network becomes a company's distinctive competitive advantage in terms of accessing scarce resources and lowering financing cost, which in turn can lead to high value\" (Chen et al., 2014, p. 19).\nPart two: Guanxi as a governance structure according to Oliver Williamson: The recourse to the different modes of governance structures in this section helps us to understand that market transactions are neither the exclusive nor the superior form of economic transaction. It can be demonstrated that the simple exchange in anonymous markets reflects a specific governance structure, just as the guanxi-based transaction reflects another. Both governance structures might be similarly efficient, of course, in different settings with regard to different institutional environments, investment characteristics, frequency of transactions and information levels. Neutral accounting would display economic transactions without bias in either direction. However, the contrast of fair value measurements with a guanxi context, offered as follows in this study, exposes the IFRS bias towards market governance. This bias is interpreted as a consequence of a Western belief in the primacy and potential efficacy of markets.\nDiscussion and conclusion: The stated intent of Chinese market regulators and government ministries is to introduce the concept of fair value in China, including regulations on related party disclosures (which might serve as another example of the sceptical Western market-based attitude towards guanxi). It may well be a long time before these rules are comprehensively followed, due to the reliance on guanxi to maintain effective transactional trust in the absence of a reliable \"willing buyer, willing seller\" trading environment independent of negotiating power. This will be the case especially if the accountants and auditors entrusted with recording and attesting to the value of such transactions are equally influenced by cultural factors, such as guanxi. Thus it is unlikely that such disclosures will be ideal (Schultz and Tang, 2004) or reliable. As a result, as suggested by Nuthall et al. (2009), any swift union of Chinese and Western accounting systems over fair value appears unlikely, as it would mean a substantial and potentially traumatic break with existing and historical cultural practices.\n",
"role": "user"
},
{
"content": "Purpose: The evolution of International Financial Reporting Standards (IFRS) was nurtured by economists and accountants loyal to the philosophical basis of what is often referred to as \"Western\" market economies, being classical and neoclassical contracting theories. The purpose of this paper is to illustrate how a particular Asian cultural attribute (guanxi ) impacts on the efficacy of fair value measurement.\nDesign/methodology/approach: Using a literature review and research of studies of the adoption of IFRS in China, studies of both guanxi and fair value in Chinese accounting research, this study unbundles Williamson's governance structure and contracting theory to examine how guanxi is positioned orthogonally to fair value (market-oriented valuation) principles for financial reporting. This is followed by a case study of the events surrounding the collapse of China Medical Technologies.\nFindings: Guanxi is integral to Asian economies and economic transactions. Resulting conditions, characterised by relational contracting, may not meet the qualitative characteristics of neutrality and faithful representation in fair value measurement of assets and liabilities. The same may be true when insider or \"trusted party transaction\" values prevail for large ticket transactions among entities in any jurisdiction.\nResearch limitations/implications: Future research on the impact of guanxi may be constrained by its often hidden, and yet dynamic, character; and the varieties of its manifestations.\nOriginality/value: This study highlights how difficult it may be to achieve both comparability and relevance in the asset and liability recognition and measurement rules in Asian (and possibly also other) economies adopting accounting principles that are developed in a Western context.\n",
"role": "assistant"
}
] |
The role of technology readiness in customers' perception and adoption of self-service technologies
|
[
"Communication technologies",
"Customer services quality",
"Customer satisfaction",
"Consumer behaviour"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Increases in labor costs and innovations in technology have contributed to the growth of technology-based self-services (Dabholkar, 1996). Many service providers have adopted a wide range of technologies in the process of service delivery. Customers use the new technologies to produce and consume services without direct personal contact with firm representatives. The trend for self-service technologies (SSTs) will continue as SSTs are increasingly implemented in customer-firm interactions (Meuter et al., 2000; Meuter et al., 2005).SSTs are technological interfaces that enable customers to take advantage of a service without any service employee involvement (Meuter et al., 2000). Types of SSTs include telephone/interactive voice response, interactive kiosks, and the internet. There is limited understanding of what influences customers' perception and adoption of SSTs. Since these new technologies have transformed the nature of customer communication and service (Howard and Worboys, 2003), it is necessary to investigate how customers assess the attributes of SST service and how service outcomes are affected.Although consumers are increasingly sophisticated in their technological interactions, they may avoid certain SSTs if uncomfortable, even when benefits are obvious (Meuter et al., 2003). Customers' technology readiness (TR) should be taken into account in order to accurately predict the perception and behavior of customers (Parasuraman, 2000). Currently, there has been little research examining consumers' readiness to adopt, willingness to use, or evaluation of SSTs and any consequent influence on behavior. A research framework that explains the antecedents and consequences of customers' assessments of SSTs is still underdeveloped.This study attempts to integrate related literature in order to develop a conceptual framework for investigating SST adoption and its related issues. This paper is organized as follows: we first review the relevant literature on TR, SST service quality, satisfaction, and behavioral intentions. Next, a conceptual framework and derived hypotheses are presented along with research methodology and results. Finally, we clarify the findings, discuss implications and limitations, and suggest further research directions.\nConceptual background: Technology readiness (TR)\nResearch methodology: Measurement\nResults: Validation of SST-QUAL\nDiscussion: Though SSTs are becoming increasingly accessible and have obvious benefits for service firms and consumers, understanding of perceptions and use of these technologies has not been fully undertaken. The current results help to explain perceptions and behaviors when consumers use SSTs. Firstly, TR is an important driver of SST service quality and behavioral intentions, echoing Zeithmal et al.'s (2002) propositions. Increased customer TR leads to higher perceived service quality and favorable behavior intentions when using SSTs. Secondly, perceived SST service quality has a significant positive influence on SST-satisfaction and SST-behavioral intentions. Higher perceived service quality is more likely to lead to satisfied customers who have favorable behavioral intentions regarding SSTs. Thirdly, this study confirms that the more satisfaction customers experience when using SSTs, the more likely they are to use it again and recommend it to others. These results reinforce the role TR plays in consumer-SST interactions. 
Firms should thus pay special attention to TR in order to improve favorable outcomes of SST implementation. Apparently, results do not support the positive relationship between TR and SST satisfaction predicted in our model. However, this does not contradict the results of Zeithaml et al.'s (2002) qualitative study that TR influences customers' behavior and service quality, nor Meuter et al.'s (2003) empirical study that the impact of technology anxiety on customer satisfaction is unclear compared to its significant impact on behavioral intentions such as word of mouth. Although extant studies indicated that TR has a positive impact on satisfaction (Yen, 2005; Liljander et al., 2006), these studies only investigated the simple association between TR and satisfaction, without developing a more complete framework capable of exploring the causal relationships among major constructs. Our analysis addresses this by examining the causality among the constructs in our model. Results suggest the influence of TR on SST-satisfaction is mediated through SST-service quality. In other words, customers with a more positive attitude toward technology, greater ability to use technology, and greater willingness to adopt technology are more likely to appreciate SSTs, resulting in a higher perception of service quality, which in turn enhances customer satisfaction. This interesting finding requires further theoretical and empirical assessment in future research.\nImplications\n",
"role": "user"
},
{
"content": "Purpose: - This study aims to examine how technology readiness (TR) influences customers' perception and adoption of self-service technologies (SSTs) through development of an empirical model to explore the relationships among TR, perceived service quality, satisfaction and behavioral intentions toward SSTs.\nDesign/methodology/approach: - A theoretical framework is proposed to suggest the links between the four constructs. Extant research and concepts from various fields, including marketing, psychology and information system (IS), are reviewed, deriving six hypotheses. Data from SST users is examined through structural equation modeling (SEM).\nFindings: - Results indicate TR influences perceived SST service quality and behavioral intentions, while perceived SST service quality has a positive impact on customer satisfaction and behavioral intentions toward SSTs.\nResearch limitations/implications: - This study represents an early attempt at explaining the role of TR in customer SST usage. Future research directions are discussed, with an emphasis on incorporating attitudinal, behavioral, situational and psychographic factors to improve our understanding of SST usage.\nPractical implications: - TR should be given increased attention by firms implementing SSTs to improve customer perception and adoption of SSTs. Firms also need to examine all SST service quality dimensions in order to improve customer satisfaction and behavioral intentions toward SSTs. An aggressive approach toward strengthening TR drivers and reducing TR inhibitors will show benefits for both firms and customers.\nOriginality/value: - This paper represents the first study to propose and empirically examine the relationships among TR, perceived SST service quality, satisfaction and behavioral intentions toward SSTs.\n",
"role": "assistant"
}
] |
Patient-driven computers in primary care: their use and feasibility
|
[
"Patients",
"Computers",
"Professional education"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nBackground: Although primary prevention strategies have great potential to improve health across the general population in a cost effective way, they appear relatively under-utilised (Shakeshaft and Frankish, 2003). A good example is the provision of screening and brief intervention for alcohol misuse in general practice (primary care). Several reviews and meta-analyses have established that brief interventions provided by primary care physicians can significantly reduce individuals' alcohol consumption (e.g. Wutzke et al., 2001; Bien et al., 1993). Despite this, detection of, and intervention for, excessive drinking among primary care patients is known to be low, with primary care physicians likely to provide advice only when patients' smoking or alcohol consumption levels are high (Richmond et al., 1996a; Kaner et al., 1999).Previous analyses have investigated the apparent anomaly between evidence for effectiveness and failure to implement appropriate activities to reduce excessive alcohol consumption (e.g. Shakeshaft and Frankish, 2003; Richmond et al., 1998). These have principally focused on two areas thought to be amenable to change: characteristics directly relevant to health care providers (such as their awareness of research evidence and the quality of their undergraduate and on-going training); and patient characteristics (such as their reluctance to disclose their risky behaviours).A third possibility, that has received relatively little critical attention, is the feasibility of modifying the nature of health care consultations. In the parlance of systems research, this is akin to modifying systems of work (Berwick, 2001). The underlying premise is that regardless of the way prevention and early intervention is delivered, health care providers may be too busy to screen for risk factors outside the purview of the immediate consultation, and are unlikely to have time for adequate training in those areas not perceived by them as essential to the provision of high quality health care. In this context, health care providers may regard studies which only quantify the prevalence of at-risk behaviours among their patients, as being of marginal relevance to their routine clinical care.One novel method for potentially modifying the nature of health care consultations is the integration of patient-driven computers into routine clinical care (Moyer and Finney, 2004, Butler et al., 2003, Shakeshaft and Frankish, 2003). This has two potential advantages. Firstly, hand held computers make relevant information, tailored to individual patients, available to practitioners at the time when it is likely to be most useful: during the consultation. Secondly, hand held computers facilitate in situ professional education: providers learn preventive recommendations and guidelines in the context of a specific patient encounter; they also learn how to provide evidenced-based interventions by being prompted, by the computer, through appropriate advice for each patient. 
As such, this type of education capitalises on the advantages offered by problem-based learning (Finucane et al., 1998).\nPatient-driven computers in routine clinical care: How might hand-held computers work in practice?\nMethod: Sample\nResults: Sample\nDiscussion: Methodological considerations\nConclusion: For the present, this study has demonstrated that the use of computer technology to collect information on health-related behaviours in primary care settings is feasible, at least for a substantial proportion of GP patients.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this research is to explore the feasibility of using patient-driven, hand-held computers in primary care settings, in order to address the apparent failure to implement prevention initiatives into the routine delivery of health care services.\nDesign/methodology/approach: - During an eight-day period, patients of an English primary care practice who were at least 16 years of age were asked to complete a health-related survey using a hand-held computer. They received tailored, on-screen feedback.\nFindings: - A total of 143 patients (approximately 55 per cent of all patients) began using a hand-held computer, of whom 115 (80 per cent) answered all questions. Of these, 24 per cent reported being smokers, 7 per cent and 19 per cent were at-risk of alcohol harm in the long and short term respectively and 14 per cent rated their overall health as poor or very poor. Most patients rated their level of satisfaction with the hand-helds as excellent (36 per cent), very good (29 per cent) or good (24 per cent), while 89 per cent agreed to their primary care physician seeing a summary of their feedback.\nOriginality/value: - This is the first study to evaluate the feasibility of using hand-held computers to conduct patient screening and feedback in primary care settings.\n",
"role": "assistant"
}
] |
Unveiling the myths of M&A integration: challenging general management and consulting practice
|
[
"Mergers and acquisitions",
"Cultural integration",
"Integration",
"Myths",
"Task integration",
"Integration speed"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Mergers and acquisitions (M&As) are widely regarded as the \"champion's league\" of strategic management. Many small-, medium- and large-sized firms are increasingly engaging in M&As, as they offer a wide range of opportunities. M&As allow access to new markets, technologies and resources; firms can develop and grow their organizations more quickly than they can by organic growth; and they allow companies to share and diversify risks (Marks and Mirvis, 1998). Even in the current global economic development slowdown, the market for corporate control in terms of volume and value is enormous. According to Bloomberg, 2.2 trillion US dollars were transferred through M&As alone in 2011, a sum that nearly equals the gross domestic product of Brazil, a fast-growing country with 192 million inhabitants.\nThe myths of integration: Myth number 1: integration is beneficial and necessary\nConclusions and implications: The integration process in the post-merger stage is a crucial phase where value is created or not. Therefore, many companies, managers and consultants invest an enormous amount of time and effort in fostering depth and speed of integration, build on their previous integration experiences and meticulously plan future integration processes. However, our studies have shown that following these established principles may in fact include subscribing to common myths regarding M&A integration. Table IV summarizes the four myths identified and the implications for managers.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to detect and challenge generally accepted management and consulting practice in Mergers & Acquisitions (M&As). M&As have been an important issue in strategic management and corporate development for decades. The integration process of two separate entities has been found to be of importance, and has, accordingly, received a significant amount of attention by research, management and consulting literature. Based on these insights, managers tend to rely on well-established and generally accepted rules developed by practice and consultants that should support a successful integration process and the generation of value. Nonetheless, M&As' efforts still often fail to create value. So is the common practice of the established drivers and beneficial consequences of the integration of M&As right, or do the experiences of consultants, companies and managers reveal something different?\nDesign/methodology/approach: To understand these challenges, the authors spent four years studying M&A projects and subsequent integration processes of more than 400 companies that engaged in M&A efforts. The data derived from four survey-based quantitative studies among more than 430 CEOs, CFOs and other senior managers in the field of M&As and personal interviews that were conducted to get in-depth insights.\nFindings: This extensive research on the efforts and projects of M&As over many years and including many companies reveals that successful integration processes are complex, social and culturally dependent endeavors and that the application of commonly accepted and established principles oversimplifies and disregards the interdependencies.\nOriginality/value: The present paper unveils four established principles concerning the successful integration after M&As as tenacious myths and provides more differentiated insights into value-destroying and value-creating mechanisms in M&As.\n",
"role": "assistant"
}
] |
A review of techniques for risk management in projects
|
[
"Risk management",
"Project management",
"Product development",
"Risk assessment"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Projects are managed through the concurrent engineering (CE) philosophy for faster time to market and to achieve project objectives through a shorter iterative process. CE is the development of product and process through simultaneous functions that aims at reducing time to market, overall development cost and achieve a high product quality (Salamone, 1995; Caillaud et al., 1999). Owing to the multi-functional nature of teams in CE, product and process information is shared and a quick overall understanding of the product and process is developed. This leads to an achievement of the right design in the first attempt and helps attaining a clarity for the issues in the implementation phase of the project, resulting in an overall lower developmental cost and a quicker response to market as compared to a traditional over the wall approach (Jo et al., 1993).The design process determines product geometry, materials, functional specifications, machining processes, assembly sequences, tools and equipment necessary to manufacture a product. Production plans, control tools such as inventory controls, resource allocations and job scheduling are other important outputs of the design process. Hence, it can be asserted that design influences to a great extent, the quality and the cost of the product (Salamone, 1995; Jo et al., 1993). Short comings in the product design process results in extra costs generated through project delays, penalties, excess of materials used, labour, additional operations, resource reallocations, rescheduling and rework. Hence, risk management in product design is beneficial and should complement project management activities, especially in a CE environment. This paper describes a framework for the risk management process, as proposed by the Australian Risk Management Standard (Risk Management Standard AS/NZS 4360, 1999) and reviews the techniques that can be used for each process within risk management so that a framework can be evolved for designing tools for risk management.\nThe risk management process: In general, unexpected events occur in projects and may result in either positive or negative outcomes that are a deviation from the project plan. Positive outcomes are opportunities while negative outcomes generate a loss. Risk focuses on the avoidance of loss from unexpected events (Williams, 1995). Several definitions of risk are available in the literature and risk is usually referred to as an exposure to losses in a project (Webb, 1994; Chapman and Ward, 1997) or as a probability of losses in a project (Risk Management Standard AS/NZS 4360, 1999; Larson and Kusiak, 1996a; Remenyi and Heafield, 1996; Jaafari, 2001; Kartam and Kartam, 2001). In this paper, the later definition of risk has been used because this definition implies that risk is quantifiable and lends itself to assessment and analysis through computational methods. A situation where it is not possible to attach a probability of occurrence to an event is defined as uncertainty (Clemen, 1996; Taha, 1997). While uncertainty is not measurable, it can be estimated through subjective assessment techniques (Raftery, 1994).The risk management process refers to uncovering weaknesses in methods used in product development through a structured approach so that timely mitigation actions are initiated to avoid risk, transfer risk, reduce risk likelihood or reduce risk impact (Risk Management Standard AS/NZS 4360, 1999). 
The risk management process proposed by the Australian Standard for Risk Management is shown in Figure 1. It is composed of seven iterative sub-processes: establishing the context of risk, identifying risks, analysing risks, evaluating risks, treating risks, communication and consultation across stakeholders, and monitoring and controlling risk events. The risk management process lends itself to CE product design and development, as changes and iterations in the design stage cost less than changes initiated in the implementation phase (Salamone, 1995; Jo et al., 1993). Hence, early discovery of risk events leading to downstream losses is much preferable to treating losses when they cannot be prevented.\nTechniques for context establishment: Context establishment in the risk management process involves representation of project units (functional, process, data, etc.) and their inter-relationships (Ahmed et al., 2003a, b). This enables representing project status in several forms such as resource usage, equipment requirements, budget availability, stakeholder involvement, contract deliverables, strategic goals and schedule, depending on the aspect of the project that is important for any particular purpose. Project modelling tools and techniques are, in general, also the techniques for context establishment for risk management. These are the project network diagram, precedence diagramming method (PDM), generalized activity networks (GANs), design structure matrices (DSM), IDEF3 process modelling and IDEF0 functional modelling. These are described in detail in the following paragraphs.\nProject network diagrams\nTechniques for risk identification: Risk identification involves studying a situation to realize what could go wrong in the product design and development project at any given point of time during the project. Sources of risk and potential consequences need to be identified before they can be acted upon and mitigated. Experts in their own domain have intuitive methods of recognizing a risk situation. As such, the identification tools presented in this section are more general in nature and need a collaborative approach so that all aspects of the project are examined for risk situations.\nChecklists\nTechniques for risk assessment: After risk events are identified, their characteristics need to be assessed so that it is determined whether the risk event is worth further analysis. Once it is decided that a risk event needs analysis, it then needs to be determined whether the risk event information can be acquired through quantitative or qualitative means. Measurement metrics for risk also need to be determined so that these metrics can be used for computation of risk magnitude and risk analysis leading to risk mitigation plans (Amornsawadwatana et al., 2002). Risk is measured using two parameters: risk probability and risk consequence (Risk Management Standard AS/NZS 4360, 1999; Chapman and Ward, 1997; Ward, 1999; Boehm and DeMarco, 1997; Conroy and Soltan, 1998; Duncan, 1996; Baccarini and Archer, 2001; Patterson and Neailey, 2002; Pyra and Trask, 2002). Risk probability or likelihood indicates the chance of a risk event occurring, while risk consequence, severity or impact represents an outcome generated from the risk event. Risk magnitude is the product of risk probability and consequence. 
To measure risk magnitude, the probability and consequence of a risk event need to be determined; together, these constitute the risk assessment function. In practice, the risk quantities are either quantitative or qualitative in nature. The quantitative approach to determination of risk parameters requires analysis of historical data through statistical analysis. In many instances, quantitative data is hard to obtain and is restricted to a very small domain of the problem where historical trends could be sustained. An example of quantitative data for determining risk consequence is a historical record of money spent on correcting non-compliance of tooling usually used in fabrication of the type of product currently being developed. Though the risk may not eventuate, there is a fair estimate of the cost of the risk actually eventuating. Quantitative data is not always available when needed or not in the form required, hence a qualitative approach using subjective assessment techniques is often more appropriate for risk management. The subjective approach utilizes mainly the relative measures of human judgments, feelings and opinions in comparison to ideal situations. Though the subjective approach is influenced by individual bias, preferences and expertise, it provides a basis for risk assessment where it is more important to highlight risk events that are possible, rather than to predict a catastrophic event exactly. An example of qualitative assessment is that the impact of the non-conformance of a fabrication tool to be used in the project is very high, but the chance of such an eventuality is very low. Though the terms very high and very low can be represented on a nominal scale, they are not exact measures. Organisations employ qualitative assessment techniques to identify risk because an expert opinion is the best source available, rather than an unreliable quantity.\nTechniques for risk analysis: The function of risk analysis is to determine the influence of risk factors on the system as a whole. Risk events form a cumulative effect on one or more aspects of the project, and it is easier to mitigate risk events if they can be bunched in groups and preferably dealt with at a higher level in the long run than by focusing on one particular risk event, in which case the project is likely to be micro-managed. Several techniques in the literature that are currently applied for project analysis can also be applied for risk analysis. These are summarized in this section.\nProbability and impact grids\nTechniques for risk evaluation: Risk evaluation is the function of risk management where risk events need to be prioritised so that risk mitigation plans are determined based on past experience, lessons learnt, best practices, organizational knowledge, industry benchmarks and standard practices (Ahmed et al., 2003a, b). In risk evaluation, different aspects of the project (strategic, budget or schedule) may be considered in light of a risk event to determine risk mitigation options and incorporate the most suitable option into a mitigation plan. This section describes several evaluation techniques that can be applied for risk evaluation.\nDecision tree analysis\nRisk mitigation: Risk events diminish project objectives when harmful effects are realized due to unforeseen circumstances. Risk management attempts to study, in detail, all aspects of project management, so that all controllable events have an action plan or a risk mitigation plan. 
A reactive approach or a feedback approach refers to risk mitigation actions initiated after risk events eventuate and can be seen as initiation of contingency plans. On the other hand, a proactive approach or a feed-forward approach refers to actions initiated based on the chance of a risk event occurring, such as insurance (Kartam and Kartam, 2001; DeMaio et al., 1994). A combination of these two approaches is applied to risk management to avoid risk, reduce the likelihood of risk, reduce the impact of risk, transfer risk, or retain the risk (Risk Management Standard AS/NZS 4360, 1999).\nFramework for risk management tools: A framework for risk management tools can be developed in relation to the risk management process in Figure 1 and is shown in Figure 4. The context establishment function is accomplished through techniques presented in the third section of this paper, and the result is a risk structure that will facilitate the subsequent functions in the risk management process. For example, in a process-focussed risk management context, the risk model could be a process model. Then, information features such as technical, financial, schedule and organisational aspects may be tagged to the process units to provide relevance for risk assessment. A risk query mechanism may then be formulated through techniques presented in the fourth section and imposed on the process model through interactive or collaborative interfaces to collect quantitative and qualitative data as described in the fifth section. The risk evaluation consists of decision support systems using techniques presented in the sixth section of this paper. Risks worth investigating further, due to their high chance of occurring, their high potential impacts or their potential to lead to new opportunities, are then pursued and treated. This whole process of risk management is collaborative and requires incremental contributions from all participants within the organization, supplementing the project management approach and making it more proactive.\nConclusions: Project risk management endeavours to supplement project management practices by investigating project structure, organizational environment, external environment, products, processes and procedures in detail. It further supplements the existing knowledge with lessons learnt, best business practices, industry benchmarks and case studies such that risk mitigation plans are in place when risk events do eventuate. This prevents crisis situations and also provides future avenues for opportunities. This paper presents techniques that are commonly used in project management and elsewhere, outlining their usefulness to project risk management, especially in CE projects. These techniques add to an understanding of risk management functions and build on team communication and collaboration, not necessarily being completely dependent on a collaborative computer network or a computer application. All the techniques presented in this paper have their own characteristics and a specific realm of application. As such, a combination of these techniques is likely to fulfil most needs for risk management by a project team and evolve tools that are tailored for their needs but are generic in structure. Several software tools are also commercially available for risk management, but they address only a specific aspect of risk management using a limited number of the techniques presented in this paper. 
The framework for risk management tools presented in this paper provides an integrated approach to risk management in projects that can be used for development of risk management tools that suit a specific domain but are generic in structure and may or may not be in the form of computer applications. The current state of development in hardware and software technology enables integration of applications for the techniques presented in this paper. There are many risk management tools commercially available to support project management, but they tend to address either a limited scope of application or limited processes in risk management. Future developments in integrated and generic tools will lead to widespread use of risk management principles in project management, retain organisational knowledge and provide a competitive business edge.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to provide a review of techniques that support risk management in product development projects using the concurrent engineering (CE) philosophy.\nDesign/methodology/approach: - The Australia/New Zealand risk management standard AS/NZS 4360:1999 proposes a generic framework for risk management. This standard was adapted for product development projects in the CE environment. In this paper, existing techniques were reviewed for their applicability to processes in risk management; namely, techniques for establishing context, risk identification, risk assessment and treatment.\nFindings: - Risk management is an activity within project management that is gaining importance due to current business environment with a global focus and competition. The techniques reviewed in this paper are used on an ad hoc basis currently. A more risk focused approach is likely to result in an integration of several of these techniques, resulting in an increased effectiveness of project management.\nPractical implications: - The techniques reviewed in this paper can be used for the development of risk management tools for engineering and product development projects.\nOriginality/value: - This paper provides a gist of techniques categorized in the form that they are applicable for implementation of risk management functions in product development projects using CE philosophy.\n",
"role": "assistant"
}
] |
Translating improvement methodology into healthcare culture
|
[
"Strategic alignment",
"Toyota Production System",
"Health care improvement",
"Improvement culture",
"Lean health care"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: For years, organizations have been on a quest to create a quality improvement culture only to keep getting on the same rollercoaster of endless reincarnations of previous unsuccessful methodologies leading to continued frustration of teams. In order to truly get a culture of improvement, we have to stop the rush to improvement and slow down to establish a standard for how we incorporate quality improvement into the daily activity of health care, through building on what we do best and that is problem solving.\nMethodology: In order to effectively drive a true cultural shift in how we approach improvement in health care we have to acknowledge that first improvement in health care is not a new concept. Looking back to 1854 we see one of the first truly documented improvement initiatives in health care led by Florence Nightingale during the Crimean War. Nightingale demonstrated that by truly understanding the problems of mortality at Scutari Hospital they were able to make drastic improvement reducing the mortality rate from disease within six months of arriving in Russia from 42.7 to 2.2 percent (Sheingold and Hahn, 2014). Nightingale consulted Statistician William Far to guide her on the statistical analysis, he advised her \"we do not want impressions, we want facts,\" she did, however, go against his advice somewhat, making a visual depiction of her data, called a coxcomb, which along with her report went on to have enormous impact on the design and practices of hospitals (Rehmeyer, 2008).\nLeadership: Introducing lean in health care is not for the faint hearted (Miller, 2005) and enforces the key element as being a system adoption vs a piecemeal application of tools. The cultural change that is required to make improvement apart of daily activity in health care does not come without strong visible leadership. In the Cardiac Sciences program the leadership had the foresight to understand the impact that quality and improvement where having in health care and to realize that traditional approaches were not going to lead to the system and cultural changes required to transform health care.\nProblem solving in health care: Using terminology familiar to frontline health care providers allowed the methodology of TBP or eight-step problem solving to be translated easily into an effective clinical application of patient care. Routinely health care providers perform systematic diagnosis of patients whether upon first admission to hospital or part of the daily care plan. A critical analysis of patient symptoms supported by diagnostic tests allows practitioners to deduct a root cause of the patient's condition to allow for accurate treatment. Figure 2 depicts a basic algorithm for patient diagnosis.\nRoles: A common complaint within organizations when implementing lean or any improvement methodology is that these activities happen off the side of managers desks and frontline personnel do not have time to perform improvement work. Realigning current roles to include elements of improvement thinking is a key step in changing the culture and allows for stronger recruitment into positions.\nImplementation: A Google search on December 6, 2015 produced 184 million hits in 0.54 seconds for how to implement improvement/lean. To say there are differing approaches to implementation of methodologies in organizations would be an understatement. \"The best methods are always the ones that get used\" (Kreafle, 2007). 
The following phases outline our approach to implementation.\nFindings: Establishing a model area to demonstrate the effectiveness of the system is imperative to getting broader leadership buy-in. Our model area was identified through a deviation from standard in our Arrhythmia service wait time. Using a multidisciplinary team approach to apply systematic problem solving, we were able to show the following results:\nResults to date: For 2014, we had 50 formal frontline-driven improvement initiatives reported that were worked on by unit quality teams. An example of frontline-driven improvement is a saving of over $1,000, which resulted from removing non-essential supplies from a supply cart and reorganizing for ease of use.\nConclusion: Improvements in the Cardiac Sciences program are driven from a system perspective, with alignment to organizational priorities. The program has established cascading KPIs which are visible in all program areas and drive systematic problem-solving activities throughout the program. The use of eight-step problem-solving thinking adapted into the clinical diagnosis process is well established within the Cardiac Sciences program; over 55 percent of all program staff have received formal problem-solving training and 100 percent of leadership and management have been trained as coaches. (Note: there are over 600 employees in the Mazankowski Alberta Heart Institute and CK Hui Heart Center, Royal Alexandra.) Quality platforms (visual quality management boards) are in place in unit and program areas, providing frontline staff with a forum to practice the methodology under the guidance of their manager, which supports the organization's integrated quality management framework goal of developing a clear line of sight throughout the organization. And this ultimately moves the QI mindset from off the side of the desk to becoming the work you do, empowering frontline care providers to engage in improving the work they do. Health care systems are historically siloed, but this approach has been able to engage cross-departmental, multidisciplinary teams in improving the system and patient safety, which leads to improved job satisfaction and ultimately improved patient outcomes.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to discuss improvement of the business of health care delivery through the application of systematic problem solving. This was strengthened by recurrence prevention through standardization at every level transforming into culture.\nDesign/methodology/approach: The methodology utilized is set derived from the true fiber and fabric of Toyota, the Toyota Business Practice (TBP) or eight-step problem solving and its translation into health care thinking by aligning to the process of clinical diagnosis of patients. The methodology that gives energy and direction to TBP is Hoshin Kanri, a Japanese approach to strategic planning. When you combine focus and purpose through strategic direction alongside a culture of systematic problem solving you get results.\nFindings: The application of the Toyota mindset resulted in a cultural shift which built on the strength of the current organizational culture. This approach had a broad impact on the program impacting leadership and management roles, improved employee engagement, complete visibility of organizational priorities, improved system performance, visibility and awareness of the vision and defined measures that drive the health care system. This has also resulted in cost diversions of approximately five million dollars CDN.\nOriginality/value: A grass roots application of real-time problem solving through strategic alignment.\n",
"role": "assistant"
}
] |
Managers' retention decisions regarding young intermediate-level educated employees
|
[
"Youth unemployment",
"Human resource management",
"Vocational training",
"Retention decisions",
"Vignette"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Young employees starting their career are typically employed on a fixed-term contract (De Lange et al., 2014; European Commission, 2012). Although they could benefit from temporary employment, losing a job at the end of a contract could also hinder or delay their career development (De Lange et al., 2014; Gebel, 2010). This may be especially the case for young employees who completed senior secondary vocational education (i.e. a total of 10-14 years of education; non-college). They may face difficulties in retaining and obtaining a new job as the demand for middling occupations, such as clerical and retail work, is generally decreasing in favour of high-skilled and low-skilled occupations due to technological change and task offshoring (Goos et al., 2014). Besides, young intermediate-level educated employees are qualified with a relatively narrow range of job-related competencies, while generic and transferable skills are thought to be critical for one's employability (Heisig and Solga, 2015; Akkermans et al., 2015). Retention decisions by managers may therefore become a critical transition point in the early career of young intermediate-level educated employees. Yet, most studies on early careers rely on observations of individual employees, their characteristics, resources and behaviours (e.g. Defloor et al., 2015; De Lange et al., 2014; Akkermans et al., 2014), while few studies have paid attention to demand-side factors and the preferences and decision-making process of managers (Wolbers, 2014; De Wolf and van der Velden, 2001). This study therefore focusses on the retention decisions of managers to gain a better understanding of which factors influence the probability of job retention for young intermediate-level educated employees.\nTheoretical background: A retention decision can be considered as a decisional outcome of a complex evaluation process, in which managers evaluate various pieces of information and subsequently take a retention decision that they believe will maximise the benefits for the organisation (Karpinska et al., 2013). The unique feature of retention decisions, contrary to hiring decisions, is that managers have had interactions, and thereby knowledge on the productive capabilities of employees eligible for contract renewal. This makes it unlikely that retention decisions are a result of so-called statistical discrimination (Arrow, 1973; Phelps, 1972), i.e. the attribution of average group characteristics to individuals to compensate for a lack of knowledge on individual productivity. In addition to the productive capabilities of employees, a firm's opportunity structure and managers' experiences and interactions with young employees could play a role, as those may facilitate the evaluation of retaining of young employees. In this study, we therefore assume that manager's retention decisions are influenced by three sets of factors: factors relating to the productive capabilities of young employees; factors relating to the organisational context; and factors relating to the manager charged with taking a retention decision. We elaborate on these three sets of factors below.\nData and methodology: A vignette experiment and a survey were designed to collect data on managers' retention decisions. 
Step 1: a vignette experiment was carried out to simulate retention decisions and to study the preferences of managers regarding the productive capabilities of employees and a firm's staffing situation. Step 2: the same group of managers participated in a survey to collect information on managers' characteristics and those of their organisation. Carrying out the survey after the managers had judged the vignette experiment reduces the risk of priming, in the sense that the nature of the questions in the survey could otherwise influence managers' vignette decisions (Pager and Quillian, 2005).\nResults: Table III reports the results of the multilevel regression analysis. We report the unstandardised regression coefficients, which refer to the likelihood of retaining young employees on a scale from 1 to 11.\nDiscussion and conclusion: This explorative study provided valuable insights into a pivotal yet understudied juncture in the early career of young intermediate-level educated employees by being among the few studies examining their opportunities in the labour market from an employer's perspective. Using the combination of a vignette and survey study, this study has shed more light on the probability of job retention for young intermediate-level educated employees by showing which young employees are most likely to be retained and under what conditions managers favour retention. First, the results indicate that their retention probability is rather limited, as managers are generally not inclined to favour retention of young intermediate-level educated employees. Second, this study shows that the probability of retention depends on the productive capabilities of young intermediate-level educated employees, the characteristics of their employing organisations and those of their manager. A high level of productive capabilities is an important condition for increasing the retention probability of young employees. Particularly important were task performance, commitment and social knowledge and skills. While young intermediate-level educated employees are qualified for a rather narrow set of occupations which require specific vocational knowledge and skills, this study highlights the importance of working hard and having versatile sets of competences for securing employment in an organisation. Although having high productive capabilities could increase their probability of job retention, factors related to the organisational setting and characteristics of a manager charged with the retention decision appear to outweigh the importance of the productive capabilities of young employees.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate the opportunities in the labour market for young employees with intermediate-level education by studying which young employees are most likely to be retained and under what conditions managers favour retention.\nDesign/methodology/approach: Retention decisions are examined by combining a vignette experiment and a survey study. Hypothetical profiles of 252 young employees were rated on their retention desirability by 21 managers, each working in a different organisation. Information on the managers' characteristics and their organisations were collected in a survey.\nFindings: Managers are generally not inclined to suggest retention. Their decisions are highly dependent on their own characteristics and organisational factors, even when young employees perform well and display desirable work-related behaviour.\nResearch limitations/implications: While the small scale and explorative nature of this study limit its generalisability, this paper highlights the importance of combining information on employees, the organisation and managers; when studying (early) careers and employment decisions.\nPractical implications: This study suggests that job retention is only in part within an individual's control, and the future efforts to combat youth unemployment need to account for organisational and managerial characteristics.\nOriginality/value: The focus on the employer's perspective is new to research on early careers, making it a starting point for further lines of exploration. Further, this study provides a comprehensive insight into factors that influence managers' retention decisions by combining three sets of factors in a single research design.\n",
"role": "assistant"
}
] |
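The multilevel regression summarised in the entry above (vignette ratings on a 1-11 scale, nested within managers) lends itself to a short illustration. The sketch below is an editor's illustration, not the authors' analysis script: the file name, column names and predictors are hypothetical stand-ins for the capability and staffing dimensions the abstract mentions.

```python
# Editor's sketch (hypothetical data and column names): a multilevel
# regression of vignette ratings nested within managers, of the kind the
# entry's Table III reports.
import pandas as pd
import statsmodels.formula.api as smf

vignettes = pd.read_csv("retention_vignettes.csv")  # one row per rating

# Fixed effects stand in for the vignette dimensions (productive
# capabilities, staffing situation); a random intercept per manager
# captures the nesting of 252 vignette ratings within 21 managers.
model = smf.mixedlm(
    "retention_rating ~ task_performance + commitment + social_skills"
    " + staff_shortage",
    data=vignettes,
    groups=vignettes["manager_id"],
)
result = model.fit(reml=True)
print(result.summary())  # unstandardised coefficients on the 1-11 scale
```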
Best of both worlds: hybrid knowledge visualization in police crime fighting and military operations
|
[
"Police",
"Knowledge visualization",
"Boundary object",
"Crime fighting",
"Hybrid knowledge visualization",
"Knowledge boards"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Knowledge work can be characterized by professional specialization, a high degree of uncertainty, ad hoc decision-making needs and the ability to change others through information and incorporate insights from others into one's work (Blackler, 1995; Eppler et al., 1999). Police work is in this sense an intriguing case of knowledge work that can be taken as an inspiration to improve knowledge-sharing and integration in other contexts as well, especially when time pressure, coordination needs and uncertainty are, particularly, present. Making good use of the available knowledge of all involved specialists is a major requirement for today's police officers who strive to effectively fight organized crime or resolve complex criminal acts. As they share this knowledge integration challenge with many management contexts, the authors propose to examine the knowledge visualization practices of a leading regional police force to derive practical and actionable insights for the management world. This practice of \"benchlearning\" from the non-profit sector has already been used in other areas of knowledge management, for example, in the case of making use of local knowledge to improve best practice sharing, as in the \"positive deviant\" methodology (Seidman and McCauley, 2008) taken from the development and healthcare context and applied in the corporate world. Also with regard to knowledge management in the police sector, our case study is not the first one to examine this special context (Seba et al., 2012; Gottschalk, 2006; Bechky and Okhuysen, 2011) and derive insights for other sectors. Our study, however, focuses on a previously neglected area, that of knowledge visualization practices. To harvest insights about these practices, the authors also compare the knowledge visualization approaches of the police force with a related field, namely, that of military operations.\n2. Case context: The case context is that of the Cantonal Police force of Zurich, the largest police force in Switzerland, with a staff of more than 3,500 employees. The police force consists of five major units, namely, the crime unit (the focus of this case study), security police, traffic police, regional police and airport police. The researchers interviewed the leadership staff of this police force, as well as 12 crime fighting experts from the crime unit. They have also held various focus group-type meetings with crime fighting professionals and investigative specialists, as well as a one-day workshop with > 30 members of the leadership team of the police force. In addition, they examined the force's decision and alert rooms, as well as their (digital and physical) visual tools.\n3. Digital knowledge visualization: The criminal analysts and the principal investigators make ample use of maps and diagrams to visualize their insights about a suspect's network of contacts and potential collaborators, actions and whereabouts. To do so, they use a sophisticated set of software applications. Regarding maps, they use a visual GIS application called ArchGIS to map out where criminal acts have been committed (and combine this data with mobile phone tracking data provided by telecom companies).\n4. Physical knowledge visualization: When the authors examined the working location of the crime investigation teams, they noticed the strong presence of physical visualizations. 
Although they did not see any \"classical\" evidence boards made famous through numerous TV police shows, there were many physical visual formats present. The reason why the criminal investigation unit only uses evidence boards for the first few hours of an investigation becomes clear when one puts each single investigation in context. In 2012 alone, this police force registered 138,291 criminal acts. It is impossible to capture evidence of all of these events on physical boards - there would simply not be sufficient space or boards for this. Thus, physical evidence boards are used only at the site of a crime to gather all evidence and are then converted into database entries at the police headquarters. In some rare cases, they may be used in the police headquarters as well at the very initial stage of evidence gathering, but are then soon replaced by their digital counterparts (the administrative assistant usually photographs or scans all gathered evidence after this initial evidence-board phase is completed). However, this does not mean that all the rest of the investigation takes place only via digital media. In fact, the research team found numerous physical visualization media in use. These seem to be especially relevant when the police force has to operate under intense time pressure, for example, in the case of violent abductions or kidnappings.\n5. Hybrid knowledge visualization in the military: Armies worldwide work with investigative software tools and GIS similar to those of police forces. Plate 1 shows a typical software workplace with various visualization screens in use in one European military unit.\n6. Hybrid knowledge visualization from the boundary object perspective: If one analyzes the discovered visual practices from the theoretical perspective of boundary objects, one can better understand why digital knowledge visualization on its own at times does not suffice for effective knowledge sharing and integration and needs to be complemented with physical approaches.\n7. Propositions for practitioners: In this section, implications for practitioners who want to apply a hybrid strategy to knowledge visualization and profit from the dynamic traits of joint, visual boundary objects are highlighted. In addition, a synthetic table to compare and contrast the two modes of representing knowledge graphically encountered in the case is provided.\n8. Conclusion: The key lesson to be learned from this case study is in essence a simple one: amidst all the hype around big data, visual analytics and other high-end software applications, one should not forget about the power of simple, permanent visualizations in the form of physical boards, posters or wall-mounted templates. In combining the advantages of digital and physical modes of visualizing our knowledge, the analytical power to create insights can be combined with visual dialogues to jointly make sense of them.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to study hybrid knowledge visualization in police crime fighting and military operations. Making effective and timely use of all available, relevant knowledge is a major requirement for today's police officers who strive to fight organized crime or resolve complex criminal acts under time pressure. As they share this knowledge integration challenge with many management contexts, the authors have examined the knowledge visualization practices of a leading regional police force (and of a military unit) to derive insights for corporate knowledge management.\nDesign/methodology/approach: To examine the knowledge visualization practices of a leading regional police force, the authors have conducted on-site observations, focus groups and interviews, as well as site, document, tool and software analyses within the police headquarters.\nFindings: As one main result of their empirical investigation, the authors find that the police force's practice of hybrid (i.e. digital and physical) knowledge visualization offers a useful strategy for corporate knowledge management as well. They also show how organizations can apply this dual approach to making knowledge visible, i.e. using sophisticated visualization software in combination with hands-on physical and permanently visible knowledge boards.\nOriginality/value: They discuss how these two modalities can be combined to improve knowledge management and how this hybrid practice can be understood theoretically through the lens of boundary object theory. With this regard, this article also extends the boundary object theory by identifying nine dynamic qualities of collaborative visualizations.\n",
"role": "assistant"
}
] |
Developing the Model for Optimal Learning and Transfer (MOLT) following an evaluation of outdoor groupwork skills programmes
|
[
"Employability",
"Higher education",
"Teamwork",
"Kirkpatrick model",
"Outdoor education",
"Transferable skills"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Groupwork in higher education provides an opportunity to develop key employability skills that are highly valued by students and employers (Wilson, 2012). Groupwork also provides a more efficient way of teaching and assessing students, especially when faced with large cohorts and squeezed budgets (Cumming, 2010). However, groupwork can be challenging and frustrating for educators and students alike (Wosnitza and Volet, 2014). If groups are not managed effectively, negative outcomes can occur such as social loafing, time wasting, unfair grade distributions, dissatisfaction and negative attitudes towards groupwork (Pfaff and Huddleston, 2003; Maiden and Perry, 2011). Despite these risks, there is limited empirical evidence in how to best support and train students to work well in groups, ensuring that groupwork is a positive experience that develops both the task and interpersonal dimensions of groupwork required for success at university and the workplace (Cumming et al., 2014).\nReview of existing literature: To establish existing evidence of transfer from OAE to higher education, Cooley et al. (2014) conducted a systematic search of the literature. In this review, 11 empirical studies were identified that focussed specifically on the transfer of groupwork-related outcomes following short, residential OAE courses. Within these studies, OAE had been implemented to develop transferable skills, build existing student work groups, foster a more positive attitude and confidence towards groupwork and promote integration among students. Evidence was found in support of these different outcome areas, although this evidence was limited by methodological weaknesses. For example, anecdotal quotes from staff and students described OAE, resulting in improved groupwork behaviour back at university. However, these data were often obtained informally through post-course evaluation sheets and discussions, and analysed without adhering to recognised approaches to ensuring trustworthy results (Elkin, 1991; Prince and Dunne, 1998). In the quantitative studies reviewed, survey data revealed perceived improvements in groupwork skills and self-efficacy following OAE (Odello et al., 2008; Ferguson et al., 2001; Juriza et al., 2011). It remained unclear whether this perceived development actually translated into more effective groupwork behaviour, and the impact this may have had on the students' experience of university and entering the job market. In fact, in one study that did include a measure of academic groupwork performance, no significant improvements were found following OAE (Mazany et al., 1997). It is, therefore, both timely and important for a more extensive, longitudinal evaluation of OAE as a method of facilitating groupwork behaviour in higher education.\nPresent study: To ensure a systematic evaluation of both learning and transfer following OAE, the present study included four different levels of training evaluation:\nMethod: Participants\nResults[1]: The resulting themes are presented in two sections: the course outcomes, followed by the processes driving these outcomes. Within each section, themes are placed within the appropriate level of the Kirkpatrick model. 
The first-level themes are identified by subheadings, and the second-level themes are italicised within the main text.\nDiscussion: The present study took a novel, systematic and multi-perspective approach to evaluating learning and transfer, which was based on Kirkpatrick's (1994) four-level model. Key stakeholders reported that a short OAE course for students was an enjoyable and effective way to improve groupwork-related attitudes, knowledge and skills (Kirkpatrick Levels 1 and 2: Reaction and Learning), and encourage groupwork behaviour, social interactions and more positive approaches to groupwork back at university (Kirkpatrick Level 3: Behaviour). In the longer term, OAE was felt to contribute to more integrated cohorts, degree success, employability and the personal development of students (Kirkpatrick Level 4: Results). This supports the contention of a recent review (Cooley et al., 2014b) that OAE can be used to support groupwork skill development, reflective learning, cross-cultural integration and employability, which are considered crucial aspects of higher education (Wilson, 2012). Our findings expand on existing literature by demonstrating a broader range of outcomes, and longer-term benefits at Levels 3 and 4 of the Kirkpatrick model, from a range of stakeholders. More generally, this study presents an approach to training evaluation that may be replicated in other educational and professional contexts.\nConclusion: In summary, the present study supports OAE as a potential solution to the need for training and developing groupwork skills in higher education. It is the first study to evaluate the long-term impact of OAE in higher education and, in doing so, has provided strong evidence of numerous benefits resulting from these programmes. The evaluation also outlined a range of areas to improve reaction, learning, behaviour and results following OAE. An empirical model of these factors has been developed, which can be used to guide future courses and evaluations in other areas of training and development. Finally, the present study demonstrates how a training course can be systematically evaluated using the Kirkpatrick model with a combination of qualitative data collection techniques and multiple perspectives, an approach that can be replicated in other contexts of training evaluation.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to evaluate the perceived efficacy of outdoor groupwork skills programmes for the undergraduate and postgraduate students, and the factors that influence its success. It also illustrates the use of Kirkpatrick's (1994) 4-level model of training evaluation as a framework for qualitative investigation of learning and transfer, from the perspective of key stakeholders.\nDesign/methodology/approach: Over 24 hours of recorded data were collected using a video diary room, one-to-one interviews and focus group discussions. Participants were current students (n = 66), alumni (n = 12), outdoor education instructors (n = 6) and academic staff (n = 5). The data were transcribed, and then analysed by conducting conventional content analysis. Prolonged engagement, triangulation, peer debriefing and referential adequacy were used to establish the trustworthiness and reliability of the analyses.\nFindings: Outdoor groupwork skills programmes were widely viewed as being effective for developing interpersonal skills, attitudes and knowledge that were then further developed and applied during degree courses and later in the workplace. Four of the main perceived benefits were increased social integration amongst peers, academic success, personal development and employability. A range of psychological and environmental factors were reported to influence the extent of skill development and transfer, and are presented in the Model for Optimal Learning and Transfer.\nPractical implications: This study supports outdoor groupwork skills programmes as an effective method of groupwork skills training during higher education, and offers recommendations for promoting learning and transfer following training courses.\nOriginality/value: This is the first study to systematically evaluate the long-term impact of outdoor groupwork skills programmes in higher education. A novel methodological approach is also demonstrated, which can be replicated in other contexts of training evaluation.\n",
"role": "assistant"
}
] |
Making a difference through people development
|
[
"Employee development",
"Coaching",
"Learning",
"Internet"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Panasonic was established in Japan in 1918 and since then has grown to be one of the top global electronic companies. The product range spans from consumer electronics, information and communication equipment to healthcare and aircraft in-flight entertainment systems. Employees currently number some 330,000 globally with 13,000 of those employed within Europe.\nE-learning: E-learning has become a hot topic over the last few years and organisations have adopted it in various different ways. Although everyone may be tired of hearing the words \"blended learning\", for us this has proved to be one of the effective ways of offering leadership development. This is due to the fact that managers from the 39 Panasonic companies operating in Europe are invited to take part.Three main e-learning solutions are being used:1. As pre-course work. Participants go on-line to complete a case study which asks them various questions about the content of the forthcoming development programme. By doing this they not only understand their knowledge gaps, but are also clear about the content of the programme and what the expectations are. The tool used allows the training team to monitor completion of the work to ensure that all participants participate as a pre-requisite for attendance on the forthcoming development programme.2. Virtual classrooms as pre and post work. For some courses the traditional classroom workshops have been blended with virtual classrooms in order to bridge the gap between managing the expectations of the participants while being cost effective. Virtual classrooms can be as long or short as required, as pre work they tend to last for 3 hours and then for post work slightly shorter, perhaps 2 hours. Using online software participants can simultaneously view a presentation whilst dialling in to a telephone conference. The consultant in charge of the programme administers the system and moves the slides in the presentation. Participants introduce themselves whilst viewing each others' photographs and are then encouraged to discuss and give opinions whilst the presentation is being delivered. Some pages allow text and on others there are quizzes, which the participants need to answer. As pre work the virtual classroom allows participants to \"meet\" each other and clarify objectives as well as covering any basic theory and general introduction to the overall programme. Approximately a month after this the participants come together for an interactive workshop where, having already \"met\" each other they can use the time productively to practice techniques and master new skills. The third part of this modular approach to courses is then a further virtual classroom six weeks after the workshop. This consolidates the learning with participants showing their own short presentations explaining their actions and progress since the workshop. We have been running virtual classrooms for over two years and from the vast majority of participants have received a very positive response. As mentioned at the beginning, people have different learning styles and the virtual approach does not suit everyone. 
Some of the positive comments received have been: * \"Broke the ice and made it possible to start off immediately\". * \"It was a good preparation tool to foresee the content of the course\". * \"It helped with the understanding of what we should expect from the course\". * \"Provided some theoretical and background framework\". The benefit to the company has meant that the workshop can be shorter, so staff are not away from their companies for an extended time, and obviously this reduces cost. Having a modular approach ties the learning in much more to the workplace; participants have more time to understand the concepts of the programme and time to put them into practice. As the benefits are so obvious, it has not been difficult to get buy-in from either management or participants! 3. Webcasts. These are online virtual events which are delivered by IMD, the business school located in Lausanne, Switzerland. As a business associate of IMD, Panasonic is entitled to join various initiatives that they offer, webcasts being one of these. These consist of online presentations, live speech and video of the presenter and allow discussions between the presenter and the \"virtual\" audience. Topics offered by the webcasts range across a wide spectrum of current business issues, such as brand passion, supply chain management (SCM) and key challenges for leaders. Any staff member of Panasonic can join the webcast from an online PC, listen to the presentation and then join the following real-time question and answer session. These can then be used as springboards for internal discussions and development.\nCoaching: This topic has grown in significance over the last few years. Coaching is seen as a tool that can greatly impact the bottom-line success of the business as well as supporting people in their development by allowing them to maximise their own performance. Coaching done properly is a great staff motivator, and it also brings a cost benefit through greater staff retention. We are starting to create more of a coaching culture within Europe and are doing this in three ways: incorporating the concept of coaching as a necessary managers' skill within leadership development options; having stand-alone coaching skills workshops; and finally offering executive one-to-one coaching. Within leadership development programmes, which are modular, depending on the experience of the manager attending, coaching is introduced in different ways. The programme for first-line managers covers various topics such as cross-culture, finance for the non-financial and coaching at a basic level. This covers the why and how of coaching and gives them a chance to practise on each other. At middle manager level the participants do some 360-degree feedback and are asked to coach each other on some of the results. Part of the 360-degree feedback also focuses on their ability to coach. The stand-alone coaching workshops focus on giving managers a toolkit that they can take back with them to the workplace and use immediately with their teams. They are highly practical and participants have plenty of time to coach each other, being observed and receiving feedback immediately afterwards. The workshops have been integrated with an e-learning approach, so participants first participate in a virtual classroom before a two-day workshop and then in another virtual classroom four to six weeks later. 
Again this approach has proved highly popular. While the workshops and leadership programmes are designed to create a coaching mindset amongst line managers in the business, we are also encouraging our senior leaders to consider executive coaching as an effective tool in their own personal development. Aimed at harnessing personal leadership development, the external coaching - provided through an external supplier with Pan-European reach - can focus on some key areas such as enhancing personal presence and influence, improving risk-taking and decision making and boosting creativity and collaboration during times of change.\nManagers' involvement: In order for all this to be successful it is vital that management buys into the new approaches and supports the training team. Panasonic has a learning culture and a recognised system of training in place which helps support any new initiatives. On an individual basis though, in order for each participant of a training programme to get the most from it and to practise new skills and behaviour, he/she must be fully supported by his/her line manager. This is also highly motivational; no one enjoys returning enthused with new ideas from a particular programme to be told to get back to work and carry on as normal. Currently managers nominate their staff for training on an annual basis, and it is this link from nomination to working with their staff through the development programme which is expected within Panasonic. Once the nomination process is complete, the next step for each line manager is to discuss reasons for nomination and objectives with their participant. Prompting for this is via e-mails from the training team and also from local HR. Following the programme the participant is expected to discuss action plans with their line manager and how to use the learning. Managers will have a coaching role to support their staff and realise these plans. Six weeks after the programme has finished, participants are prompted to return a summary of their achieved actions to the training team, including comments by their line manager. Broadly speaking, managers are very involved in this process and work closely with their staff. As the system of training has been in place for a number of years, the process has infiltrated all layers of the company. Tying this in with the pre-work and an online training portal means that there is transparency and a clear structure to the process. The growing number of virtual teams does mean that the system does not always work smoothly, and the training team is working hard to communicate with managers and explain the importance of their involvement and the added value that this gives to their department. We offer a wide range of training and development options to staff, and three current initiatives - e-learning, coaching and increased managers' involvement - have been covered in more detail in this article. In each of the cases the benefits that can be offered both to the company and the individual are enormous. Changes do not happen instantly; however, the general consensus is that the benefits for staff development, motivation, staff retention and cost saving can all be seen from increased use of \"blended\" e-learning, coaching - both as a useful managerial skill and one-to-one - and increased managers' involvement in the whole area of staff development.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this article is to explain how different solutions for developing staff within Panasonic have been implemented. The article aims to move away from just the traditional classroom only options, to using different approaches which suit different learning styles. In particular the article proposes looking at the three areas of e-learning, coaching and how to increase managers' involvement.\nDesign/methodology/approach: - The article explains briefly why it has looked at different options and then explains the three areas in more detail. For e-learning three main solutions are offered so these are outlined in more detail; for coaching the article also offers three solutions and so these are also explained in more detail. Lastly the importance of manager's involvement to any development initiative that is set up is explained, and a look at how to try to gain this involvement and keep the managers' as key players.\nFindings: - Results so far have been very positive with both participants and the company benefiting.\nOriginality/value: - This article will be of interest to anyone involved in training and development of staff - giving new ideas for positive development options to anyone struggling with how to best develop, motivate and retain staff.\n",
"role": "assistant"
}
] |
Investigating the impact of information systems on knowledge sharing
|
[
"Information systems",
"System quality",
"Knowledge sharing",
"Service quality",
"Technological factors"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Knowledge management (KM) has recently become a significant issue in academic circles and functional domains. Academic and commercial societies contend that thanks to using power of knowledge, organizations can maintain and preserve their long-term advantages in the competitive domains. KM is a set of processes for understanding and using knowledge in organizations as a strategic resource. It is a structured approach that establishes certain methods and ways for identifying, evaluating, organizing, saving and applying knowledge so that needs are supplied and the organizational purposes are fulfilled (Dayan et al., 2017).\n2. Review of the related works: 2.1 The concept of sharing knowledge\n3. Research model and hypotheses: 3.1 Implementing information systems and knowledge sharing\n4. Research methodology: The study reported in this paper is considered to be a survey study which is descriptive-analytic and has an applied purpose. The present survey study was aimed at investigating the features of the statistical population. This descriptive study was intended to specify and describe the features of the targeted variables. Also, it was an applied study as it was aimed at obtaining an understanding and knowledge for determining a tool through which certain needs and requirements can be met. In this study, we tried to discover and acquire new finding which can be applied specifically for a phenomenon or process.\n5. Data analysis and results of the study: For investigating and checking the hypotheses of the study and the presented model, certain statistical tests and analyses were carried out and the collected data was fed into software so that the statistical analyses can be done. More precisely, in this survey study, three statistical tests, i.e. t-test, R2 and goodness of fit (GOF) were used for evaluating the hypotheses and research model.\n6. Conclusion: As mentioned above, the ability and willingness of people for sharing knowledge is a critical issue for organizations. Knowledge sharing is one of the fundamental steps in KM activities. For achieving effective knowledge sharing, employees should be motivated to share their knowledge with their colleagues. For maintaining competitiveness in business, work-knowledge and organizational expertise should be shared so that organizational efficiency and productivity are enhanced. One of the most important responsibilities of ISs is to share information as a crucial factor within each IS. In fact, sharing information leads to the fast flow and movement of information and optimization of organizational efficiency and efficacy and its performance. Thus, in the long term, organizations can achieve the competitive advantage. Consequently, organizations can use various methods for sharing knowledge. One notable instance of these methods is using ISs.\n",
"role": "user"
},
{
"content": "Purpose: Knowledge is the key factor and the strategic resource for acquiring assets and intangible organizational capabilities, which can lead to further growth and development, creation of value and the advantage of competitiveness. The purpose of this paper is to investigate the impact of information systems (ISs) on knowledge sharing.\nDesign/methodology/approach: The statistical society and the intended population of this study were the experts of the registry office in Tabriz, East Azerbaijan Province, Iran. A questionnaire was used as the instrument for collecting data. Also, SMART-partial least square was used for testing the hypotheses of the study.\nFindings: The results gained in this study revealed that IS dimensions, i.e. service quality, system quality and technology, play a significant role in sharing knowledge among the personnel of an organization.\nPractical implications: Nowadays, knowledge is regarded as a notable component of knowledge management process, which contributes to the growth and development of organizations. Sharing knowledge can be considered as a vital phenomenon in managing organizational knowledge. One of the remarkable tasks of ISs is to share information as a key factor. Sharing information can result in fast information distribution and efficient and effective organizational operations and enhanced internal and external performance of the organization.\nOriginality/value: This paper is aimed at introducing and presenting functional dimensions for optimizing the efficacy of ISs. Implementing an effective IS can accelerate the speed of information exchange among the personnel and the improvement of their capabilities.\n",
"role": "assistant"
}
] |
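The entry above reports a goodness-of-fit (GOF) statistic alongside t-tests and R² for a PLS model. One widely cited global GoF index in the PLS literature (Tenenhaus et al., 2005) is the geometric mean of the average communality and the average R². The sketch below only illustrates that arithmetic; the input values are made up and are not the paper's results.

```python
# Editor's sketch of the global PLS goodness-of-fit index
# (Tenenhaus et al., 2005): GoF = sqrt(avg communality * avg R^2).
# All numbers below are hypothetical, for illustration only.
import math

communalities = {"system_quality": 0.62, "service_quality": 0.58,
                 "technology": 0.66, "knowledge_sharing": 0.71}
r_squared = {"knowledge_sharing": 0.54}  # endogenous constructs only

avg_communality = sum(communalities.values()) / len(communalities)
avg_r2 = sum(r_squared.values()) / len(r_squared)

gof = math.sqrt(avg_communality * avg_r2)
print(f"GoF = {gof:.3f}")  # values >= 0.36 are conventionally read as 'large'
```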
The product strategy for seasonal products
|
[
"Product management",
"Seasonality"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Textbooks offer definitions of product strategy that include the continual addition and deletion of products. Most of the emphasis is on developing new products to sell over their lifecycles and then discarding them at the appropriate time. Very few companies plan to resurrect \"dead\" products. If it happens it usually is founded on a resurgence of the original consumer desire. One notable example includes Volkswagen when the company introduced the new improved Beetle. The car was not an exact duplicate of the original 1950s model but was reminiscent in styling, shape and size. Often reintroducing a discarded model requires extensive reengineering and the goal is to tap into the original consumer desire that made the first model a success. The car was extensively improved to meet the evolving safety standards and therefore was more expensive. The introduction was successful in part because a measurable level of awareness of and nostalgia for the original. Volkswagen avoided the kiss of death of product resurrections, a cloud or hint of problems with the product. Despite some acknowledged product defects uncovered by lawyers like Ralph Nader, Volkswagen fixed them during the original's lifetime.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of the paper is to exemplify the product management of seasonal products.\nDesign/methodology/approach: - Comment on existing literature.\nFindings: - The comment synthesizes literature on an uncommon product strategy issue.\nResearch limitations/implications: - As in all case studies, the specific conditions found in one organization may not be found more generally in others. Readers are cautioned that the conclusions drawn in the case may have limited applicability.\nPractical implications: - The comment depicts the process that one firm uses to link product introductions to seasonal events.\nOriginality/value: - The comment describes an approach to product strategy that maintains customer interest and involvement with a complex product line.\n",
"role": "assistant"
}
] |
Hierarchical power and personality in leader-member exchange
|
[
"Leadership",
"Leader-member exchange",
"Power",
"Personality similarity",
"Big Five",
"Supervisory relationships"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Schyns (2015, p. 132) concluded her review of the personality-leader-member exchange (LMX) literature with this statement: \"What has become clear in this review is that we need much more research into leaders' and followers' personalities and their effects on LMX.\" The LMX approach to leadership is unique in its attention to relationships between subordinates and supervisors, yet the bulk of LMX research is focussed on subordinates (see Dulebohn et al., 2012). This is despite the fact that personality and personality similarity have the potential to influence supervisor and subordinate perceptions of LMX differently, and may also have \"different effects on a mutual perspective of LMX\" (Schyns, 2015, p. 119).\nTheory and hypotheses: Personality, power, and LMX\nMethod: Participants and procedures\nData analysis: Most supervisors had more than one direct report (mean n=2.87) making them partners in multiple unique dyads (one with each subordinate). A supervisor with three subordinates reported LMX for three unique relationships, and each of the three subordinates reported LMX with that supervisor. Thus, our data are nested in two ways: subordinates are nested within supervisors, and each subordinate and supervisor pair is nested in a unique relationship. Due to some missing demographic information, our final sample was comprised of 321 dyads with supervisor-reported LMX and 372 dyads with subordinate-reported LMX (total n=693). Per Campbell and Kashy's (2002) recommendations for dyadic data, each person is represented in a dyad twice, once as self and once as partner. In the first case for each pair, the dependent variable is supervisor LMX; in the second case for that same pair, the dependent variable is subordinate LMX. We use the word partner in reference to the other person in each dyad, whereby the subordinate is the partner for supervisor LMX and the supervisor is the partner for subordinate LMX.\nResults: Means, standard deviations, correlations, and scale reliabilities are reported in Table II.\nDiscussion: Schyns (2015) argued that the next frontier in LMX research involves developing a greater understanding of the complex associations of supervisor and subordinate personality in the formation of LMX relationships. We advanced empirical research by examining the effects of trait similarity and dissimilarity for two key relational traits. We found that the best LMX relationships occur when supervisors and subordinates were high on agreeableness, but this effect was stronger for subordinates. Furthermore, agreeableness dissimilarity had a more negative effect on subordinates. Results for extraversion were more complex; subordinate LMX was based on both similarity (when both partners were high) and dissimilarity (one partner was high and the other was low). These effects were weaker (similarity) or not found at all (dissimilarity) for supervisors. We use power theory to explain these differences.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the role of supervisor and subordinate personality in high quality supervisor-subordinate (leader-member exchange; LMX) relationships.\nDesign/methodology/approach: In this field study, 142 supervisors drawn from a leadership development program, and 509 of their subordinates, completed a personality survey and reported on the quality of their relationships.\nFindings: Self and partner personality (agreeableness and extraversion), and the match between them were predictors of LMX. Consistent with approach/inhibition theory of power, personality traits of relationship partners had a stronger effect when the partner held a position of power (i.e. supervisors) than when the partner was a subordinate.\nPractical implications: The results inform organizational selection by showing the importance of supervisory traits for the formation of high quality supervisor-subordinate relationships. They may also help organizations facilitate the development of high quality supervisory relationships. The findings also suggest that it is critical to consider the balance of power when assessing LMX, along with factors that lead to its development, or benefits that ensue from it.\nOriginality/value: To date, there is an assumption that certain traits predict the development of high quality relationships between supervisors and subordinates. The results point out the importance of considering the role of power in LMX relationships. The paper shows that the effects of supervisory traits, which are rarely examined in the context of LMX relationships, more strongly predict LMX than do the traits of subordinates.\n",
"role": "assistant"
}
] |
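The pairwise ("double-entry") dyadic structure described in the Data analysis section above, where each person enters the data once as self and once as partner (per Campbell and Kashy, 2002), can be made concrete with a short sketch. This is an editor's illustration with hypothetical file and column names, not the authors' scripts.

```python
# Editor's sketch (hypothetical column names): building the pairwise dyadic
# structure in which each supervisor-subordinate pair contributes two rows,
# one per LMX report, with the other member treated as the partner.
import pandas as pd

dyads = pd.read_csv("lmx_dyads.csv")  # assumed: one row per unique dyad

sup_rows = dyads.assign(
    lmx=dyads["lmx_supervisor"],          # outcome: supervisor-reported LMX
    self_agree=dyads["sup_agreeableness"],
    partner_agree=dyads["sub_agreeableness"],
    rater="supervisor",
)
sub_rows = dyads.assign(
    lmx=dyads["lmx_subordinate"],         # outcome: subordinate-reported LMX
    self_agree=dyads["sub_agreeableness"],
    partner_agree=dyads["sup_agreeableness"],
    rater="subordinate",
)

# Subordinates are nested within supervisors and rows within unique dyads,
# so downstream models should include both grouping levels.
pairwise = pd.concat([sup_rows, sub_rows], ignore_index=True)
print(pairwise.groupby("rater")["lmx"].mean())
```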
Revisiting Fama-French's asset pricing model with an MCB volatility risk factor
|
[
"Investor sentiment",
"VIX",
"Contango",
"Backwardation",
"Market volatility",
"Asset pricing model"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: It has been well documented that in the past decade, one of the most innovative and important financial products introduced to investors is the set of volatility-based products. The cornerstone of all these volatility exchanged traded products (ETP) is the VIX index, introduced by the Chicago Board Options Exchange (CBOE) in 1993 as a measure of market expectations of 30-day volatility implied by S&P 500 stock index option prices. VIX has been so popular that many investors consider it as the gold standard of market sentiment index, i.e. fear index.\n2. Literature review and hypothesis development: 2.1 Literature review\n3. Data and methodology: 3.1 Data collection\n4. Results of cross-regression analyses: The cross-regression results with monthly data are summarized in this section and reported in Table 2. Panel A of Table 2 reports the regression outcomes deriving the MCB risk factor from the first method, i.e. (VIX3M/VIX)t, and Panel B displays the cross-regression results when MCB risk factor is defined as the second method, i.e. (VIX3M/VIX)t - MA(20). Finally, Panel C shows the results from MCB risk factors defined as the third method, i.e. (VIX3M/VIX)t/(VIX3M/VIX)t-1 - 1.\n5. Robustness check: Asness (2014) argues that the HML factor in the original Fama and French's work may have missed sharp price movement with a six-month lag because the portfolio is rebalanced in June while the book and price are both taken from the prior December 31st. A newly designed factor, HML-Dev, which rebalances portfolio each month with the same book value from Fama and French but uses the latest up-to-date price, can reflect price momentum more timely. His result indicates that while the replacement of HML with HML-Dev factor does not change too much on cross-regression results from the original Fama French's five-factor model, it does appear that HML-Dev factor is significantly relevant when a sixth \"momentum\" UMD factor is added.\n6. Conclusion and future research: Since VIX has been introduced, there have been many studies on its derivatives and pricing structure, and in particular, on the exchange-traded products derived from the VIX index, such as VXX and VXZ. As a de facto benchmark for market risk sentiment, yet little research has been done on the incorporation of VIX-related derivatives or ETFs as factors in asset pricing models.\n",
"role": "user"
},
{
"content": "Purpose: Since the introduction of VIX to measure the spot volatility in the stock market, VIX and its futures have been widely considered to be the standard of underlying investor sentiment. This study aims to examine how the magnitude of contango or backwardation (MCB volatility risk factor) derived from VIX and VIX3M may affect the pricing of assets.\nDesign/methodology/approach: This paper focuses on the statistical inference of three defined MCB risk factors when cross-examined with Fama-French's five factors: the market factor Rm-Rf, the size factor SMB (small minus big), the value factor HML (high minus low B/M), the profitability factor RMW (robust minus weak) and the investing factor CMA (conservative minus aggressive). Robustness checks are performed with the revised HML-Dev factor, as well as with daily data sets.\nFindings: The inclusions of the MCB volatility risk factor, either defined as a spread of monthly VIX3M/VIX and its monthly MA(20), or as a monthly net return of VIX3M/VIX, generally enhance the explanatory power of all factors in the Fama and French's model, in particular the market factor Rm-Rf and the value factor HML, and the investing factor CMA also displays a significant and positive correlation with the MCB risk factor. When the more in-time adjusted HML-Dev factor, suggested by Asness (2014), replaces the original HML factor, results are generally better and more intuitive, with a higher R2 for the market factor and more explanatory power with HML-Dev.\nOriginality/value: This paper introduces the term structure of VIX to Fama-French's asset pricing model. The MCB risk factor identifies underlying configurations of investor sentiment. The sensitivities to this timing indicator will significantly relate to returns across individual stocks or portfolios.\n",
"role": "assistant"
}
] |
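The three MCB risk-factor definitions stated in the entry above reduce to simple transformations of the VIX3M/VIX ratio, which can be illustrated directly. The sketch below is an editor's illustration: the data file and column names are assumptions, and only the three formulas come from the paper.

```python
# Editor's sketch (data source and column names assumed): the three MCB
# risk-factor definitions described in the paper, built from VIX and VIX3M.
import pandas as pd

df = pd.read_csv("vix_term_structure.csv", parse_dates=["date"],
                 index_col="date")  # assumed columns: VIX, VIX3M

ratio = df["VIX3M"] / df["VIX"]  # > 1 implies contango, < 1 backwardation

mcb1 = ratio                              # method 1: (VIX3M/VIX)_t
mcb2 = ratio - ratio.rolling(20).mean()   # method 2: (VIX3M/VIX)_t - MA(20)
mcb3 = ratio / ratio.shift(1) - 1         # method 3: net return of the ratio

factors = pd.DataFrame({"MCB1": mcb1, "MCB2": mcb2, "MCB3": mcb3}).dropna()
print(factors.describe())  # candidate regressors alongside the FF factors
```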
Interconnected geoscience applied to disaster and risk: case study from SECMOL, Ladakh, N. India
|
[
"India",
"DRR",
"Community-based disaster preparedness",
"DRR policy and practice",
"Geoscience",
"Interconnected geoscience",
"Ladakh",
"SECMOL College"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nBackground and literature review: This paper forms part of a two-paper exploration of how interconnected geoscience can be applied, in the context of disaster and risk, at a college setting (SECMOL, near Leh), in the Ladakh region of the NW Himalayas, India. Interconnected geoscience is a conceptual model of geoscience application to international development. A definition of interconnected geoscience is:[...] a philosophy that combines geoscience expertise with an equivalent expertise/consciousness in the understanding of developmental situations, conditions, and context, including the integration of diverse world views/wisdom and values', placing development-goals at the heart of the interconnected-approach.(Petterson, 2019, Figure 1)\nMethodology: This methodology for this work is, in part, inspired by workers such as (Cronin et al., 2004; District Disaster Management Plan, Leh, 2011; Dransch et al., 2010; Greiving, 2006; Le Masson, 2015; Hearne and Shilston, 2017; Kaur et al., 2018; Papathoma-KOhle et al., 2016; Shah et al., 2018; UNISDR, 2006). Le Masson (2015) examines a range of social dynamics that impact upon DRR in Ladakh, including the increase in nuclear families, the reduction in polyandry, changing demographics, historical/cultural influences, the presence of Buddhism/Islam, colonial history, and remote geographies. This background forms part of the developmental and regional context for interconnected geoscience approaches (Petterson, 2019). Le Masson (2015) argues that current Ladakhi DRR practice relies too much on scientific and technological approaches typical of the \"dominant approach to DRR\" (Gaillard, 2010). A more balanced approach is advocated, that examines and responds to the root causes of population vulnerability, drawing on local (rather than external) capacity. The Leh District disaster management plan, whilst extensive in descriptive terms for the 2010 Leh flooding disaster, is light on future policy and planning. This work can inform a more future-planning oriented approach. Le Masson (2015) notes that people have re populated high-risk-flood areas of Leh, and notes the limited DRR capacity within Ladakh. This work indicates possibilities for accessing existing Ladakhi capability for DRR. Dransch et al. (2010), Kaur et al. (2018) and Greiving (2006) present advice/methodologies for integrating hazard, risk and vulnerability data into map format, and suggesting methods for calculating vulnerability and risk. This paper has used, in part, some methods described by these authors, particularly working with students in assessing the vulnerabilities of demographic subsets within communities, and infrastructure. Petterson et al. (2019) geohazard assessment results are interweaved into this work. Ideas and methods reported by Papathoma-KOhle et al. (2016), Dransch et al. (2010) (e.g. Table I), UNISDR (2006) and Cronin et al. (2004) have been extensively used in this paper, working with staff and students to articulate individual ideas concerning risk, hazard and vulnerability, using their own maps and world views. The highly-illustrative and pictorial, and field-based-experiential approach of Hearne and Shilston (2017) has been adopted, working with SECMOL communities in the field environment, sharing mutual experiences of hazards and vulnerable elements within and around the SECMOL campus. The work of Shah et al. 
(2018) is critical, as it includes detailed work with communities across Kashmir and Ladakh, assessing levels of awareness regarding DRR. Findings were mixed. Communities had an increased awareness of floods following the 2010 Leh floods, but a variable appreciation of other hazards (e.g. earthquakes and landslides). Even this first-hand knowledge of floods did not prevent people returning to, and re-settling within, areas of high flood risk. This paper assessed, and aspired to increase, awareness of a range of hazard types.\nResults: Examples of results from the various exercises are shown in Figure 3 and Plate 2, summarised in Figure 4, and presented in Tables II-VI. Hazard awareness was low to variable. Knowledge baselines were low to very low for earthquakes, and higher for local aspects of floods/landslides/droughts (similar to Shah et al., 2018). Student awareness levels were variable, reflecting differing individual educational-geographical contexts. Earthquake awareness was largely gained from the internet/TV media, and limited previous school education. Earthquake knowledge was abstract and viewed as unimportant. There was no community knowledge of historical earthquake events within the local environment. Earthquake global distribution and causality were vaguely understood. An ability to apply theoretical knowledge for local use was lacking prior to subsequent teachings. Flood awareness was significantly higher than earthquake awareness. Although Ladakh is a desert environment, the 2010 major flood event (Petterson et al., 2019; Bhan et al., 2015) was within the students' memory, some of whom were affected by the disaster. The students had also gained knowledge from their community elders. Awareness of flood-hazard causes, apart from heavy rainfall, was limited. There was a lack of knowledge of how rainfall events are localised and of water-catchment dynamics, except for village irrigation applications. Landslide awareness levels were intermediate between flood and earthquake awareness. Landslides are common on Ladakhi roads, and within high-montane village settings. Students who were well road-travelled or came from high-montane villages possessed the highest landslide awareness. TV/internet media commonly discuss landslides. A wider knowledge of landslide causality, ranges of magnitudes/impacts and spatial distribution was lacking. Droughts were well understood. Ladakhi communities exist within dry to extremely dry environments. There is a high knowledge of water transport engineering and irrigation. A knowledge of droughts compared closely to domestic waste awareness, with earthquake awareness being situated at the other (lower) end of the awareness spectrum (Figure 4).\nSECMOL results analysis: The methodology and approach adopted for this work are generic, and applicable to a range of communities (e.g. school, village, company and government department). The methods are particularly conducive to a school/college environment, as there is an established discipline and culture for this genre of pedagogy and research, but an able researcher/facilitator can adapt the methods for other audiences. The exercises provide valuable insights regarding student DRR awareness and perception, including: the articulation/definition of their village/college contexts through simple map exercises; an ability to integrate spatial and theoretical data; and a potential to integrate community knowledge for hazard/risk resilience building purposes. 
The exercises reveal knowledge/awareness gaps, and areas of misconception. This analysis combined with a multi-hazard assessment of the SECMOL region (Petterson et al., 2019) provide material for developing evidence-informed DRR management strategies.\nGood practice at SECMOL that can be adapted for hazard and risk management: SECMOL students are encouraged to take on a range of organisational/administrative roles within the College (e.g. sanitation, cleaning and timetabling). Students gain experience of people management/organisation. SECMOL teaches a practical mountain-rural development and technology curriculum. The College mantra is \"learning through doing\". As an example, students manufacture high-insulation bricks made from local earth materials. The bricks maximise solar heat retention, resulting in internal building temperatures of c. +5°C when external temperatures are c. -15 to -25°C. This learning culture raises student awareness of practical environmental management. Another practical sustainable development management strategy is waste management (Figure 5). Students participate in all aspects of the waste lifecycle, from creation to disposal. If waste is disposed of in a landfill, this is viewed as a disappointment. Practical measures are adopted for the reduction, re-use and recycling of waste. For example, students assist in the design of waste-sorting systems and waste-stream management (Figure 5) and waste materials are re-sold for money (e.g. electronic materials), re-used (e.g. in adding insulation to earthen bricks) and composted. The SECMOL pedagogical philosophy (independent learner, life-skills acquisition and practical hands-on learning), combined with the adoption of sophisticated management approaches, encourages the development of generic and transferrable skills. These can be adapted for Ladakhi DRR policy and practice, enhancing local capacity (e.g. Le Masson, 2015).\nInterconnected geoscience: the science-policy-practice platform: Challenges exist in connecting science, policy and practice. Research/science can feed directly into policy/practice if it has been designed with end-users in mind, and if it is of immediate relevance to policy/practice. An example of research feeding into policy/practice is the identification of people with disabilities as being at significantly greater vulnerability to disasters. The Pacific Community (SPC) and the Pacific Islands Forum Secretariat worked collaboratively to produce a Pacific Framework for the \"Rights of Persons with Disability\" that included recommendations for awareness with respect to DRR. SPC further mainstreamed the disability issue within its DRR practical programme (e.g. PIFS, 2016). Research can also be so theoretical, or inappropriate, or expressed in inaccessible specialist language, that integrating results into policy and practice within realistic timeframes can prove difficult/impossible (Petterson, 2019; Choi et al., 2005; DfID, 2014; Joynson and Leyser, 2015; Mazor et al., 2018; Viergever, 2013; Weichselgartner and Kasperson, 2010). Other workers advocate more multidisciplinary/holistic approaches, designed to take account of social, economic and development contexts, in collaboration with end-users and policy-makers (Petterson, 2019; Weichselgartner and Kasperson, 2010; Stewart and Gill, 2017; Steffen et al., 2018; Petterson and Tawake, 2018; Petterson et al., 2008; Pacific Community, 2015; IPCC, 2014; Gill and Bullough, 2017; Di Capua et al., 2017; Cronin et al., 2004).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to apply the concept of \"Interconnected Geoscience\" to a disaster and risk reduction (DRR) case study at SECMOL College, near Leh, Ladakh, N. India. Interconnected geoscience is a model that advocates holistic approaches to geoscience for development. This paper reports research/practical work with Ladakhi students/staff, undertaking community-oriented DRR exercises in hazard awareness, DRR themed village/college mapping, vulnerability assessments and DRR management scenario development. The geoscientific hazard analysis work is published within a separate sister paper, with results feeding into this work. This work addresses aspects of, and contributes to, the DRR research(science)-policy-interface conversation.\nDesign/methodology/approach: Interconnected geoscience methodologies for DRR here are: the application of geoscience for hazard causality, spatial distribution, frequency and impact assessment, for earthquakes, floods and landslides, within the SECMOL area; the generation of community-developed DRR products and services of use to a range of end-users; the development of a contextual geoscience approach, informed by social-developmental-issues; and the active participation of SECMOL students/teachers and consequent integration of local world-views and wisdom within DRR research. Initial DRR awareness levels of students were assessed with respect to earthquakes/floods/landslides/droughts. Following hazard teaching sessions, students engaged in a range of DRR exercises, and produced DRR themed maps, data, tables and documented conversations of relevance to DRR management.\nFindings: Students levels of hazard awareness were variable, generally low for low-frequency hazards (e.g. earthquakes) and higher for hazards such as floods/landslides which either are within recent memory, or have higher frequencies. The 2010 Ladakhi flood disaster has elevated aspects of flood-hazard knowledge. Landslides and drought hazards were moderately well understood. Spatial awareness was identified as a strength. The application of an interconnected geoscience approach immersed within a student+staff college community, proved to be effective, and can rapidly assess/build upon awareness levels and develop analytical tools for the further understanding of DRR management. 
This approach can assist Ladakhi regional DRR management in increasing the use of regional capability/resources, and reducing the need for external inputs.\nPractical implications: A series of recommendations for the DRR geoscience/research-policy-practice area includes: adopting an \"interconnected geoscience\" approach to DRR research, involving scientific inputs to DRR; using and developing local capability and resources for Ladakhi DRR policy and practice; using and further developing the DRR exercises presented in this paper, to integrate science with communities, and further empower communities; taking account of the findings that hazard awareness is variable, and weak, for potentially catastrophic hazards, such as earthquakes, when designing policy and practice for raising DRR community awareness; ensuring that local values/world views/wisdom inform all DRR research, and encouraging external \"experts\" to carefully consider these aspects within Ladakh-based DRR work; and further developing DRR networks across Ladakh that include pockets of expertise such as SECMOL.\nOriginality/value: The term \"interconnected geoscience\" is highly novel, further developing thinking within the research/science-policy-practice interface. This is the first time an exercise such as this has been undertaken in the Ladakh Himalaya.\n",
"role": "assistant"
}
] |
The effect of tasting sheet sensory descriptors on tasting room sales
|
[
"Marketing",
"Regression",
"Wines",
"Econometric model"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Wine sensory descriptors are used to attract consumers to a particular wine and influence their purchase decision. Consumers perceive wine purchase as a risk, not only because the product is complex, challenging, and intimidating, but also because the sensory experience is the greatest concern. The inclusion of sensory descriptors may reduce risk and help consumers with purchasing decisions, especially when they are unable to sample the wine. In winery tasting rooms, sensory descriptors are often included on tasting sheets to describe a wine's aroma and flavor to customers. Determining the impact of tasting sheet sensory descriptors on overall tasting room wine sales is important, especially to wineries that rely on tasting rooms as the primary source of sales (such as many in New York), because the majority of their sales are made in the tasting room (Stonebridge Research, 2010).\nLiterature review: Sensory descriptors are used to encourage consumer preference and reduce consumer risk when choosing a new wine. For example, studies have established that consumers consider simple taste and smell descriptors to be important label information for wine choice (Charters et al., 2000), and find elaborate back-label taste descriptions to be valuable when purchasing wine for a special occasion (Mueller et al., 2010). In restaurant settings, descriptive menu labels have not only been found to increase sales by 27 percent, but also increase customers' perceptions of the meal and restaurant (Wansink et al., 2001). A second study by Wansink et al. (2005) confirmed that descriptive menu labels resulted in an improved perception of food by customers, who rated dishes as more appealing. Tuorila et al. (1998) found that ratings for unfamiliar foods were enhanced when the products were described with positive information. These studies support the idea that not only sales, but also overall perceived quality of a product, are enhanced by descriptors. If this holds true with wines, such an effect may have major impacts on smaller wineries, such as those in New York, that have little to no national marketing and rely on direct-to-consumer tasting room sales to return a profit.\nMethods: Tasting room selection\nResults: Descriptive statistics of sales performance (dollars and bottles per day) and the corresponding explanatory variables are presented in Table I. Dollar and bottles sales averaged $1,860 and 167 per day, respectively. Each tasting room had the same mean and standard deviation of 0.11 and 0.32, respectively, for an equal representation of each winery in the estimating sample. Friday, Saturday, and Sunday also had the same mean of 0.33 and standard deviation of 0.47 for an equal number of observations for each day in the sample. Festivals occurred in the area about a quarter of the total study period time, resulting in a mean of 0.26 with a standard deviation of 0.44. Rainy and mostly cloudy conditions were experienced about half of the study days, with a mean of 0.53 and a standard deviation of 0.50.\nConclusions: This study examined the effect of tasting sheet sensory descriptors on tasting room wine sales, focusing on New York tasting rooms that rely mainly on direct-to-consumer sales. 
As these New York wineries rely on tasting room sales to generate the majority of their revenue, and the development of sensory descriptors and production of tasting sheets represents time, effort, and expense, the efficacy of tasting room descriptor sheets on wine sales is of interest. The results suggest that sensory descriptors do not necessarily increase wine sales in the tasting room, in contrast to previous literature examining the effect of descriptors on food and wine product sales in retail outlets such as grocery stores and restaurants. In such outlets, sampling is rarely allowed, so descriptors are the only means consumers have to help make purchasing decisions. In contrast, tasting room visitors are often allowed to sample several wines, perhaps making sensory descriptors less important for decision-making. Complex and unfamiliar sensory descriptors may be intimidating to inexperienced consumers, who may face further frustration if they try a wine based on its sensory description but cannot recognize the same attributes, or if their expectations are not met. Descriptors that are unappealing to certain consumers may deter those who connote the descriptor with a negative sentiment, which results in less liking of the product (Wansink et al., 2000). More experienced wine tasters may have existing sensory expectations based on their knowledge of grape variety or wine style, reducing the effect of sensory descriptors on their choice. In fact, one tasting room involved in the study noted that sensory descriptors seemed to have a greater impact on wines carrying Alcohol and Tobacco Tax and Trade Bureau (TTB) approved fanciful names (in contrast to wines identified by varietal), suggesting that consumers may consult descriptors more when tasting unfamiliar wines. This statement is supported by a study involving apples, which found that, when presented with sensory descriptors, panelists favored apples with unfamiliar names, even though those apples were favored least when identified by name alone (Swahn et al., 2010). This study suggests that consumers may choose an unfamiliar product based on the sensory expectations they create from descriptors. Further work needs to be done to assess the impact of sensory descriptors on tasting choice versus actual wine purchase and liking in the tasting room, and to determine whether sensory descriptors are more effective with specific wine attributes, such as familiarity.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to study the impact that tasting sheet sensory descriptors have on wine sales in tasting rooms that rely on direct-to-consumer sales to sell the majority of their wines, such as those in New York wine regions.\nDesign/methodology/approach: Nine tasting rooms participated in the study that took place on weekends (Friday, Saturday, and Sunday) during a six-week period in July and August 2012. Tasting rooms alternated tasting sheets by weekend, one including sensory descriptors and one omitting sensory descriptors. At the end of each weekend, tasting room managers compiled information on daily wine bottle and (in the case of seven wineries) dollar sales. A multivariate statistical model was created to measure the relationship between the treatment (tasting sheet with or without descriptors) and wine sales, controlling for other variables that could influence wine sales.\nFindings: The authors found that both bottle and dollar sales were higher when tasting sheets without sensory descriptors were used, with dollar sales statistically significant at the 10 percent level. Other variables that impacted wine sales included the specific tasting room, the day of the weekend, and festivals occurring in the area.\nPractical implications: Many tasting rooms, particularly in New York, rely on the tasting room for the majority of wine sales. Determining factors that affect sales can help tasting room managers/owners optimize the tasting room experience for maximized profits.\nOriginality/value: While there have been studies involving the impact of descriptors on sales of food and wine products, these studies have all taken place in a grocery store or restaurant setting where many different brands and varieties are offered. There has been no research studying the impact of descriptors on wine sales in the tasting room, where tasters have a limited selection and an option to sample products before purchasing. There has also been little research studying aspects of tasting sheets.\n",
"role": "assistant"
}
] |
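The abstract above describes "a multivariate statistical model ... to measure the relationship between the treatment (tasting sheet with or without descriptors) and wine sales, controlling for other variables". A minimal sketch of such a specification follows; the file and column names are illustrative assumptions, not the authors' published code.

```python
# Hedged sketch of the tasting-room sales model described above: daily
# sales regressed on a descriptor-sheet treatment dummy plus controls.
# File and column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tasting_room_sales.csv")

# dollar_sales: daily dollar sales; no_descriptors: 1 if the tasting sheet
# omitted sensory descriptors; festival: 1 if a local festival occurred;
# rainy: 1 on rainy/mostly cloudy days; day, winery: categorical controls.
model = smf.ols(
    "dollar_sales ~ no_descriptors + festival + rainy + C(day) + C(winery)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
# A positive coefficient on no_descriptors, significant at the 10 percent
# level, would mirror the dollar-sales finding reported above.
```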
Professional voices on risk and accidents in home care - a Swedish survey study
|
[
"Home care",
"Injuries",
"Risks",
"Home healthcare",
"Occupational therapists",
"Residential care-settings",
"Personal assistants",
"Nurses"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Injuries and risks at home and in residential care-settings have been addressed in a number of ways from the perspective of the individual resident. For example, the UN Convention on the Rights of Persons with Disabilities, the World Health Organization (WHO) and the World Bank's World Development Report, as well as the UN's AGENDA 2030 (United Nations, 2006; WHO and The World Bank, 2011; UN, 2015; Regeringskansliet, 2018) all highlight the need for safe and secure living environment for all people, regardless age or disability.\nStudy context: This article will focus on the experiences of the three professional groups concerning accidents and risks care recipients face in the home and residential care-setting, therefore, it is important to discuss the context of the study in relation to professions within health care and social services.\nLiterature review: In the review of previous research we have focused on both international and national studies from Sweden relevant for our study. Previous research of the home and residential care-settings for people with disabilities and the elderly has focused on security aspects and home modifications. Research has also brought attention to the experiences of residents and family members concerning implemented home modifications (Cumming et al., 1999; Hawkins and Stewart, 2002; Petersson et al., 2009; Kruse et al., 2010; Aplin et al., 2015; Norin et al., 2017; Pettersson et al., 2017; Thodelius and Lundalv, 2018). There is also need for additional research regarding living conditions at various institutions (Thodelius et al., 2017a, 2017b; Lundalv, 2018a; Lundalv, 2018b) and research has indicated a need for social support for people who have been injured (Lundalv, 2018c; Lundalv, 2018d). Previous studies have also focusing on, for example, the situations and health for elderly persons, building design for people with dementia and inclusive design of open space and safety for visually impaired persons (Leung et al., 2013; Fisher et al., 2018; Siu et al., 2019).\nAim and research questions: The basic premise of the study is to reconcile two aspects of health care and social services: health care and social services as a social organization, and health care and social services as a site-related activity. The purpose is to study the knowledge and experience of three professional groups (occupational therapists, personal assistants and public health nurses/nurses) concerning the vulnerability of residents to injuries and risks in the home and residential care-settings. The study is based on the following questions:\nMethods: As our aim was to highlight the experience of injury occurrence in the three different professional groups, namely, occupational therapists, public health nurses/nurses and personal assistants, an online survey was conducted. A total of 832 individuals participated in the study, which was conducted between November 2017 and March 2018, and the sample included permanently employed, employed by the hour and retirees.\nFindings: As seen in Table II somewhat more occupational therapists (38.1 per cent) participated in the study compared with personal assistants (30.5 per cent) and public health nurses/nurses (31.3 per cent). The vast of the professionals (48.6 per cent) work exclusively with care recipients over the age of 65. 
Only one in four study respondents work with care recipients under the age of 65 (24.8 per cent), and the remainder state that they routinely work with both age groups (26.6 per cent). However, internal variation was noted among the various professionals in relation to the age group with which they worked; personal assistants worked to a greater extent with care recipients younger than age 65, compared with the other two professional groups.\nExperience of injuries and risks: As seen above, a majority of the respondents in all three groups have personal experience of accidents among individuals under their care. To enable a further understanding of which factors are of importance to the occurrence of accidents, the open questions (free text in the survey) were analyzed qualitatively. The response rate for these open questions was surprisingly high: a total of 610 respondents wrote comments (237 public health nurses/nurses, 178 occupational therapists and 195 personal assistants). The analytic work concerning risk factors is presented here as three themes: the individual circumstances of the residents, how the work was organized, and the design of the physical environment.\nDiscussion: Several countries are undergoing demographic shifts, including a substantial increase in the proportion of elderly people in society. People with dementia are often insecure, and the design of their homes has largely been ignored as an area of research with regard to the consequences of dementia (van Hoof et al., 2010). While it is a fact that the proportion of elderly individuals in society is increasing, the substantial care needs of people with disabilities in different age groups are rarely discussed (Hjalmarsson and Osterman, 2017). Moreover, the need for support interventions from society for people with various disabilities and for the elderly will also increase. In addition, staff in various professions are subjected to the changes, requirements and stresses that are associated with working as a personal assistant, occupational therapist or public health nurse/nurse (Kahlin et al., 2018; Womack et al., 2018). Nor will the costs and suffering that arise when accidents occur in residential care-settings decline over time.\nConclusions: The combination of the physical environment, social factors and the organization of the healthcare system is important for both the risk of injury and accidents at home and in residential care-settings. The study shows that most accidents occur in bathrooms/toilet facilities and in bedrooms, as well as in a combination of these two spaces, which means that the proximity of bedroom and bathroom is important when planning new housing where the elderly can live in comfort and dignity.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to find out what knowledge and experience of occupational therapists, personal assistants and public health nurses/nurses in Sweden can contribute concerning the vulnerability of residents to injury in different residential care-settings.\nDesign/methodology/approach: This study is based on an online survey. A total of 832 individuals responded to the survey. The data were analyzed from a mixed-method approach, using descriptive statistics, correlations and textual-analysis.\nFindings: More than one in four representatives of these professions had witnessed accidents. The results show that bedrooms and bathrooms are the rooms in which accidents are most likely to occur in homes.\nResearch limitations/implications: One limitation of the study is that the impact of educational initiatives on the different professions was not investigated, so it is not possible to ascertain what effect this may have had on risk identification and accident prevention measures in residential care-settings.\nOriginality/value: To the best of the authors' knowledge, no prior study of these issues has been conducted. This study is deemed to have significant social benefit because of the steadily increasing need for care in residential settings. No other study has addressed the importance of the physical environment in this context. Collaborations involving researchers from various disciplines, professional organizations and public and private sector employers involved in personal assistance have contributed specific knowledge.\n",
"role": "assistant"
}
] |
Effects of working capital management on SME profitability
|
[
"Working capital",
"Profit",
"Small to medium‐sized enterprises"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The corporate finance literature has traditionally focused on the study of long-term financial decisions. Researchers have particularly offered studies analyzing investments, capital structure, dividends or company valuation, among other topics. But the investment that firms make in short-term assets, and the resources used with maturities of under one year, represent the main share of items on a firm's balance sheet. In fact, in the sample used in the present study, current assets of small and medium-sized Spanish firms represent 69 percent of their assets, and at the same time their current liabilities represent more than 52 percent of their liabilities.Working capital management is important because of its effects on the firm's profitability and risk, and consequently its value (Smith, 1980). Specifically, working capital investment involves a tradeoff between profitability and risk. Decisions that tend to increase profitability tend to increase risk, and, conversely, decisions that focus on risk reduction will tend to reduce potential profitability. Gitman (1974) argued that the cash conversion cycle was a key factor in working capital management. Actually, decisions about how much to invest in the customer and inventory accounts, and how much credit to accept from suppliers, are reflected in the firm's cash conversion cycle, which represents the average number of days between the date when the firm must start paying its suppliers and the date when it begins to collect payments from its customers. Previous studies have used measures based on the cash conversion cycle to analyze whether shortening this cycle has positive or negative effects on the firm's profitability. Empirical evidence relating working capital management and profitability in general supports the fact that aggressive working capital policies enhance profitability (Jose et al., 1996; Shin and Soenen, 1998; for US companies; Deloof, 2003; for Belgian firms; Wang (2002) for Japanese and Taiwanese firms). This suggests that reducing working capital investment is likely to lead to higher profits.These previous studies have focused their analysis on larger firms. However, the management of current assets and liabilities is particularly important in the case of small and medium-sized companies. Most of these companies' assets are in the form of current assets. Also, current liabilities are one of their main sources of external finance because they encounter difficulties in obtaining funding in the long-term capital markets (Petersen and Rajan, 1997) and the financing constraints that they face (Whited, 1992; Fazzari and Petersen, 1993). In this respect, Elliehausen and Wolken (1993), Petersen and Rajan (1997) and Danielson and Scott (2000) show that small and medium-sized US firms use vendor financing when they have run out of debt. Thus, efficient working capital management is particularly important for smaller companies (Peel and Wilson, 1996).In this context, the objective of the current work is to provide empirical evidence about the effects of working capital management on profitability for a panel made up of 8,872 small to medium-sized enterprises (SMEs) during the period 1996 to 2002. This work contributes to the literature in two ways. First, no such evidence exists for the case of SMEs in earlier studies. 
A sample of Spanish SMEs was studied; these operate within the so-called continental model, which is characterized by its less developed capital markets (La Porta et al., 1997), and by the fact that most resources are channeled through financial intermediaries (Pampillon, 2000). All this suggests that Spanish SMEs have fewer alternative sources of external finance available, which makes them more dependent on short-term finance in general, and on trade credit in particular. As Demirguc-Kunt and Maksimovic (2002) suggest, firms operating in countries with more developed banking systems grant more trade credit to their customers, and at the same time they receive more finance from their own suppliers. The second contribution is that, unlike the previous studies (Jose et al., 1996; Shin and Soenen, 1998; Deloof, 2003; Wang, 2002), in the current work robust tests for the possible presence of endogeneity problems have been applied. The aim is to ensure that the relationships found in the analysis carried out are due to the effects of the cash conversion cycle on corporate profitability and not vice versa. Our findings suggest that managers can create value by reducing their inventories and the number of days for which their accounts are outstanding. Similarly, shortening the cash conversion cycle also improves the firm's profitability. From this point, the work is structured as follows: in section 2 the theoretical foundations of the study are presented; in section 3 the sample and variables used are described; in section 4 the methodology employed is outlined; in section 5 the analyses carried out and the findings are presented; finally, the main conclusions are discussed.\n2. Theoretical foundations: Working capital management is a significant area of financial management, and the administration of working capital may have an important impact on the profitability and liquidity of the firm (Shin and Soenen, 1998). Firms can choose between the relative benefits of two basic types of strategies for net working capital management: they can minimize working capital investment or they can adopt working capital policies designed to increase sales. Thus, the management of a firm has to evaluate the trade-off between expected profitability and risk before deciding the optimal level of investment in current assets. On the one hand, minimizing working capital investment (aggressive policies) would positively affect the profitability of the firm, by reducing the proportion of its total assets in the form of net current assets. However, Wang (2002) points out that if the inventory levels are reduced too much, the firm risks losing increases in sales. Also, a significant reduction of the trade credit granted may provoke a reduction in sales from customers requiring credit. Similarly, increasing supplier financing may result in losing discounts for early payment. In fact, the opportunity cost may exceed 20 percent, depending on the discount percentage and the discount period granted (Wilner, 2000; Ng et al., 1999). On the other hand, and contrary to traditional belief, investing heavily in working capital (conservative policy) may also result in higher profitability. In particular, maintaining high inventory levels reduces the cost of possible interruptions in the production process and of loss of business due to the scarcity of products, reduces supply costs, and protects against price fluctuations, among other advantages (Blinder and Maccini, 1991). 
Also, granting trade credit favors the firm's sales in various ways. Trade credit can act as an effective price cut (Brennan et al., 1988; Petersen and Rajan, 1997), incentivize customers to acquire merchandise at times of low demand (Emery, 1987), allow customers to check that the merchandise they receive is as agreed (quantity and quality) and to ensure that the services contracted are carried out (Smith, 1987), and help firms to strengthen long-term relationships with their customers (Ng et al., 1999). However, these benefits have to offset the reduction in profitability due to the increase of investment in current assets. Most empirical studies relating to working capital management and profitability support the fact that aggressive working capital policies enhance profitability. In particular, Jose et al. (1996) provide strong evidence for US companies on the benefits of aggressive working capital policies. Shin and Soenen (1998) analyze the relation between the net trade credit and profitability for a sample of firms listed on the US stock exchange during the period 1974-1994. Their results also show strong evidence that reducing the net trade credit increases firms' profitability. However, this relationship is not found to be very strong when the analysis is at the level of a specific industry (Soenen, 1993). More recently, Deloof (2003) analyzes a sample of large Belgian firms during the period 1992-1996. His results confirm that Belgian firms can improve their profitability by reducing the number of days accounts receivable are outstanding and reducing inventories. Moreover, he finds that less profitable firms wait longer to pay their bills. Finally, Wang (2002) analyzes a sample of Japanese and Taiwanese firms from 1985 to 1996, and finds that a shorter cash conversion cycle is related to better operating performance. These results can be partially explained by the fact that there are industry benchmarks to which firms adhere when setting their working capital investment policies (Hawawini et al., 1986). Thus, firms can increase their profitability by reducing investment in accounts receivable and inventories to a reasonable minimum, indicated by the benchmarks for their industry. Furthermore, as pointed out by Soenen (1993), cash conversion cycle management tries to collect cash inflow as quickly as possible, and to postpone cash outflow as long as possible. The result will be the shortening of the cash conversion cycle.\n3. Data and variables: 3.1 Data\n4. Methodology: The effects of working capital management on SME profitability were tested using the panel data methodology. Specifically, estimates were obtained for four equations of the form ROA_it = β_0 + β_1 X_it + β_2 SIZE_it + β_3 SGROW_it + β_4 DEBT_it + β_5 GDPGR_t + η_i + λ_t + ε_it, where the working capital variable X is AR (Equation 1), INV (Equation 2), AP (Equation 3) or CCC (Equation 4). ROA measures the return on assets, AR the number of days accounts receivable, INV the number of days inventories, AP the number of days accounts payable, CCC the cash conversion cycle, SIZE the company size, SGROW the sales growth, DEBT the debt level, and GDPGR the annual GDP growth; ε_it is the random disturbance. η_i (unobservable heterogeneity) measures the particular characteristics of each firm. The parameters λ_t are time dummy variables that change over time but are equal for all the firms in each of the periods considered. This methodology presents important benefits. These include the fact that panel data methodology assumes that individuals, firms, states or countries are heterogeneous. Time-series and cross-section data studies not controlling for this heterogeneity run the risk of obtaining biased results. 
Furthermore, panel data give more informative data, more variability, less collinearity among variables, more degrees of freedom and more efficiency (Baltagi, 2001). Estimating models from panel data requires the researchers first to determine whether there is a correlation between the unobservable heterogeneity η_i of each firm and the explanatory variables of the model. If there is a correlation (fixed effects), it would be possible to obtain the consistent estimation by means of the within-group estimator. Otherwise (random effects) a more efficient estimator can be achieved by estimating the equation by generalized least squares (GLS). The normal strategy to determine whether the effects are fixed or random is to use the Hausman (1978) test under the null hypothesis E(η_i|x_it) = 0. If the null hypothesis is rejected, the effects are considered to be fixed, and the model is then estimated by OLS. If the null hypothesis is accepted, there would be random effects, and the model is then estimated by GLS. In this way the analysis can achieve a more efficient estimator of β.\n5. Results: 5.1 Univariate analysis\n6. Conclusions: Working capital management is particularly important in the case of small and medium-sized companies. Most of these companies' assets are in the form of current assets. Also, current liabilities are one of their main sources of external finance. In this context, the objective of the current research has been to provide empirical evidence about the effects of working capital management on the profitability of a sample of small and medium-sized firms. To this end, a sample of Spanish firms was used to conduct a study with panel data for SMEs. Data on a panel of 8,872 SMEs were collected, covering the period from 1996 to 2002. The results are similar to those found in previous studies that focused on large firms (Jose et al., 1996; Shin and Soenen, 1998; Wang, 2002; Deloof, 2003), and the analyses carried out confirm the important role of working capital management in value generation in small and medium-sized firms. There is a significant negative relation between an SME's profitability and the number of days accounts receivable and days of inventory. We cannot, however, confirm that the number of days accounts payable affects an SME's return on assets, as this relation loses significance when we control for possible endogeneity problems. Finally, SMEs have to be concerned with working capital management because they can also create value by reducing their cash conversion cycle to a minimum, as far as that is reasonable.\n",
"role": "user"
},
{
"content": "Purpose: - The object of the research presented in this paper is to provide empirical evidence on the effects of working capital management on the profitability of a sample of small and medium-sized Spanish firms.\nDesign/methodology/approach: - The authors have collected a panel of 8,872 small to medium-sized enterprises (SMEs) covering the period 1996-2002. The authors tested the effects of working capital management on SME profitability using the panel data methodology.\nFindings: - The results, which are robust to the presence of endogeneity, demonstrate that managers can create value by reducing their inventories and the number of days for which their accounts are outstanding. Moreover, shortening the cash conversion cycle also improves the firm's profitability.\nOriginality/value: - This work contributes to the literature in two ways. First, no previous such evidence exists for the case of SMEs. Second, unlike previous studies, in the current work robust test have been conducted for the possible presence of endogeneity problems. The aim is to ensure that the relationships found in the analysis carried out are due to the effects of the cash conversion cycle on corporate profitability and not vice versa.\n",
"role": "assistant"
}
] |
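The panel estimation summarized in this entry (four equations estimated by fixed or random effects, chosen via the Hausman (1978) test) can be sketched as follows with the linearmodels package. This is an illustrative reconstruction under an assumed data layout and assumed column names, not the authors' code.

```python
# Illustrative sketch of the paper's panel estimation and Hausman test,
# assuming a DataFrame indexed by (firm, year) with the columns below.
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

df = pd.read_csv("sme_panel.csv").set_index(["firm", "year"])

# Equation 4 variant: ROA on the cash conversion cycle plus controls.
fe = PanelOLS.from_formula(
    "roa ~ 1 + ccc + size + sgrow + debt + gdpgr + EntityEffects", data=df
).fit()
re = RandomEffects.from_formula(
    "roa ~ 1 + ccc + size + sgrow + debt + gdpgr", data=df
).fit()

# Hausman statistic H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE),
# computed over the slope coefficients common to both estimators.
common = [c for c in fe.params.index
          if c in re.params.index and c != "Intercept"]
b = fe.params[common] - re.params[common]
V = fe.cov.loc[common, common] - re.cov.loc[common, common]
h = float(b.T @ np.linalg.inv(V) @ b)
p = stats.chi2.sf(h, df=len(common))
print(f"Hausman H = {h:.2f}, p = {p:.3f}")  # reject -> use fixed effects
```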
Leading after the boom: developing future leaders from a future leader's perspective
|
[
"Leadership",
"Leadership development",
"Belief",
"Curiosity",
"Core values"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: As I sit down to write this paper, I reflect on the challenges that I face as a Leader. I have spent the last six years working on, developing and honing in on my own personal leadership skills only to feel great success in the smallest moments and failure in the majority of the other moments. What could I have done better? How could I have challenged and asked questions differently? What would have helped the journey be more successful? Yet in every failure is a learning opportunity. I look to leaders with more experience for advice in those moments, and I am now realizing that I am becoming that experienced leader to those leaders who are starting on their own leadership journey. This raises a whole new challenge for me. How will I begin to shape and lead the new leaders of the up and coming generation? As leadership is beginning to shift, there is an opportunity to develop new leaders in a different capacity than they have been developed in the past. By asking the right questions to develop a leader's sense of curiosity will help to teach young leaders to seek out new and alternate answers to what has been done in the past. This will increase their adaptability and effectiveness as a leader. By challenging new leaders to know what values they stand for and helping them to discover their own personal set of beliefs, it will help them to identify what work will be most satisfying in their careers. This paper will discuss curiosity and belief and how they are worthy traits to begin teaching new leaders.\nIntroduction: For the purpose of this paper, let us look at leadership as a relationship between the potential of new leader and their ability to adapt and grow into different roles and in different environments. As a leader, you have to stay in-tune to the people and the environment that surround you. Anyone has the potential to lead in any setting with the right combination of innate leadership traits and learned leadership skills and abilities. Every leader will have his or her unique combination of innate and learned skills for success. But what are those skills that every leader seeks? What is the prescription for a good leader? That depends. Not that answer that any leader looks for. Multiple factors go into good leadership: personality, emotional intelligence, ability to gain buy in and the list can continue. To add to that, leadership is constantly evolving. Leadership is being looked to, more than before as we look to leaders to help guide a new generations in the workforce. Article after article about the new generations, what they need and want from work and from a leader continue to be a topic of discussion. The idea that a combination of leadership traits that create an effective leader is as unique as a fingerprint combined with the reality that leadership continues to evolve to fit the unique blend of needs and wants of a new generation creates a level of unsatisfying ambiguity. By understanding what sets the up and coming generations apart from generations prior, developing the need for curiosity and personal belief, or core values, of new leaders will be increasingly important as they seek out their relationship with leadership.\nSetting the millennial generation apart: As research on the previous generation continues to explain their impact on the workforce, new research explaining the new generation begins to surface. 
According to Lester (2011), \"the latest generation to come of age is the Millennial Generation\" (p. 7). This generation was born from the early 1980s into the early 2000s (p. 7). Lester (2011) goes on to explain that \"this group was born into the technology era and utilizes all the means of communication at their disposal including social networks, cellular phones, text messages and blogs, as well as have a command of the internet\" (p. 8). This technological difference is a clear defining distinction between this generation and the previous ones. In companies that employ both of these generations, leaders \"will be tasked to integrate different work ethics, communication styles and capabilities, expectations of management styles, wages and benefits, diversity and values\" (p. 8). Good leaders of the next generation will have to be savvy in the field of technology, adaptable to different learning styles and grounded in doing what is right. There has also been a generational shift of needs and wants over the past 100 years in regard not only to life but also to career expectations. Generations prior to X, Y and the millennial generation looked for stability, security, control over personal space and hard work that would lead to reward in their work and life. They earned respect and position through hard work and mastery of a skill (Bennis and Thomas, 2007, p. 41). A career path ran in only one direction, with one, maybe two, companies, and loyalty to a company was held in serious regard (p. 42). Over the past ten years, there has been a shift in what current and future generations are looking for when it comes to career and work. The current and future generations need and want identity, meaning and significance, developmental experiences, balance and to make a difference in their work and life (p. 32, p. 58). These generations' loyalty lies with the greater community around them, not just the company in which they work. They want to place their energy and attention into making a sustainable difference in the part of society that matters most to them (p. 60).\nThe next generation of leaders: The millennial leader will change leadership, just as the generation is beginning to change what it needs from companies, careers and leaders. For many millennials, their first job comes straight out of college, leaving their list of job experience little to none. The common practice currently in promoting leaders is that the right leader is someone who has the \"right\" list of experiences combined with the \"right\" personality characteristics. So what will a successful leader look like? Good leadership, and the definition of a good leader, can be elusive, but in almost every article, book, journal entry, magazine and blog, leadership success is defined by the ability of a leader to identify, sustain and inspire other talented people (Bennis and Thomas, 2007, p. 10). With that being the foundation of success, curiosity and belief will lend themselves to continued success in current leaders as well as be the traits that can be passed on to the leaders of the future.\nCuriosity: Curiosity drives leaders to know or learn something. Curiosity is inquisitiveness about, and fascination with, not only what drives a leader, but also what will make a successful future leader. Fuimano (2005) points out that leaders should not and do not need to have all the answers. The most important thing they need to have is the knowledge to access answers by being resourceful. 
They need to tap into their curiosity to help discover ideas and create possibilities that may have never been considered before (Fuimano, 2005, p. 8). Curiosity in leadership will provide a continuous evolution for future leaders. Whether they seek out information, find a new way to approach a situation, or provide a new solution to an old idea, the leaders' ability to be curious about all aspects of their leadership will create an environment of continuous improvement and adaptation in their field.\nBelief: The new generations will seek out work that will be fulfilling to them. A leader driven by belief holds \"certain core values that are enduring\" (Rath, 2007, p. 57). Belief does not necessarily mean religion in this case but rather a core set of values. Shaw (2006) explains that \"work must be meaningful; it must matter to you\" (p. 57). New generations will seek out work that is important and that meshes with their values. They have confidence in their beliefs and they have a \"clear perspective about who [they] are and what [they] aim to be doing in the future\" (p. 11). Belief will change the way in which leaders pick and choose where they work and where they lead. Organizations will need to dig into their values to create workplace environments that have strong core values attractive to up-and-coming leaders.\nConclusion: Current and future leaders alike will need to start asking questions, seeking out new information and forming their own opinions. They will need to develop their ability to be curious and seek out different solutions to every problem. As current leaders, we will need to ask questions that develop the ability to be curious in the leaders we develop. As Lyons (2012) describes, we will need to \"help fire [their] passions, sense of community, ownership, and worth\" (p. 87). The new generation of leaders will be \"trying to create [their] own values\" (p. 87). Leaders will need to understand their own identity and beliefs. They should challenge themselves, be open to change and be ready to be surprised. They will continue to grow personally and know what is important to them and where they can make a sustainable difference. They should challenge the beliefs that they are developing to get close to what they stand for, and strive to create a leadership experience that embodies those beliefs, helping them to find their passion, sense of community, ownership and worth.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims for an open discussion of new traits which are important to the success of future leaders. By understanding what is important to the new generations coming into leadership roles and how that differs from previous generations, a new set of leadership traits can be developed to help future leaders find success. The purpose of this paper is to challenge the current practices of leadership traits that are being developed and defined for making successful leaders.\nDesign/methodology/approach: This paper was designed to raise questions regarding what traits are being developed in current leaders to make them successful and provide two suggestions for new traits to be developed in future leaders. The bulk of the research was from books that analyzed leadership characteristics and the generational needs of the previous and current generations. The information was accompanied with personal experience working as and with the new generation of potential leaders.\nFindings: This paper initiates the need for a discussion on why it is important to start discussing the development of the future generation of leaders. It suggests that the traits of curiosity and belief will be beneficial to the success and fulfillment of future leaders.\nResearch limitations/implications: Because of the newness of this concept in regards to curiosity and belief being leadership traits, the research comes from theories and a real life, relevant understanding of leadership development. Therefore, it is encouraged to continue to build upon the thoughts and ideas presented in the paper.\nPractical implications: This paper includes implications that there is a need for leadership development and that through the development of future leader traits, based on what they want from the workforce, will help to make them successful and sustainable future leaders.\nOriginality/value: This paper fulfills a need to understand what areas future leaders will need to be developed to create a new generation of effective and successful leaders.\n",
"role": "assistant"
}
] |
QFD 3D: a new C-shaped matrix diagram quality approach
|
[
"House of Quality",
"Quality function deployment",
"Quality management",
"Matrix diagrams",
"QFD 3D",
"Quality management tool"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Quality function deployment (QFD) originates at the end of 1960s and it is a worldwide-known quality design methodology (Lochamy and Khurana, 1995). As QFD's father Yoji Akao states: \"QFD provides specific methods for ensuring quality throughout each stage of the product development process, starting with design. In other words, QFD is a method for developing a design quality aimed at satisfying the consumer and then translating the customer's demands into design targets and major quality assurance points to be used throughout the production stage\" (Akao, 2004).\n2. Methodology: The possibility of correlating three actors in a QFD approach has its practical translation in two cases, according to the type of actors involved: two customers and one provider; or one customers and two providers. In the following subsections we will deepen these two cases and face the QFD 3D methods linked to each of them.\n3. How QFD 3D works? An example of use: 3.1. Validation of the proposed quality tool\n4. Discussions: strengths and limitations of the proposed tool: Here, follows a list of the main pros of the proposed tool:\n5. Conclusion: The use of matrices, such as the HoQ, in modern QFD approaches wanes. In our opinion, this is mainly a consequence of practitioners' difficulty in manipulating complex matrices on 2D paper-based supports. Such difficulties generate time-consuming activities and led to frustration and abandonment. Moreover, the QFD typical customer-provider vision is quite simplistic compared to actual innovation models, such as Leydesdorff's triple helix.\n",
"role": "user"
},
{
"content": "Purpose: Quality function deployment (QFD) is a worldwide-known, design for quality approach, which gathers several design quality methods. Among them, the House of Quality (HoQ) correlates the Voice of Customer and the Voice of Company thanks to L-shaped (2D) Matrix Diagrams (MDs). This paper theorizes, as logically possible, the extension from a bi-dimensional representation (a customer and a provider) to a higher N-dimensional representational freedom without altering the typical QFD's customer-provider posture. The purpose of this paper is to present QFD 3D: the extension of the Relationship MD in the HoQ toward a third dimension (two customers and a provider or two providers and a customer).\nDesign/methodology/approach: The proposed method is based on an existing well-known quality management tool such as MDs. The authors extend its representational capability by substituting the current L-shaped MDs (2D) with C-shaped ones (3D). The mathematical validity of this extension is described to demonstrate the correctness of the approach.\nFindings: The paper presents the logical validity and a case study concerning a three-dimensional extension of the Relationship MD in the HoQ, typical of the QFD approach.\nResearch limitations/implications: The results are limited to a three-dimensional extension. Situations where more than three actors are simultaneously involved are theoretically possible, but they are out of the scope of the current research. The difficulty in manipulating 3D representations on traditional supporting tools will be completely reversed on new computer-supported tools. The proposed method is meant to be a useful and efficient instrument for correlating the needs and the services in multi-actors-based scenario, using a QFD design quality approach fueled by IT support tools.\nPractical implications: This paper can be used as guideline for further researches on N-dimensional extensions of HoQ. The proposed method can be used in a scenario based on the Triple Helix of Innovation. It gives a clear correlation between different needs and services, facilitating the decision-making process and the constitution of a more comprehensive view of the scenario under a quality management approach.\nOriginality/value: Thousands of articles propose different QFD case studies all based on a bi-dimensional correlation between a customer and a provider. This paper proposes a method to extend the pertinence of QFD to scenarios where more than two actors are simultaneously correlated. Considering in particular three actors, the resulting three-dimensional Relationship MD in the HoQ is a totally new design quality tool for correlating customers' needs and providers' services. This can result in a significant gain of time and representational ability for quality managers who have to correlate multiple customers with a service provider and conversely a customer to multiple service providers. This approach open the doors for new QFD tools fueled by IT.\n",
"role": "assistant"
}
] |
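As a rough illustration of the C-shaped (3D) Relationship MD this entry describes: where the L-shaped HoQ matrix is a 2D array relating one customer's needs to a provider's services, the C-shaped version becomes a rank-3 array relating two customers' needs to the services. The weighting scheme below is a hypothetical analogue of the usual HoQ column scores, not the paper's own algorithm.

```python
# Hypothetical sketch of a C-shaped (3D) relationship matrix: R[i, j, k]
# scores how strongly provider service k addresses need i of customer A
# jointly with need j of customer B, on the common QFD 0/1/3/9 scale.
import numpy as np

rng = np.random.default_rng(0)
R = rng.choice([0, 1, 3, 9], size=(4, 5, 6))  # 4 and 5 needs, 6 services

w_a = np.array([0.4, 0.3, 0.2, 0.1])         # customer A need weights
w_b = np.array([0.3, 0.25, 0.2, 0.15, 0.1])  # customer B need weights

# Contracting over both customer dimensions ranks the provider's services,
# a 3D analogue of the HoQ's weighted column-importance scores.
priority = np.einsum("i,j,ijk->k", w_a, w_b, R)
print(priority / priority.sum())  # normalized service priorities
```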
Assessing health care quality in adolescent clinics, implications for quality improvement
|
[
"Health care quality",
"Quality",
"Patient safety",
"Patient perspectives",
"Management of clinical performance",
"Patient education"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Adolescents (10-19 years) are the world's future leaders and human resource (Ki-moon, 2016). However, according to the World Health Organization (WHO), many adolescents die prematurely, while others are living with diseases (WHO, 2015). This is partly because adolescents are more exposed to risky health behaviors like unsafe sex, alcohol and substance abuse (Bearinger et al., 2007; WHO, 2015).\nLiterature review: The WHO has developed eight global standards for assessing adolescent health care quality. The organization suggested some of these standards can be adopted for use depending on the context (Nair et al., 2015). Since adolescent health care quality improvement has not received much attention in developing countries like Ghana, five of the standards which are applicable to our context were adopted. These include facility characteristics, adolescent's health literacy, appropriate package of services, provider competencies, and equity and non-discrimination (Nair et al., 2015; WHO, 2015).\nMethods: Study setting\nResults: From Table I, it is shown that the majority of the respondents were females (69 percent), students (61 percent) and had junior high school education (72 percent). The mean age was 16 years and the majority of the respondents were between the age of 17 and 19 (53 percent). It was not surprising to find that the majority of the respondents were schooling. In addition, it was interesting to find that all the respondents had some level of formal education. Moreover, it was informative to find that the adolescent clinic at the health center recorded higher attendance than the other clinics. This may be as a result of its proximity to some junior high schools.\nDiscussion and implications: Adolescents are more exposed to risky health behaviors than other age groups. However, adolescent health care quality is below expectation. Having evidence-based information on adolescents' perceptions of quality is essential for designing effective health care quality improvement interventions. Adopting WHO standards, this study assesses perceive the quality of care in adolescent clinics.\nLimitations: Despite the practicality of these findings, the study was not devoid of limitations. Key among the limitations was the sampling. The sampling was not fairly distributed across adolescent clinics. This was largely due to low attendance in some of the adolescent clinics. This did not allow the authors to compared adolescents perceptions of quality of care across adolescent clinics. Subsequent studies should consider an equal sample size across adolescent clinics. In addition, generalization of the findings should be done with caution, because, it was not a nationwide study. The study was conducted in an urban setting which has different characteristics from the rural settings. Also, it would have been useful to combine both quantitative and qualitative methods. However, due to resource constraints the qualitative aspect was not explored. Future studies may consider adopting a qualitative research approach. For similar reasons, the study was limited to adolescents at the clinics only and did not include health providers and adolescents in the communities. 
Hence, future studies may consider including these populations.\nConclusion: The objectives of this study were to assess the quality of care in adolescent clinics and identify the factors that are salient to gauging the quality of care from the perspectives of adolescents. The findings revealed the respondents' perceived quality of care in the clinics to be good. Out of the five quality standards, four (adolescent's health literacy, provider competencies, appropriate package of services, and equity and non-discrimination) were found to be significantly associated with adolescents' perceptions of quality of care. Thus, these are the salient factors that adolescents in Ghana look out for when determining the quality of care. In the quest to improve the quality of care for adolescents, more resources should be channeled into improving these standards.\n",
"role": "user"
},
{
"content": "Purpose: Adolescents are more exposed to risky health behaviors. However, many adolescents do not seek health care due to the poor quality of care. The purpose of this paper is to assess health care quality in adolescent clinics in Tema, a suburb of Ghana.\nDesign/methodology/approach: Cross-sectional survey design was adopted to collect data from 365 adolescent respondents. Data were analyzed with the aid of Statistical Package for Social Science (version 20) using descriptive statistics and multiple linear regression.\nFindings: The results demonstrate that adolescents perceived quality of care in adolescent clinics to be good. The significant predictors of adolescents' overall perceptions of quality of care were provider competencies (b=0.311, p<0.01), adolescent's health literacy (b=0.359, p<0.01), appropriate package of services (b=0.093, p<0.05), and equity and non-discrimination (b=0.162, p<0.01).\nResearch limitations/implications: The study was conducted in an urban setting. Therefore, the generalization of findings must be done with caution.\nOriginality/value: Adolescent health care quality in Ghana is below expectation. However, it has received little attention from researchers. This study provides empirical evidence for adolescent health care quality improvement in developing countries like Ghana.\n",
"role": "assistant"
}
] |
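The findings in this entry report beta weights from a multiple linear regression of overall perceived quality on the five adopted WHO standards. A minimal sketch of that analysis is below; the column and file names are assumptions, and the authors used SPSS rather than Python.

```python
# Hypothetical re-creation of the reported regression: overall perceived
# quality regressed on the five adopted WHO standards. Standardizing the
# composite scores makes the coefficients comparable beta weights.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("adolescent_quality_survey.csv")  # assumed file/columns

predictors = [
    "facility_characteristics", "health_literacy",
    "package_of_services", "provider_competencies", "equity",
]
cols = predictors + ["overall_quality"]
z = (df[cols] - df[cols].mean()) / df[cols].std()  # z-score each composite

fit = sm.OLS(z["overall_quality"], sm.add_constant(z[predictors])).fit()
print(fit.params.round(3))   # beta weights, cf. b = 0.311, 0.359, ...
print(fit.pvalues.round(3))
```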
A longitudinal analysis for informativeness of earnings announcements in Borsa Istanbul
|
[
"Event studies",
"Information and market efficiency",
"International financial markets"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Earnings announcements are the main scheduled and periodic corporate announcement for listed firms in a stock market as they provide a wealth of financial and operational information with regard to the latest fiscal period. As such, an analysis of whether these announcements are valuable in terms of their content is of interest for companies, investors as well as regulatory bodies. Beaver (1968) was the first to empirically examine the information content of earnings announcements. From a valuation point of view, an earnings announcement could be considered to have information if it leads to change in market participants' expectation for future returns. Accordingly, Beaver (1968) used abnormal volatility and abnormal volume at the time of announcement as gauges for information value as the former reflects the resulting change in expectations of the market as a whole while the former reflects changes in the expectation of individual investors. Landsman and Maydew (2002) used similar proxies to those in Beaver (1968) for information content and concluded that there is an increase in the informativeness of quarterly earnings reports in the USA for the 1972-1998 period.\n2. Literature review: Using a US sample of 506 annual earnings announcements for the 1961-1965 period, Beaver (1968) established that earnings announcements have information content as evidenced by significant abnormal volume and abnormal volatility around earning announcement dates. Following this seminal work, several studies with more extensive data sets and different methodologies such as Morse (1981), Kiger (1972), Bamber (1986) and Ziebart (1990) reached similar conclusions on the information content of earnings. A related research question is how the informativeness of earnings announcements has evolved over time. Using a broad sample of 92,613 firm-quarter observations for the 1972-1998 period, Landsman and Maydew (2002) found that the magnitude of abnormal volume and volatility increased over time which they took as evidence of increased informativeness of earnings announcements. Francis et al. (2002) look at three different explanations for Landsman and Maydew's (2002) finding of increased information content of earnings. These are increase in the absolute amount of unexpected earnings conveyed in the announcements, increase in the intensity of investors' average reaction to unexpected earnings and increase in the amount of information released at earnings announcements. Their findings do no support the first two explanations and they conclude that the observed increase in information content is driven by the over-time expansion in the amount of concurrent information disclosed in firms' earnings announcements. In particular, they find that the length of earnings announcement press release grew from an average of 517 words in 1980 to 2,427 words in 1999 while disclosure of detailed income statement grew monotonically from 8.5 to 77.8 percent. Collins et al. (2009) confirm that expansion in the amount of concurrent information disclosed by firms is a driver of increase in information content. However, they also find that the market's increased reliance on non-GAAP Street earnings which are typically disseminated concurrently with firms' earnings announcements by analyst estimate clearinghouses such as IBES is also a factor in explaining informativeness.\n3. 
Variables and sample selection: 3.1 Defining measures for information content of earnings announcements\n4. Results: Figure 1 graphically illustrates a gradual increase in both AVOL and AAR at earnings announcements over the sample period of 2007-2017. In order to understand the dynamics behind this increase, we employ a regression-based approach similar to Landsman and Maydew (2002) and regress AVOL and AAR against the time trend variable, TIME, the annual earnings dummy, ANDUM, and the interaction between this variable and TIME (see specification (3) below). Columns 1 and 2 of Table II report the results of these regressions, for which t-statistics are corrected for heteroskedasticity and serial correlation using the Huber-White maximum likelihood estimation procedure. The time trend coefficient for both the AVOL and AAR regressions is significantly positive, with t-statistics of 9.1 and 6.5, respectively. This provides strong evidence that the information content of earnings, as proxied by AVOL and AAR, has increased over time in our sample period. On the other hand, the evidence on whether earnings informativeness is higher or lower for annual earnings announcements is mixed. The coefficient on ANDUM in Column 1 is positive and significant at the 1 percent level, while it is significantly negative in Column 2. Hence, AVOL is higher for annual earnings announcements, while AAR is lower. An interesting finding is that the coefficient on TIMExANDUM in Column 1 is negative and significant at the 1 percent level, indicating that the time trend in abnormal volatility is weaker for annual announcements compared to interim announcements. We obtain qualitatively similar results when we define AVOL and AAR using earnings announcement windows of [-1, 0] and [0, 1] rather than the [-1, 1] window utilized in this study.\n5. Conclusion: In this paper, we study the informativeness of earnings announcements and its time trend in Borsa Istanbul using abnormal volume and abnormal absolute return as measures for information content. We show that earnings announcements are significant information events, with both measures being significantly positive around the three-day window surrounding the earnings announcement. Furthermore, we find a positive time trend in informativeness, which is most pronounced in growth companies and announcements with high absolute surprise. Analyzing interim earnings announcements in addition to annual earnings announcements allows us to observe how the information content of annual announcements compares to the information content of interim announcements. We find higher AVOL, but lower AAR, at annual earnings announcements, which suggests that annual announcements are not strictly superior to interim announcements in terms of their informational value for investors. Finally, we examine the introduction of the online disclosure platform KAP as an improvement in information dissemination technology. We find an increase in AAR after the introduction of KAP, but no change in AVOL. We think a fruitful avenue for further research could be a comprehensive study to see which of the competing explanations, such as an increase in concurrent information, improvement in access to information or a change in investor sophistication, best explains the increasing informativeness of earnings announcements.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to assess the information value of earnings announcements for the 2007-2017 period in Borsa Istanbul.\nDesign/methodology/approach: Abnormal volatility (AVOL) and abnormal absolute return (AAR) in the three-day window around the earnings announcement are used as proxies for information content. A pooled regression of AVOL and AAR is conducted to test for the existence of information content and analyze its time trend along with its determinants.\nFindings: The authors find significantly positive AVOL and AAR which shows that earnings have information content for investors during the sample period. Furthermore, both proxies demonstrate a positive time trend after controlling for various firm characteristics and surprise measures. The authors take this as evidence that overall informativeness of earnings has increased over time. The authors observe that this increase is most prevalent for growth companies and earnings announcements with high absolute surprise. This study provides partial support for the hypothesis that value of earnings announcements has increased after an improvement in information dissemination technology with the inception of the online disclosure platform, KAP.\nPractical implications: Understanding information value of earnings announcements is of interest for companies which prepare earnings reports, regulators who set standards on their content and frequency and investors which make investment decisions based on information released at these announcements.\nOriginality/value: There had been few non-US studies related to information value of earnings announcements. The overwhelming majority of these are conducted using limited data sets from the latter part of the last century and only analyze annual earnings announcements. The authors aim to shed light on the subject using a broad and recent sample of quarterly earnings announcements from a major emerging market, Turkey.\n",
"role": "assistant"
}
] |
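For readers who want to see the shape of the informativeness regression described in the Borsa Istanbul record above, here is a minimal sketch. It regresses an information-content proxy on a time trend, an annual-announcement dummy and their interaction. The paper corrects t-statistics with a Huber-White maximum likelihood procedure, for which the Newey-West (HAC) covariance below is only a rough stand-in; the data file and column names are assumptions.

```python
# Illustrative sketch of the time-trend regression in the record above.
# "announcements.csv", the column names and the HAC lag choice are
# assumptions; the paper's own correction is a Huber-White ML procedure.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("announcements.csv")  # one row per earnings announcement

# AVOL (or AAR) on TIME, the annual dummy ANDUM, and their interaction.
model = smf.ols("AVOL ~ TIME + ANDUM + TIME:ANDUM", data=df)
result = model.fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(result.summary())
```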
Food policy in Bangladesh 2010: impacts of domestic wheat productivity growth
|
[
"Wheat",
"Food crops",
"Food industry",
"Productivity rate",
"Box Jenkins",
"Bangladesh"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Agriculture in the economy of Bangladesh: The economy of Bangladesh is predominantly based on agriculture. Bangladesh has some of the world's most fertile agricultural lands accounting for abundant growth of various crops. Agriculture sector is the largest and the most important source of income and employment in the country. Agriculture is not only its main industry but it is also the major sector of Bangladesh economy, which contributes about 24 percent of GDP and employs nearly 62 percent of the population. In the past, Bangladesh had to import most of its food. But improved farming techniques and increased use of irrigation and high yielding varieties (HYVs) has greatly increased agricultural production during the last three decades. In Bangladesh, since most of the cropped area even now does not have any assured irrigation, the monsoon assumes a crucial role in influencing agricultural production. Agriculture has also to provide food and is the major supplier of some basic raw materials for the domestic industry while at the same time it makes substantial contribution to the nation's trade balance, particularly through export earnings.During the past 32 years, the agricultural sector of Bangladesh has experienced a great deal of turbulence. Immediately after the achievement of independence in 1971, a series of disastrous harvests (attributable in large part to unfavourable weather) led to widespread food shortages, forcing the government to issue urgent appeals to the international community for emergency relief assistance. The recovery turned out to be temporary. Since population growth continued to outpace growth in food production, regular imports of cereals became necessary to meet chronic shortfalls. The industrial sector has been expanding, particularly cotton, leather, food processing and paper production. The garments and knitwear sectors have been growing strongly for the last seven years. In this paper is to examine and discussing the future expected food requirements and the techniques how to meet up increasing population demand for food in Bangladesh by increasing wheat productivity.\n2 The economic framework and methodology: 2.1 Sources of data, area and population of Bangladesh\n3 Results: 3.1 Agricultural land utilization patterns\n4 Conclusions and policy implications: Food gap based on actual consumption figures that take food distribution into consideration will be much higher in Bangladesh than one arrived at by matching availability with requirements at the aggregate levels. A realistic estimate of food requirement to meet the demand of the needy is essential to increase the productivity per hectare land. In the existing situation, the national level average yield of wheat is 1.9 MT/ha that was lower than any other stations. The reasons are late sowing, coupled with lack of quality seed, excess moisture at sowing, lack of fertilizer at reasonable price and timeliness at the farmers level and lack of capital. 
It is very important to note that if the government of Bangladesh decides to apply, at the farmers' level, the wheat production models outlined in this paper, then it is possible to increase the domestic wheat supply by 56.84 percent in model I, 115.79 percent in model II, 247.37 percent in model III and 321.58 percent in model IV, much higher than the existing level of production. To enhance food production, efficient agronomic research and extension in Bangladesh on improving the efficiency and profitability of wheat is necessary to help ensure sustainable use of Bangladesh's natural resources and to help diversify diets. If the population growth rate continues, then the population will be about 151.831 million in 2010. If the government can meet the challenges here, then there is hope for the future. If the government fails, then not only will the food problem become acute for the people, but many other problems will arise that will endanger the peace and stability of the country.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of the paper is to estimate wheat productivity in Bangladesh and forecast the future expected population and food requirements in the country by 2010.\nDesign/methodology/approach: - This paper reaches the objectives using total factors productivity approach, Box Jenkins approach, and sensitivity analysis for wheat farms in the country. The study used data on wheat during 1972-2002.\nFindings: - In the existing situation, the national average level wheat yield was 1.9 MT/ha that was lower than any other stations. The reasons are late sowing, coupled with lack of seed quality, excess moisture at sowing, lack of fertilizer at reasonable price and timeliness at the farmers' level, and lack of capital. The total productivity grew at an average annual rate of 1.35 percent.\nPractical implications: - The results show that the Bangladeshi government could increase the domestic wheat supply by 56.84, 115.79, 247.37, and 321.58 percent depending, respectively, on the applied model I-IV, that is much higher than the existing level of production.\nOriginality/value: - This paper brings together diverse views and fusing them together providing a future path for research and taking suitable policy for wheat production to meet the demand for food.\n",
"role": "assistant"
}
] |
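The wheat-policy record above relies on a Box-Jenkins approach to forecast population and food requirements to 2010. As a hedged illustration of that class of model, the sketch below fits an ARIMA specification to an annual series and forecasts ahead; the (1, 1, 1) order, file name and column names are assumptions, not the paper's identified model.

```python
# Minimal Box-Jenkins style sketch in the spirit of the record above.
# The series, file name and ARIMA order are illustrative assumptions.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

series = pd.read_csv("wheat_production.csv", index_col="year")["output_mt"]

# In practice the order would be identified from ACF/PACF diagnostics.
fit = ARIMA(series, order=(1, 1, 1)).fit()
print(fit.forecast(steps=8))  # e.g. forecast the years 2003-2010
```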
Regularities in the consumption of a subscription service
|
[
"Stochastic modelling",
"Binomial distribution",
"Marketing",
"Periodical subscriptions"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\n1. Introduction: The Ehrenbergian school of thought uses stochastic models to describe stable and law-like patterns in aggregate consumer purchase behaviour (Ehrenberg et al., 2004; Uncles et al., 1995). As individual consumer behaviour varies irregularly and occurs as if at random, the Ehrenbergian school sees decision models that attempt to predict determinants of individual purchase behaviour as likely to perform poorly (Ehrenberg et al., 2001, 2004). By contrast, it prefers descriptive stochastic models, and of these the negative binomial distribution or NBD (Ehrenberg, 1959; Morrison and Schmittlein, 1988) is one of the best known. The NBD is the focus of this research.Studies applying stochastic modelling to consumer behaviour overwhelmingly restrict their analyses to consumer packaged goods, such as detergents or instant coffee (Bhattacharya, 1997; Ehrenberg et al., 2004; Uncles et al., 1995). As Ehrenberg (1959, p. 26) suggests, the NBD concerns \"ordinary private consumer's purchases of non-durable consumer goods ... characterised by being marketed in pre-packed and branded form.\" With these goods, inter-purchase periods are short, and consumers may choose a different brand or buy multiple brands at each purchase incidence. Moreover, devoting all purchases to a single brand is rare, with consumers spreading their category requirements over multiple brands (Lee and Mizerski, 2006; Sharp et al., 2002).Consumers of subscription services, such as utilities or mobile telecommunications, normally use one brand and for long duration (Sharp et al., 2002). If they switch, they switch totally from the brand before adopting another. With such exclusive sole-loyalty and long inter-purchase period, some researchers contend that subscription services may be a boundary condition for stochastic modelling (Lee and Mizerski, 2006; Sharp et al., 2002). To this end, the first research question of this research concerns whether the NBD is generalisable beyond consumer packaged goods to a long-tenure subscription service like mobile telecommunications. A model fit would extend the scope of the NBD, and further attest to its robustness in modelling consumer behaviour.While the NBD is simple yet elegant, unobserved heterogeneity in consumer behaviour may introduce bias into the model and its results. For example, in consumer good purchases, heterogeneity in SKU purchases (pack sizes or flavours), the household member for whom purchases are made, or usage occasion could all bias the results obtained from modelling the simple construct of brand purchase occasions. Similarly, in this research, unobserved heterogeneity, such as customers' choice of calling plans or subscription tenure, may introduce bias. In addressing the first question, we assume that the simplified purchase construct we use is robust to this bias.An emerging stream of research, however, has attempted to shed more light on the impacts of heterogeneity by incorporating marketing covariates into the NBD (Banelis et al., 2005; Kwok et al., 2006). These studies examine whether model fits can be improved by incorporating into the model marketing-covariates pertinent to purchase behaviour. 
Extending this under-researched area, the second question of this paper tests whether customer tenure - the duration that customers stay with a mobile-telecommunication operator - may be a covariate to the NBD model in the first question. If customer tenure is a significant covariate, then an NBD model that accounts for tenure would increase the model's applicability by allowing managers to explain variations in mobile service consumption beyond those accounted for by the base NBD model.This paper proceeds by first providing a conceptual overview of the NBD. It then fits the model to a random sample of mobile service consumption, followed by testing for the significance of tenure as a covariate to the model. The paper concludes with discussions of the implications and avenues for future research.\n2. Conceptual background about the NBD model: Ehrenberg (1959) first used the NBD model to describe consumer purchases of a product category or brand. Two assumptions about purchase incidences underpin the NBD model (Dunn et al., 1983; Uncles et al., 1995). The model assumes that consumer purchases of a product or category follow a Poisson distribution, while individual purchase rates are gamma distributed across the consumer population. Collectively, the two assumptions mean that over time, consumers tend to have steady long-run purchase rate (e.g. a consumer buys ten times in a year), but not regular purchase pattern (i.e. the ten purchases occur randomly within the year). NBD also requires stable markets, where the number of buyers and rate of purchase in each period of analysis is approximately equal (Morrison and Schmittlein, 1988; Uncles et al., 1995).2.1 The k and a parameters of the NBD\n3. Methods: A public-listed mobile-telecommunication operator provided the data set. To produce the final random sample, the firm first extracted every nth customer from the entire customer base, yielding 1,546 customer records. The firm's identity and value of n were withheld for privacy. The monthly billed amount for each individual in the sample was the average of actual bills charged for three months (January to March 2006). Using a random rather than convenience sample, and actual rather than estimated or intended consumption, should improve the accuracy of the modelling (Malhotra et al., 2008).3.1 Phase 1 - fitting the NBD for subscription services\n4. Results: 4.1 Phase 1 - Fitting the NBD model for subscription services\n5. Discussions and conclusion: Using the negative binomial distribution (NBD) as theoretical platform, two research questions underpin this paper. First, whether NBD modelling fits a long-tenure subscription service, a potential boundary condition for the model. Second, whether adding tenure as a covariate will improve the fit (and thus explanatory power) of the model.5.1 Fitting the NBD to a subscription service\n6. Future research: This study is an initial attempt to test the applicability of stochastic modelling in a market perceived to be a boundary condition. Given its exploratory nature, future research can build upon the findings and address limitations of this study. Future research should investigate mobile service consumption using both cognitive and stochastic modelling simultaneously. 
This approach would allow a direct comparison on whether present mobile service consumption in developed markets relates more to underlying cognitive determinants or is simply habitual.While this study suggests that tenure covaries with mobile service consumption, questions remain on the existence of other behavioural (e.g. voice versus text messages), demographic (e.g. occupation), or psychographic (e.g. lifestyle) covariates that may underpin consumption - another area for future research. The presence of significant covariates would mean that NBD profiles differ across segments defined by the covariates.Linking habitual consumption to brand choice in this subscription market is also of interest. Would the habitual consumption of mobile telecommunication services affect brand choice when the buyers face a brand decision upon subscription expiry? Future research of similar phenomena in other subscriptions services such as the internet and cable TV may help extend the generalisability of stochastic modelling across both goods as well as services.Finally, NBD research mostly analyses purchase behaviours in a single period. Schmittlein et al. (1985) suggest that understanding how buyers should behave from one period to the next is the most managerially relevant outcome of NBD modelling. Such analysis with subscription services is another future research avenue.\n",
"role": "user"
},
{
"content": "Purpose: - The purchase distribution of consumer packaged goods has been extensively modelled by the negative binomial distribution (NBD). As the characteristics of packaged goods differ from those of subscription services, the latter may be a boundary condition for NBD. In this study, the authors aim to test whether NBD extends to a subscription service. Also, NBD assumes non-heterogeneous purchase behaviour. The authors determine whether augmenting the model with a covariate (customer tenure) to account for consumer heterogeneity will produce a better fitting model.\nDesign/methodology/approach: - Phase 1 fitted a base NBD model using a random set (n=1,546) of mobile-telecommunication consumption (monthly billed amount). Phase 2 extended the base model by incorporating customer tenure as a covariate, and re-fitting the model.\nFindings: - NBD does apply to subscription markets. Also, accounting for heterogeneity in customer tenure produces a better model fit, and shows that with increasing tenure individual consumption declines but total consumption increases due to fewer light buyers.\nResearch limitations/implications: - The NBD fit suggests that mobile-telecommunication consumption is habitual, and casts doubt on the effectiveness of marketing programs such as loyalty programs to spur the consumption. The fit also establishes that NBD can serve as a benchmark against which the effects of marketing actions can be evaluated. A potential limitation is the continuous, rather than discrete, nature of the service consumption data.\nOriginality/value: - By extending NBD's boundary condition, this study further attests to the model's robustness in modelling purchase behaviour. It illustrates the technique of augmenting the model with marketing covariates to improve model fits.\n",
"role": "assistant"
}
] |
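The subscription-service record above rests on the NBD: Poisson purchase incidences whose rates are gamma-distributed across the population, which yields a negative binomial distribution for observed counts. A minimal maximum-likelihood sketch of fitting such a model is shown below; the data file and starting values are hypothetical, and the study itself modelled monthly billed amounts rather than raw counts.

```python
# Hedged sketch: maximum-likelihood fit of an NBD (gamma-mixed Poisson)
# to per-customer count data. File name and starting values are assumed.
import numpy as np
from scipy import optimize, stats

# One non-negative integer count per customer.
counts = np.loadtxt("monthly_usage_counts.txt").astype(int)

def neg_log_lik(params):
    k, m = params            # NBD shape k and mean m
    p = k / (k + m)          # convert to scipy's (n, p) parameterisation
    return -stats.nbinom.logpmf(counts, k, p).sum()

res = optimize.minimize(neg_log_lik, x0=[1.0, counts.mean()],
                        bounds=[(1e-6, None), (1e-6, None)])
k_hat, m_hat = res.x
print(f"k = {k_hat:.3f}, mean = {m_hat:.3f}")
```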
On the stability of endogenous growth models: An evaluation of the agents' response to output fluctuations
|
[
"Consumer behaviour",
"Fiscal policy",
"Investments"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The simple version of the AK growth model, analyzed, for instance, in Rebelo (1991), considers a capital accumulation process that lacks transitional dynamics. The steady state is not reached, unless the representative agent provokes a discontinuous jump in the level of consumption placing it on its long-term equilibrium value or on a saddle trajectory. In this paper, we ask how this outcome is modified when economic agents react to business cycles. We assume three types of agents, who adopt a pro-cyclical or a counter-cyclical behavior concerning the deviations of effective output relatively to its potential level. First, households respond pro-cyclically to output fluctuations; at this level, we follow empirical evidence related to consumers' sentiment. Second, government action is introduced through the idea that fiscal policy is counter-cyclical: periods of recession are fought with expansionary measures, while periods of expansion are attenuated through restrictive measures concerning public expenditures. Third, firms, as consumers, tend to act pro-cyclically; for a positive output gap, firms will expect demand to rise, and thus they invest more, while negative output gaps will imply a contraction of the amount and extent of undertaken investment projects.Each one of the above topics has been thoroughly discussed in the literature, through multiple viewpoints. In what follows, we just refer to some relevant work in these areas. We begin by addressing consumers' sentiment.In planning how to allocate their disposable income, households tend to observe and to take in consideration macroeconomic fluctuations. Sentiments of optimism/pessimism are the result of the way GDP performance is perceived, implying that a higher/lower share of consumers' income is effectively directed to consumption. This logical reasoning is confirmed empirically in the studies of Bram and Ludvigson (1998), McNabb and Taylor (2002), Goh (2003), Doms and Morin (2004), Souleles (2004), Dion (2006) and Officer (2007), among others. These authors find evidence of two essential links: first, business cycles influence consumers' sentiment; this can be easily confirmed by looking at reports on consumer confidence throughout the world, namely in the developed countries. Second, consumers' sentiment triggers changes on the savings rate. It is well accepted, as demonstrated in the studies referred above, that the precautionary motive to save is dependent on sentiments about short-term economic growth. Thus, it seems unquestionable, from an empirical point of view, that the marginal propensity to consume rises with the expansionary phase of the cycle and falls otherwise, under this two step mechanism: first, agents form sentiments by looking at the behavior of the national income time series and, second, these sentiments are reflected in the consumption-savings decisions.From a theoretical point of view, it is worth citing the model by Westerhoff (2007), who uses the above empirical evidence to support a framework where changes on consumption expenditures resulting from varying levels of confidence will lead to a perpetuation of business cycles in the long term. 
These endogenously generated business cycles confirm the Keynesian view of self-fulfilling prophecies: because households take seriously the information that overall economic fluctuations transmit, fluctuations will persist over time. Fiscal policy, in turn, is a theme largely discussed in its relation to growth and fluctuations. The logical principle is simple: governments have the power to use their budget to stabilize the economic system. In periods of recession, public expenditures rise faster to stimulate growth, while in periods of expansion, the growth rate of public consumption is attenuated (this implies, of course, the assumption that the government has the flexibility to raise and lower its expenditures to respond to fluctuations). Empirical evidence seems to support the above argument. For instance, Gali and Perotti (2003) find evidence of a fiscal policy that is becoming more counter-cyclical over time in the countries that form the European Monetary Union, and they remark that this is a trend that other developed countries are also following. In turn, Hein and Truger (2006) have some doubts concerning the ability of European governments to maintain such a policy, since evidence points to some countries not being capable of keeping public deficits below the 3 percent of GDP target at a time when the European economy performs poorly. These authors contrast the inability of the Euro area governments with the successful North American fiscal policy, which has in fact been strongly counter-cyclical. Other evidence on counter-cyclical fiscal policy worldwide can be found in Budina and Wijnbergen (1997), Perry (2003), Afonso et al. (2006), Mackiewicz (2006), Kandil (2006), Kollias and Paleologou (2006) and Staehr (2007), among others. From a theoretical perspective, the standard result consists of finding positive effects of a counter-cyclical fiscal policy. For instance, the paper by Martin and Rogers (1997) highlights the positive effects of this type of stabilization policy on growth, human capital accumulation and welfare. Others, however, find arguments to question the virtues of this policy; in particular, Gordon and Leeper (2005) argue that in phases of recession a counter-cyclical policy is likely to change agents' expectations, since this type of policy increases public indebtedness, which raises future debt service payments; as a result, agents will perceive that taxes must rise in the future, and this tends to change savings rates accordingly. Thus, responses to expected future policies may make recessions deeper and lengthier. In fact, the authors call attention to the fact that this expectations channel may create business cycles that simply would not exist if the counter-cyclical policy were not adopted. Concerning fiscal policy, we keep two ideas in mind. First, in an ideal world, where agents' expectations do not distort public authorities' intentions, the counter-cyclical policy has clear stabilization effects; second, although the counter-cyclical policy is favorable in terms of long-term economic performance, governments do not always have the ability or the desire to undertake such a policy. Alesina and Tabellini (2005) provide an explanation for why some countries, namely the ones in developing regions, follow pro-cyclical fiscal policies; the argument is based on political distortions and appropriation of rents by less-than-benevolent governments. 
To this explanation, we can add the pressure that many governments face (in both developed and developing countries) in times of recession to increase expenditures to mitigate the effects of unemployment and slower growth. Finally, with respect to firms' investment decisions, these are clearly conditioned by expectations about future demand. If firms associate demand expectations with economic performance, then it is logical that investment decisions react pro-cyclically to output, and thus the growth of investment will depend on the extent of the (positive or negative) output gap. Evidence on the pro-cyclicality of investment is also easy to find in the economic literature. Such references include, just to cite a few, Backus and Kehoe (1992), Bergman et al. (1998), Baxter and King (1999) and Dosi et al. (2006). In this last paper, a theoretical model is also constructed; it assumes reasonable hypotheses about firms' behavior (in particular, that investment decisions are lumpy and that firms are constrained by their financial structures) to reach the result of a pro-cyclical investment movement over time. The following sections study three alternative endogenous growth models, where each type of agent adopts a rule of reaction to observed business cycles. As a result, we abandon the simple one-equation AK model that lacks transitional dynamics to study three-dimensional models with relevant transitional dynamics properties. Namely, we will observe that saddle-path stability holds in all the cases, making it possible to derive stable trajectories. These stable trajectories define, along the convergence to the steady state, a relation between consumption/public expenditures/investment and capital accumulation. The remainder of the paper is organized as follows. Section 2 studies the growth model under households' reaction to fluctuations. Section 3 focuses on the role of counter-cyclical public expenditures. Section 4 studies the version of the model with pro-cyclical investment decisions. In Section 5, a numerical illustration of the various models is proposed. Finally, Section 6 concludes.\n2. Endogenous growth and consumers' sentiment: 2.1 The AK model\n3. Endogenous growth and a counter-cyclical fiscal policy: 3.1 The fiscal policy rule\n4. Pro-cyclical Investment: 4.1 Firms' response to output fluctuations\n5. Numerical evaluation: In this section, we reconsider the three models previously discussed through a numerical example. In all the analyzed models, only four parameters were considered: b, A, d and g0. For the discount rate and the depreciation rate, we take values that the literature considers to represent real economic conditions well; in particular, we take the values of Guo and Lansing (2002): b=0.962 (this guarantees a discount rate around 4 percent) and d=0.067. As for the technology index, we choose a value that is compatible with a reasonable economic growth rate of 3 percent when effective output equals potential output, i.e. $1+g = 1.03 = b\,(A+1-d)$, which implies $A=0.138$. Finally, we consider three different cases for the value of g0; this allows us to explore how the extent of agents' reaction to output fluctuations impacts the problems' results. The chosen values are g0=(0.025; 0.02; 0.015); that is, we allow the growth rate to deviate from the reference level g by 0.5, 1 and 1.5 percentage points. We begin by recovering the consumers' sentiment model. 
For this, the Jacobian matrix is now as given in Equation (24). For the matrix J above, we compute the eigenvalues and present them in Table I. Table I allows us to confirm the signs of the eigenvalues found in Section 2. A unique eigenvalue inside the unit circle is computed, independently of the value of g0. Moreover, one observes that the stable eigenvalue rises with a decrease in g0; this also occurs for one of the eigenvalues above unity. The stable trajectory is, in this case, $\hat{c}_t=\hat{c}^*+1.0093\,\tilde{k}_t-0.00011\,\tilde{k}_{t-1}$ (for g0=0.025), $\hat{c}_t=\hat{c}^*+1.0094\,\tilde{k}_t-0.00023\,\tilde{k}_{t-1}$ (for g0=0.02) and $\hat{c}_t=\hat{c}^*+1.0096\,\tilde{k}_t-0.00035\,\tilde{k}_{t-1}$ (for g0=0.015). Thus, in the convergence to the steady state, a change in the last period's capital stock provokes an almost insignificant variation in today's consumption, while a unit change in the contemporaneous capital stock occurs with an almost identical variation in the level of consumption. No significant changes are observed as we vary the parameter g0. Consider now the fiscal policy problem. One has concluded that the counter-cyclical government policy transforms the AK model by introducing a two-dimensional stable trajectory. One additional element is necessary to analyze the stability properties of the model under the numerical example; we have to establish a relation between the steady-state values of private and public consumption. Let $\hat{c}^*=2\hat{g}^*$; this relation, together with the defined array of parameter values, allows us to present the Jacobian matrix in Equation (25). Table II presents the eigenvalues of the above matrix for different values of g0. We confirm that the stable manifold is two-dimensional, with all eigenvalues above zero. We observe, as well, that one of the stable eigenvalues rises with g0, while the other one falls. The stable trajectories can be seen in Table III. Table III confirms that convergence towards equilibrium implies that public expenditures increase as capital in periods t and t-1 increases. The trajectories for each case are distinguishable in terms of the weight of each period's capital change on the variation of $\hat{g}_t$; concretely, we can have a more significant impact of contemporaneous capital accumulation on the change in public expenditures, or both periods' capital stocks may have a similar impact on the evolution of the policy variable towards equilibrium. Finally, consider the investment case. As before, we present the Jacobian matrix in Equation (26). For this particular model, we have found through a generic analysis that two eigenvalues are inside the unit circle, but we were unable to state without any doubt whether these were positive or negative values. For the example under consideration, Table IV presents the eigenvalues. In Table IV, all eigenvalues are positive, and we confirm the result of a two-dimensional stable arm. The stable trajectories are shown in Table V. The computed stable trajectories all point to a relation of opposite sign between capital accumulation (present and past) and today's investment, as the system converges to the steady state. Intuitively, we may explain this relation with the idea that the economy invests more as a response to a decline in the accumulated levels of physical capital. The weight of variations in past accumulated capital on investment is higher when the second trajectory is followed; the opposite occurs for the contemporaneous stock of capital. Again, no significant qualitative changes arise from considering different values of g0.\n6. 
Conclusions: The simple AK endogenous growth model was re-evaluated in the light of non-optimal behavior assumed by different economic agents. The optimal control growth problem with a constant returns to scale production function does not allow for a transitional dynamics analysis; only in the circumstance in which the representative agent is able to choose an initial level of consumption that is already in the steady state will the steady state indeed represent the long-run locus of the economy. Departing from the idea of full rationality, we have identified and explored three cases in which transitional dynamics arise, and where a saddle-path stable equilibrium is present. The three cases are analyzed under the same notion: agents look, at each moment in time, at the performance of the economy relative to its potential to grow. Agents take the business cycle into consideration and make their decisions accordingly. Instead of fully rational agents, we have agents that change the growth rate of the economic aggregates they control as the economy performs better or worse than expected. The previous reasoning can be applied to households, government and firms, concerning the variables each agent controls. We have explored the three scenarios under simple logical arguments regarding the reaction to the business cycle. First, one has assumed that consumers react pro-cyclically to economic activity; this seems logical and has empirical support. Second, a counter-cyclical fiscal policy was taken into consideration; although in practice governments do not always adopt such a policy, in theory this seems to be the type of stabilization policy that should be pursued. Third, we assume that firms invest pro-cyclically; this also seems logical, in the sense that, forecasting larger demand as a result of an expansionary phase, firms will be stimulated to invest more. In all three cases, we have identified the presence of a saddle-path equilibrium. In the consumers' sentiment case, the stable path is one-dimensional. In the other two cases, a two-dimensional stable trajectory was computed. In each case, it was possible to represent the stable paths analytically and therefore to characterize the process of convergence to the steady state.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to present three modified versions of the simple AK endogenous growth model.\nDesign/methodology/approach: - Such frameworks stress the role of consumers' sentiment, the impact of fiscal policy and the effect of non-optimal investment decisions made by firms. In all the cases, today's decisions take into consideration the economic performance of the previous period; in the first case, households react pro-cyclically to the output path; in the second case, a counter-cyclical fiscal policy is considered; and in the third case, firms adopt a pro-cyclical behavior concerning investment choices.\nFindings: - The author studies the stability properties of the three models and concludes that, on each one of them, a saddle-path stable equilibrium exists.\nOriginality/value: - The paper accentuates the relevance of the reaction of the economic agents relatively to the business cycle. By assuming that the behavior of consumers, government and firms is a behavior of reaction to economic fluctuations, we find interesting and relevant results in what concerns the conventional intertemporal optimization growth model.\n",
"role": "assistant"
}
] |
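The numerical evaluation in the growth-model record above boils down to computing the eigenvalues of a 3x3 Jacobian at the steady state and counting how many lie inside the unit circle (the models are in discrete time, so stability requires a modulus below one). Here is a hedged sketch of that check; the matrix entries are placeholders, not the paper's Equations (24)-(26).

```python
# Illustrative stability check: eigenvalues of a steady-state Jacobian.
# The matrix below is a placeholder, not the paper's actual Jacobian.
import numpy as np

J = np.array([[1.03, 0.10, 0.00],
              [0.02, 0.95, 0.01],
              [0.00, 0.05, 1.10]])

eig = np.linalg.eigvals(J)
inside = np.abs(eig) < 1.0  # discrete-time stability: modulus below one
print("eigenvalues:", np.round(eig, 4))
print("inside the unit circle:", int(inside.sum()), "of", eig.size)
```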
Comparing the impact of management on public and private nurses in Bangladesh
|
[
"Organizational behaviour",
"Public sector management",
"Well-being",
"Private sector organizations",
"Work harassment",
"Public–private sector"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Public sector management models do not easily translate to best practice in developing countries, mainly because the conditions are too different. For example, in contrast to developed countries, there are more doctors than nurses in Bangladesh and the demand for nurses' services exceeds supply whilst continuing to grow in line with population growth; hence, they work in high-pressure environments (World Health Organisation, 2018; World Bank, 2016). As a consequence, this paper examines whether Bangladeshi nurses perceive work harassment similarly to other public and private nurses in developed countries (Brunetto, Xerri, Trinchero, Farr-Wharton, Shacklock and Borgonovi, 2016) and whether this affects their turnover intention. Dick (2010) refers to work harassment as a form of workplace bullying but suggests that it differs from bullying. Specifically, an employee is exposed to work harassment when their employing organisation unreasonably monitors their actions, sets excessively high workloads, and has stringent accountability standards (Brunetto, Xerri, Trinchero, Farr-Wharton, Shacklock and Borgonovi, 2016). As such, the organisation and its managers are the instigators of work harassment.\nBackground: Conversation of resources theory\nVariables under examination: Work Harassment (resource caravan passageway)\nMethod: Sample and procedure\nResults: Descriptive statistics and intercorrelations\nDiscussion: This paper examines the impact of line management on work harassment, well-being and turnover of public and private nurses in Bangladesh using a COR framework. One factor which has been identified as having a negative impact, especially in the public sector, is work harassment and in developed countries, the manager either alleviates the situation or makes it worse (Brunetto, Xerri, Trinchero, Farr-Wharton, Shacklock and Borgonovi, 2016). In contrast, Farr-Wharton et al. (2019) also identified work harassment for private sector nurses in the UK and USA. However, research in Brazil found that management did not have the same impact compared with Anglo-American and/or European countries such as UK and Australia (Brunetto et al., 2015), hence, we examined the impact of management on public and private nurses' perceptions of work harassment in Bangladesh. Additionally, since, private sector nurses operate in hospitals that have attempted to implement some typically western HR systems, it seems likely that turnover should be less because the influence of culture (corruption etc) should be less.\nConclusion: The findings suggest that work harassment is prevalent in the Bangladesh public and private nursing profession. This study used a COR theoretical framework to compare the impact of management practices (LMX) and work harassment on the well-being and turnover intentions of Bangladeshi public and private nurses. Using COR theory, we found that LMX is a positive resource caravan passageway that can build well-being for public sector employees; however, the association was not significant for private sector nurses. Hence, COR theory Principle 2 offers one explanation of r public sector employees' behaviour in that it assumes that the manager provided enough of a \"cushion\" for them to form a barrier against the negative influence of work harassment on their well-being (Hobfoll, 2001). 
Similar to what past Anglo-American and European research has shown, poor environmental conditions, such as work harassment, affect how public sector nurses in Bangladesh are able to take advantage of the relationship with their manager. However, there was no evidence that employees who perceive high support from their line managers are less likely to be negatively influenced by work harassment (Hobfoll, 2011). Therefore, whilst COR theory was somewhat useful in explaining some behaviour of public sector Bangladeshi nurses, it was far less relevant for explaining the behaviour of those in the private sector. This is an important finding because theories and concepts developed and validated in Anglo-American contexts assume underlying cultural similarities that may not be accurate in developing countries. Future research should perhaps test other well-established theories (and concepts) across multiple contexts to verify their validity.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to use conservation of resources (COR) theory as a lens for comparing the impact of line management on Bangladeshi public and private nurses' perception of work harassment, well-being and turnover intentions where Anglo-American and European management models have been super-imposed on an existing different culture.\nDesign/methodology/approach: Survey data were collected from 317 Bangladeshi nurses' (131 from the public sector and 186 from the private sector). Structural equation modelling was used for analysis.\nFindings: High work harassment was associated with low-being, and together with management practices, it explained approximately a quarter of private sector nurses' well-being. In total, management, work harassment and employee well-being explained approximately a third of the turnover intentions of public sector nurses, whereas only work harassment explained approximately a third of private sector nurses' turnover intentions. The findings suggest a differential impact of management on work harassment across the public and private sector.\nResearch limitations/implications: Cross-sectional data are susceptible to common method bias. A common latent factor was included, and several items that were explained by common method variance were controlled. Further, the findings are limited by the sample size from one sector and the use of only one developing country.\nPractical implications: It is a waste of resources to transplant Anglo-American and European management models to developing countries without understanding the impact on nurses' outcomes.\nOriginality/value: Anglo-American and European management models are not easily transferable to the Bangladesh context probably because of the impact of ties and corruption. Line management is a positive resource that builds employee well-being for public sector employees only.\n",
"role": "assistant"
}
] |
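The nursing record above tests its model with structural equation modelling. As a rough, hedged illustration of such a specification, the sketch below uses the open-source semopy package with hypothetical column names (lmx, harassment, wellbeing, turnover); it is not the authors' actual model, software or measurement structure.

```python
# Hypothetical SEM sketch loosely mirroring the paths discussed in the
# record above. Package choice (semopy), file and column names are all
# assumptions; the study's own measurement model is not reproduced here.
import pandas as pd
from semopy import Model

spec = """
wellbeing ~ lmx + harassment
turnover ~ lmx + harassment + wellbeing
"""

data = pd.read_csv("nurse_survey.csv")
model = Model(spec)
model.fit(data)
print(model.inspect())  # path estimates and p-values
```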
Approaching digital equity: is wifi the new leveler?
|
[
"Communication technologies",
"Small to medium‐sized enterprises",
"Wireless",
"Innovation",
"User studies",
"United States of America"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The \"digital divide\" is a term coined to address the inequalities of computer ownership and internet access and use (Mossberger et al., 2003; Servon, 2002). The size of the digital divide, or the inclination to adopt and use new technologies, has varied widely over the last 25 years. For instance, it took 18 years for 50 percent of Americans to adopt the personal computer (Bauer et al., 2002). In contrast, it took only nine years before 50 percent of Americans were using high-speed internet in their homes. The high demand for wifi access has increased largely because of the growth in sales of laptops and personal mobility products (Burness et al., 2003). Internet access via wireless computers increased to their highest levels in 2008 (Center for the Digital Future, 2008). This is not surprising when one considers the exponential growth of satellite and wireless lines from 65,615 in 2000 to 11,872,309 in 2006 (Federal Communications Commission, 2007). The National Telecommunications and Information Administration (1997) study predicted that where wifi access existed, it would close the digital divide among and between diverse populations. Dr Juliet V. Garcia, President of the University of Texas - Brownsville, agreed, stating \"Digital access is a great leveler\" (Committee on Commerce, Science, and Transportation, 2002, p. 58).\nReview of the literature: The digital divide\nMethods: Sample\nResults: Descriptive statistics\nLimitations and directions for future research: One major limitation of the study is the low numbers of Asian-American and Black/African-American SME owners in the data employed in this study. The limited numbers of responders restricted our ability to identify whether the use of wifi technology had any influence on the reduction of the digital divide in these minority-owned businesses. Thus, our findings may only be generalizable to White and Hispanic-owned SMEs. There is a great need to collect data from a wider range of diverse SME owners in future research in order to address this limitation.A second limitation is the lack of individual economic data from the respondents. Many studies have noted that variations in income have a major impact on the ability to access, use, and adopt new technology (Chinn and Fairlie, 2007). Low socio-economic status has been found to correlate with ethnicity. Still other research has found that ethnicity is a significant negative predictor of technology adoption after controlling for economic variables (Dupagne and Salwen, 2005). Blacks and Hispanics still exhibit lower rates of technology access and the skills required to use the technology above and beyond any economic factors (Mossberger et al., 2006). This study asked the annual gross revenues of the firms represented in the study. Evidence from government statistics suggest that the SME sample in this study represented a range of middle-class incomes. However, future studies could include questions seeking to identify the socio-economic status of the individual respondents. Answers to the socio-economic questions could be used either as a direct measure of the respondent's ability to access, adopt, and use new technology or a control variable used to examine the unique effects of ethnicity.A final limitation centers on the use of one RC area for data gathering purposes. Future research should include data gathered from a multiplicity of RCs. 
Data should also be gathered from respondents who do not reside or work in an RC. Comparisons of data from both RCs and non-RCs will ensure the researchers' ability to compare and contrast diverse technology users.\nConclusion: President Barack Obama is an outspoken advocate of access to universal broadband for all Americans (Gross, 2009; Hesseldahl, 2009). The national stimulus package passed by Congress after his inauguration included $350 million to create a national broadband map. The map will be used to shape the government's broadband policy and ascertain where best to invest government funds. Nevertheless, the USA is currently in fifteenth place in broadband penetration among developed nations that have high-speed wireless access, having fallen from fourth place in 2001 (Organization for Economic Cooperation and Development, 2007). Multiple business models have been suggested for providing public access to high-speed internet that is universally available and affordable (Hudson, 2010). Universal broadband access could add nearly $500 billion to the US economy, while giving users a greater choice in the wireless devices they use (Lowry, 2007). Yet SME owners continue to worry about the costs of investing and participating in e-commerce (Temponi and Cui, 2008). SME owners are much more likely to invest in technology adoption if they have access to external credit (Franquesa and Brandyberry, 2009). Internal resource constraints clearly impact SMEs' ability to invest in innovative technology. Further concerns about security and the need for knowledgeable manpower remain. Financial and marketing disparities among minority-owned and non-minority-owned businesses continue. Access to and use of an expanded broadband system would unquestionably help reduce SME owners' concerns and improve their productivity and performance. Business leaders in the Hispanic community clearly agree. They recently formed a Hispanic Entrepreneurs Task Force (HETF) seeking access to high-speed networks for Hispanic small businesses (TelecomWeb News Digest, 2006). The HETF's position is that access to wifi technology and high-speed internet is essential for the economic and educational advancement of Hispanic SMEs. Other efforts have been made that give Hispanic SME owners access to business training and other services in the Spanish language (Hispanic PR Wire, 2006). Our findings suggest there is still a critical need for training that develops e-literacy and e-commerce skills for Hispanic and other minority-owned SMEs. Policies that shape behavioral intentions and effect continued use must be targeted to specific group needs (Hsieh et al., 2008; Vega et al., 2008). Training in the use of wifi and broadband technologies could provide digital access to growing minority markets online. Minority SME owners need to be able to connect with customers and suppliers in unique and innovative ways via the new technologies. We would argue that, based on the findings of this exploratory study, access to wifi is critical not only for economic growth, but for the continuing reduction of the digital divide. Advantages accrue to those SMEs that have the technical competencies that enable them to bridge the digital divide and move toward improved performance. The results of this study suggest that wifi access, adoption, and use can serve as the next great levelers in the race to reduce the digital divide.\n",
"role": "user"
},
{
"content": "Purpose: - Recent reports suggest the shape and size of the digital divide may be changing. This exploratory study aims to examine the relationships between demographic and situational variables and small and medium-sized enterprise (SME) owners' intentions to adopt and use free public wifi technology.\nDesign/methodology/approach: - A survey study of 158 SME owners in the urban renewal community (RC) of a large Southwestern metroplex investigated the influence of known digital divisors - gender, ethnicity, age, education, and experience - on the adoption and use of wi-fi technologies. Partial least squares was used to test the hypotheses.\nFindings: - Contrary to previous research, gender, age, education, and experience were not related to the intention to adopt wifi technology. Ethnicity was significantly related to intent to adopt, but the relationship was small. Ethnicity and age were found to be somewhat related to wifi usage. In marked contrast from earlier research, education, gender, and experience were not significant predictors of usage.\nPractical implications: - The findings suggest that access to high-speed wifi has the potential to reduce and/or eliminate the digital divide. The projected changes in the digital divide can be expected to make a significant impact on SME productivity and performance.\nOriginality/value: - This study is one of the first to examine the influence of the adoption and use of public wi-fi technologies by SME owners on the digital divide.\n",
"role": "assistant"
}
] |
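To make the analysis summarized in the row above concrete, the following is a minimal, hypothetical sketch. The paper reports a partial least squares (PLS) path analysis; scikit-learn's PLSRegression is used here only as a simplified stand-in, and every column name and the simulated responses are illustrative assumptions, not the study's data or instrument.

```python
# Hypothetical sketch: relating digital-divide predictors to SME owners'
# intention to adopt free public wifi with PLS regression. This is a
# simplified stand-in for the PLS path modelling reported in the paper;
# all variable names and data below are made up for illustration.
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 158  # sample size reported in the abstract

df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),            # 0 = male, 1 = female
    "age": rng.integers(21, 70, n),
    "education": rng.integers(1, 6, n),         # ordinal education level
    "experience": rng.integers(0, 30, n),       # years of technology use
    "ethnicity_minority": rng.integers(0, 2, n),
    "intent_to_adopt": rng.integers(1, 8, n),   # simulated 7-point Likert item
})

X = df[["gender", "age", "education", "experience", "ethnicity_minority"]]
y = df["intent_to_adopt"]

pls = PLSRegression(n_components=2).fit(X, y)
print("R^2 on the training data:", round(pls.score(X, y), 3))
print("PLS regression coefficients:")
print(pls.coef_)
```

On real survey data, small coefficients for gender, age, education, and experience alongside a modest ethnicity effect would mirror the pattern the abstract describes.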
Why do hotels outsource? An investigation using asset specificity
|
[
"Hotels",
"Outsourcing",
"Transaction costs"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Asset specificity has been widely used to explain the boundaries of the firm, and has been adopted in this paper as a framework to consider hotel outsourcing[1]. Asset specificity is one of three transaction-related attributes central to transaction cost economics (TCE) theory (Williamson, 1979, 1985, 1996)[2]. Asset specificity relates to the extent to which investments in assets (either human or physical) are specific to a particular trading relationship. TCE holds that the greater the asset specificity, the more likely a transaction will be internally managed (i.e. not outsourced).Lohtia et al. (1994, p. 261) describe asset specificity as \"an asset, either tangible or intangible, that has little value outside of a particular relationship\". Milgrom and Roberts (1992, p. 49) note when large asset specific investments are made, parties will organise themselves \"in ways that safeguard those investments\". Where an asset is highly specific to a particular trading relationship, its value is determined by the continuance of the trading relationship since the asset has limited value in an alternative use. Palay (1984, p. 277) remarks:... the more specialised the investment, the lower its value in its next best use. This heightens the risk of opportunistic behaviour during both the performance and renegotiation of the contract.Williamson (1985) sees the significance of asset specificity as difficult to exaggerate. Despite this, no study applying either the asset specificity notion or the broader TCE theory has been found in the hospitality management literature. In addition, there is limited prior research concerned with outsourcing in the hotel sector (Goldman and Eyster, 1992; Hallam and Baum, 1996; Hemmington and King, 2000; Espino-Rodriguez and Padron-Robaina, 2004; Lam and Han, 2005). This lack of research interest in hotel outsourcing is surprising, as outsourcing has become a significant facet of modern hotel management (Sieburgh, 1992; Rowe, 1994; Hottman and Adams, 1996).The next section of the paper describes asset specificity and overviews prior research that has drawn on this construct in attempting to explain outsourcing. The study's research method is then described followed by the main findings.\nOverview of asset specificity: Coase (1937) was the first to consider the question of what drives organisational form, and the theoretical framework he developed has become widely referred to as TCE. He developed the idea that \"economising on transaction costs would determine the organisation of economic activity, and the division of activity between firms and markets\" (Milgrom and Roberts, 1992, p. 51). The relevance of this idea to outsourcing underlines its pertinence to this study. Supporting this view, Lohtia et al. (1994, p. 261) note:Between the extremes of market and hierarchies, firms use a variety of hybrid governance mechanisms ... . Given the wide array of options... the guidance offered by the TCE framework is found very useful.Coase (1937) suggests that a firm will replace the market when the costs of transacting within the firm are less than the costs of transacting through the market. The manner in which a transaction is organised depends on particular attributes of the transaction. Most TCE commentators (Williamson, 1979, 1985, 1996) discuss three critical transaction attributes:1. asset specificity;2. uncertainty; and3. 
frequency. As noted above, the focus of this study is restricted to the asset specificity dimension of TCE. Asset specificity is fairly complex, comprising six dimensions: human, physical, site, dedicated, brand capital, and temporal asset specificity (Williamson, 1985). The dimensions comprising this typology have been used to structure this study's findings, and are described below. Human asset specificity\nResearch method: The role of asset specificity in hotel outsourcing has been appraised by conducting 15 interviews with managers from 11 large South East Queensland (Australia) hotels (Table I). Interviewees comprised nine financial controllers (FCs), three general managers (GMs), two food and beverage (F&B) managers, and one project engineer. The first column of the table outlines the referencing system used to identify each interviewee in the paper. Most interviewees had more than ten years' experience in the hotel industry. As a result, they frequently drew on observations made throughout their career, rather than just experience gained with the current employer. Selecting interviewees from a small geographic area enhanced the reliability of data collected. This is because many interviewees commented on outsourcing issues arising in hotels where other interviews were being conducted. A high degree of networking between GMs and FCs in competing hotels was apparent. This facilitated a richer understanding of the dynamics at play. Although a schedule of interview questions was developed, a flexible interview approach was taken. Interviewees were invited to talk in general terms about their hotel's degree of outsourcing and approach to outsourcing. To avoid prompting interviewees, probes relating to asset specificity and other constructs were only introduced towards the end of the interview. Interviews varied in length between one and two and a half hours.\nField study findings: Williamson's (1985, 1988, 1991) typology of the six asset specificity dimensions is used for presenting the field study observations. Site specificity\nDiscussion and conclusion: Understanding hotels' motivation for outsourcing is important, as organisation is as important as technology, cost, and demand in determining success (Milgrom and Roberts, 1992). The over-riding conclusion of this study is that asset specificity is useful when attempting to understand hotel outsourcing. Whilst a discussion on outsourcing could have been made without drawing on a theoretical framework, theories are useful as they help to bring organisation to the complex reality in which we live. Adoption of the six dimensions of asset specificity (Williamson, 1985, 1988, 1991) provided a valuable framework for exploring and understanding hotel outsourcing. The two asset specificity dimensions that appear most pertinent in this context are site specificity and brand capital. It is noteworthy that no observations signifying the existence of physical asset specificity have been made. Site specificity appears to be a particularly useful construct for sharpening our appreciation of factors affecting subcontracting arrangements. It was noted in several cases that hotels seek subcontractor commitment by requiring subcontractors to make asset-specific investments. In turn, hotels are willing to extend longer contracts to compensate subcontractors for site-specific investment.
The significance of brand capital is unsurprising given the considerable importance attached to branding in the industry, and many interviewees were unwilling to outsource restaurants, food and beverage, and linen ownership due to a concern over brand capital maintenance.As no theory is all-encompassing, it is not surprising that one example fell outside TCE's asset specificity framework: the brand capital example of the hotel that outsourced the management of a restaurant to a chain with a high quality reputation. From the hotel's perspective, the need of the restaurant operators to maintain their own reputation provided a safeguard that the restaurant would be managed in a way that would not damage the hotel's reputation. In this case, the restaurant's reputation facilitated the outsourcing arrangement. This example presented an extension to TCE's \"brand capital\" dimension of asset specificity. TCE's brand capital perspective focuses on the hotel's reputation and how insourcing may result to protect that reputation, whereas this example demonstrates how the supplier's \"brand capital\" may facilitate outsourcing. This extension can be seen as a contribution to a deeper understanding of the TCE framework.This study suffers from the normal shortcomings associated with fieldwork based on a limited sample of observations. Rather than attempting to provide a basis for generalising, the study focused on exploring the ways asset specificity might manifest itself in hotel outsourcing.It should also be noted that this study considered asset specificity in isolation of the other two TCE attributes: frequency and uncertainty. This should not be taken as a suggestion that these two attributes are unimportant. Asset specificity has been focused on exclusively in an attempt to provide a rich consideration of this construct in the hotel context. Abernethy and Brownell (1999) note that a focus on breadth may assist in refining and/or extending well-established theory but limits one's ability to build theory as the lack of depth signifies a lack of richness.Future research that focuses on the design and choice of commitments in hotel contractual settings is to be welcomed. Findings provided by this study would be usefully extended by research which considers contracts undertaken and how credible commitments are designed to mitigate the potential for opportunistic behaviour. Further, to enable a more complete understanding of outsourcing in the hotel industry, other variables that may also be worthy of consideration include the issue of trust (Zaheer and Venkatraman, 1995; Nooteboom et al., 1997; Seal and Vincent-Jones, 1997; Das and Teng, 1998, 2001), strategy (Langfield-Smith, 1997; Williamson, 1999; Nickerson et al., 2001), institutional environment (Roberts and Greenwood, 1997), and social embeddedness (Granovetter, 1985).\n",
"role": "user"
},
{
"content": "Purpose: - To apply Williamson's six dimensional typology of asset specificity as a theoretical framework for appraising the nature of outsourcing activities in hotels.\nDesign/methodology/approach: - Interviews with senior managers in large hotels.\nFindings: - Site specificity and brand capital appear to be the most pertinent dimensions of asset specificity in the sample investigated. Most observations support the transaction cost economics (TCE) prescription that high asset specificity results in insourcing.\nResearch limitations/implications: - This study suffers from the normal shortcomings associated with fieldwork based on a limited sample of observations. Rather than attempting to make generalisable assertions, the study provides an exploration of the ways that asset specificity might manifest itself in hotel outsourcing decision making.\nPractical implications: - Asset specificity represents an important construct that should be considered when considering whether to outsource. It also provides a valuable context when considering the motivations of parties entering into a subcontracting arrangement.\nOriginality/value: - No study applying either the asset specificity notion or the broader TCE theory has been found in the hospitality management literature. Also, there is a lack of prior research concerned with outsourcing in the hotel sector.\n",
"role": "assistant"
}
] |
Effects of service experience on customer responses to a hotel chain
|
[
"Scepticism",
"Service experience",
"Customer emotions",
"Hotel chain",
"Service perceptions"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the past decade, the topic of customer experience with a brand has gained importance in management. The development of new methods and practices in businesses together with the appearance of recent academic works with original proposals and metrics are fostering the interest of researchers and practitioners (Brakus et al., 2009; Oh et al., 2015). To manage the customer experience, brand managers carefully track the different touchpoints between the brand and the customer. The aim is to align all these touchpoints with the brand promise to build a specific brand experience (Iglesias et al., 2011).\n2. Literature review: To understand the effects of the individual hotel on the hotel chain, the model proposed in this paper stems from customer experience. This term refers to a complex construct that may encompass many variables related to the customer's sensory experience, feelings during the experience and concurrent and subsequent appraisals (Klaus and Maklan, 2012, 2013; Kranzbuhler et al., 2017; Liu et al., 2017). As a whole, customer experience involves the creating of experiential values (Gallarza et al., 2016, 2017) and goes a step beyond classical constructs like service quality, which is more transaction specific and less emotional (Khan et al., 2015).\n3. Methodology: To test the hypotheses, an empirical study was carried out in Spain in the first quarter of 2017. The aim was to analyse consumers' opinions about their most recent stay in a hotel with three stars or more that belongs to a chain. Data were gathered by means of a telephone survey carried out by a market research company. The sample was composed of 300 individuals, with 298 valid responses. Characteristics of the final sample can be seen in Table I.\n4. Results: Once the data were gathered, the first step was to analyse the validity and reliability of the scales before testing the model. Because of the multidimensional nature of service perceptions, this construct was operationalised as a second-order factor model following a two-step approximation method (Chin, 2010). The construct was composed of three factors: core service, employee service and servicescape. The model was tested using partial least squares (PLS) regression with SmartPLS software. In comparison to traditional covariance-based structural equation modelling, this methodology is appropriate when the interest of the study focuses on prediction and on theory development, rather than on strong theory confirmation (Reinartz et al., 2009). This is in accordance with the scope of this research, which focuses on predicting different dependent variables such as emotions elicited by and satisfaction with a specific hotel, attitude to the hotel chain and other outcomes. Furthermore, the use of PLS is preferable when the model, as in our case, includes a large number of indicators and latent variables and when the data is non-normally distributed (Chin, 2010; Ringle et al., 2012). Regarding convergent validity, all the factor loadings were above the common threshold of 0.5 (Table III). In relation to the reliability properties, the results show that the values of Cronbach's alpha, composite reliability and average variance extracted (AVE) for all the constructs were above the widely accepted limits of 0.8, 0.7 and 0.5, respectively.\n5. Discussion: 5.1 Conclusions\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to analyse customer experience in a hotel and its impact on customer attitudes to both the individual hotel and the hotel chain. Specifically, the study focuses on the effects of service perceptions on emotions, satisfaction, attitude to hotel chain, intention to return, scepticism towards negative information and Word of Mouth (WOM).\nDesign/methodology/approach: An empirical study was carried out on a sample of 300 individuals. Data were analysed through structural equation modelling.\nFindings: Service perceptions and emotions elicited by an individual hotel influence the customer response towards the hotel chain. Among all the outcomes considered, the strongest effects are found on WOM.\nOriginality/value: This paper develops and empirically tests an original model that integrates the customer experience in an individual hotel and the customer response to the hotel chain. This model includes variables that have recently been considered in the literature, such as scepticism towards negative information, in combination with more traditional outcomes such as intention to return or WOM.\n",
"role": "assistant"
}
] |
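The reliability thresholds cited in the methodology of the row above (Cronbach's alpha above 0.8, composite reliability above 0.7, AVE above 0.5) can be computed directly. Below is an illustrative Python sketch on simulated item scores and made-up standardised loadings, not the paper's data; the formulas are the standard ones for these statistics.

```python
# Illustrative helpers for the scale-validation checks described above:
# Cronbach's alpha from raw item scores, plus composite reliability (CR)
# and average variance extracted (AVE) from standardised loadings.
# Item scores and loadings are simulated for demonstration only.
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) array for a single construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    error_vars = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_vars.sum())

def ave(loadings):
    # AVE = mean of squared standardised loadings
    return float(np.mean(loadings ** 2))

rng = np.random.default_rng(1)
construct = rng.normal(size=(298, 1))                      # latent score, n = 298
items = construct + rng.normal(scale=0.6, size=(298, 4))   # 4 correlated items
loadings = np.array([0.78, 0.82, 0.71, 0.80])              # hypothetical loadings

print(f"alpha = {cronbach_alpha(items):.2f}   (paper's threshold: 0.8)")
print(f"CR    = {composite_reliability(loadings):.2f}   (threshold: 0.7)")
print(f"AVE   = {ave(loadings):.2f}   (threshold: 0.5)")
```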
Remote monitoring in industrial services: need-to-have instead of nice-to-have
|
[
"Internet of things",
"Services",
"Customer information",
"Remote monitoring systems"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In today's business-to-business (B2B) environment, a manufacturing firm cannot be successful by only focussing on its products; it needs to complement its products with various services. The implementation of advanced information and communication technology (ICT) to improve service delivery has received increased attention in manufacturing firms' operations. An important managerial issue is how manufacturing firms can create business value through these advanced technologies. Business value includes not only the productivity payoff from the technology but also its impact on critical business activities, such as production, sales and marketing and customer services (Mukhopadhyay et al., 1995; Tallon et al., 2000). This study focuses on remote monitoring systems (RMS) as representative of advanced ICT in manufacturing firms. Previous RMS research has primarily focussed on its technical enablers and condition monitoring. There is a need to go beyond its technical implications and to understand RMS' broader implications on other business activities in manufacturing firms.\nLiterature review: Manufacturing firms moving towards service business\nMethodology: Research design\nFindings: Business value of using data collected through remote monitoring systems\nDiscussion: Business value of remote monitoring systems for manufacturing firms\nConclusion: Research implications\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to better understand the efficient use of remote monitoring systems (RMS) to create business value for industrial services in manufacturing firms. A business view to RMS is a key prerequisite for the successful application of the Internet of Things (IoT) in industrial services.\nDesign/methodology/approach: A qualitative multiple-case study was conducted in six engineering companies. The main source of data was semi-structured interviews with 16 managers.\nFindings: The findings highlight the role of RMS in enabling manufacturing firms to collect data from customers to complement their limited knowledge about their customers. The study demonstrates the business value of using RMS in industrial services and the necessity of capturing the business value through advanced IT technologies.\nResearch limitations/implications: The qualitative research design and choice of six target companies limit the findings to business-to-business manufacturing firms. Further, the focus is on the manager's viewpoint. The findings imply new business value through an efficient use of RMS to complement direct customer contact.\nPractical implications: The study draws attention to the skilled use of advanced RMS and information and communication technology as a prerequisite for the successful application of the IoT in manufacturing firms that provide services for complex solutions and customers dispersed globally.\nOriginality/value: The research shows that using information collected through RMS is an important factor in creating business value in a manufacturing firm's customer relationships. The study contributes by integrating RMS into the customer information collection process to increase the amount, validity and quality of data.\n",
"role": "assistant"
}
] |
Getting-to-know: Inquiries, sources, methods, and the production of knowledge on a videogame wiki
|
[
"Wikis",
"Knowledge production",
"Ethnography",
"Social media",
"Practices",
"Videogames"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: When Ward Cunningham developed the first instantiation of wiki technology in the mid-1990s, his stated aim was to create \"[t]he simplest online database that could possibly work\" (Cunningham and Leuf, 2002). Whereas the basic functionalities of current wiki platforms are described in largely similar terms, research has shown that wiki-based knowledge production is considerably more regulated and ritualized than the open-ended characterization of the wiki as a collaborative content management system seems to necessitate (e.g. Jemielniak, 2014; Mittell, 2009; Reagle, 2010; Sundin, 2011; Toton, 2008). The apparent discrepancy between the modularity with which the wiki platform has become synonymous, and the distinctly regimented ways in which knowledge is produced on Wikipedia and other wikis may be explained by insights attained in the sociocultural study of knowledge[1], where knowledge production is colloquially viewed as bounded and particular (Becher, 1989; Knorr-Cetina, 1999; Latour and Woolgar, 1979). The production of knowledge is here understood as the result of patterned activities connected to the aim of making \"organized statements of facts or ideas\" (Bell, 1973, p. 41, as cited in Knorr-Cetina, 1999, p. 6) in a specific and regulatory context, be it professional or leisure, university science labs, or wikis (cf. Hartel, 2010; Law and Lynch, 1988; McKenzie and Davies, 2010). Here, the overarching purpose of this paper emerges. The paper aims to delve into how wikis function as bounded cultures of knowledge production, and to seek answers to how the formation of the wiki as a documentary resource of recorded information connects to the practices of knowledge production present on the site.\n2. Literature review: As videogames and videogaming have gained cultural and economic significance on an unprecedented scale in the 2000s (e.g. Hunter, 2011; Malaby, 2007; Pearce, 2009; Shaw, 2010), they have also become the object of considerable academic attention. This paper positions itself in the growing body of work focused on investigating knowledge and information in the intersection of videogames and player-authored social media resources like wikis, blogs, and discussion forums. The findings of previous research efforts can be systematized into three principal groupings, ordered by their level of abstraction.\n3. Theoretical framework: This paper employs a theoretical framework informed by \"documentalist\" LIS research (Frohmann, 2004, p. 10) and the sociocultural study of knowledge (e.g. Knorr-Cetina, 1999). The documentalist approach implies that information and knowledge are phenomena irrevocably tied to the practices, technological means, and social groupings of people - and that they should be studied as such. Researchers have used documents as an analytic notion in the study of personal information management (Hartel, 2010), the definition and organization of tasks (McKenzie and Davies, 2010), and the workings of organizations (Davies, 2008). Studies have moreover shown how documents play important roles in the emergence and maintenance of social groups and shared practices (Brown and Duguid, 1996; Trace, 2007), and in the scientific production of knowledge (Frohmann, 2004; Shankar, 2009). 
The document perspective is useful for the purposes of this paper because it provides a way to operationalize the notions of information and knowledge in the empirical context of the DSW, which is characterized by a high degree of mediated interaction and communication (e.g. Jenkins, 2006; Steinkuehler, 2007; Wirman, 2007). Information is here conceived of as recorded information, i.e. documentation (cf. Buckland, 1991, pp. 3-6). Interactions with information in the empirical setting of the DSW are defined as the activities that produce, modify, generate, or use documentation - texts, images, and comments - or documentary traces - logs and other indicators of activity (cf. Geiger and Ribes, 2011). The document perspective also works to more accurately describe and analyze the modes of communal knowledge production in focus of this paper. Following Buckland (1991, p. 3f), knowledge is defined as that which is imparted when someone is informed. In context of the DSWians' venture to investigate and document DS2 and its workings, the fundamental operational mechanic by which knowledge is produced is activities of documentation: to create, discuss, and circulate texts, images, and videos on the site.\n4. Materials and methods: 4.1 Research setting\n5. Results: Well, it looks like the wiki is about to get busy again [with the release of DS2].\n6. Discussion: The study of the DSW shows several results that accord with observations made by earlier research. The findings support that interactions and competencies relating to information are of considerable importance and explanatory potential in the area of videogaming (Adams, 2009; Harviainen and Savolainen, 2014; Martin and Steinkuehler, 2010; Steinkuehler, 2007). This includes the analytic, synthetic, and reflective capabilities observed by Barr (2014) and Steinkuehler and Duncan (2008) in their studies of knowledge production in game-related social media services. Knowledge practices aside, a multiplicity of related practices also took place on the DSW, including narrative practices with connections to identity formation (Albrechtslund, 2010), collaborative learning (Barr, 2014), and writing (Hunter, 2011). In accordance with the findings of Barr (2014) and Golub (2010), the most commonly mentioned drivers of engagement on the DSW were the enjoyment of playing DS2 and the satisfaction of participating in the knowledge practices of the wiki.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate the sociocultural underpinnings of wiki-based knowledge production in the videogame domain, and to elucidate how these underpinnings relate to the formation of wikis as resources of videogame documentation.\nDesign/methodology/approach: The paper is based on a three-month ethnographic investigation of knowledge practices on the Dark Souls Wiki (DSW). In focus of the analysis were the boundaries and knowledge aims of the DSW, together with how its contributors organized inquiries and used various sources, methods of investigation, and ways of warranting knowledge claims.\nFindings: The principal result of the paper is an empirical account of how the DSW functions as a culture of knowledge production, and how the content and structure of the wiki connects to the knowledge practices of its contributors. Four major factors that influenced knowledge practices on the wiki were identified: the structures and practices established by the community's earlier wiki efforts; principles and priorities that informed wiki knowledge practices; the characteristics of the videogame in focus of the site's knowledge-building work; the extent and types of relevant documentation provided by videogame industry, the videogaming press included.\nOriginality/value: Previous research has shown interest in investigating the mechanisms by which community-created knowledge and online resources of documentation emerge, and how these are utilized in play. There is, however, little research seeking to elucidate the sociocultural structures and practices that determine and sustain collaborative online videogame knowledge production.\n",
"role": "assistant"
}
] |
Factors influencing consumer loyalty: evidence from the Ghanaian retail banking industry
|
[
"Service quality",
"Corporate image",
"Loyalty",
"Structural equation modelling",
"Customer satisfaction",
"Trust"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the past three decades, customer loyalty has been empirically and anecdotally linked to business performance and sustainability (Hasiri and Afghanpour, 2016; Leninkumar, 2017; Ofori et al., 2017). For this reason, investigations into the nature and drivers of customer loyalty have been gaining increasing attention in the field of marketing and customer relations by academics and practitioners (Ofori et al., 2017). The improvement in our understanding of the linkage between customer loyalty and corporate profitability has resulted in a shift from the conventional transactional one-off interactions between organizations and their customers to a long-term relationship.\nRetail banking in Ghana: Over the years, there has been progressive transformation of the financial sector in Ghana brought about by reforms in the legal framework, financial operations, institutional arrangements and the development of relevant supporting infrastructure. Following the Financial Sector Adjustment Programme in 1983, liberalization of the banking sector in 1998 and the 2003 banking sector reforms, the Ghanaian banking sector has evolved into a Universal Banking System wherein banks are licensed to offer a range of banking services, as against issuing of separate licenses and categorization of banks into development, merchant and commercial banks. The banking sector comprises of local and foreign banks that compete under the supervision of the Bank of Ghana to provide a wide range of contemporary financial services and products to both retail and corporate clients (Garr, 2013).\nLiterature review and hypotheses development: Customer loyalty\nMethodology: Instrument development\nResults and analysis: Data collected from the survey were analyzed using the partial least squares structural equation modeling (PLS-SEM). SEM allowed the researchers to test causal relationships between latent variables in the proposed research model. There are two approaches to SEM (Hair et al., 2014): the covariance-based SEM which requires that the data exhibit multivariate normality and the variance-based approach PLS-SEM which does not require multivariate normality. A preliminary study of the data collected showed that the data were non normal, hence confirming our choice of PLS-SEM.\nDiscussion: Over the years in the context of Africa, the banking sector has only concerned itself with delivering the conventional service of deposit and credit creation. However, following the sporadic development within the sector alongside customers' increased awareness about the services offered, there is increased competition among the banks. In order to increase profitability, there is increased concern about customer loyalty. In terms of customer loyalty to banks in retail services, numerous studies have explained the determinants of loyalty, most especially in the developed countries. Moreover in Africa and Ghana, specifically, some studies have explained loyalty in terms of some identified variables. By investigating the factors influencing consumer loyalty within the Ghanaian retail banking industry, this study seeks to provide empirical evidence that will serve as a guide to retail bankers in Ghana on the factors that can be used to retain the loyalty of their customers. Further, the study provides insights into the body of empirical literature on customer loyalty. 
Therefore, this study examined the relationships between service quality, corporate image, values, trust, customer satisfaction and customer loyalty. The PLS-SEM estimations were based on 11 relationships, out of which 10 were supported.\nTheoretical and managerial implications: This study has important theoretical implications. As an expanded study, it integrates and examines the relationships among variables in a way that has not been previously undertaken in the context of the setting. It expands our understanding of the nature of the influencing relationships among the key factors, namely, service quality, corporate image, value, trust, satisfaction and loyalty. In doing so, this research closes the theoretical gap that existed among the relationships of the variables tested in the retail banking industry in the context of Ghana. Additionally, the paper partially responds to the observations by George et al. (2016) that as Africa is expected to be the focus of much of the growth in the world over the next decade, it is important that researchers expand the scope and complexity of research, particularly in the growing industries, in order to expand our understanding of, and response to, the business issues that emerge.\nLimitations and future research directions: This study has been able to present empirical findings to further our understanding of how retail banks can drive satisfaction and loyalty in their customers. There are, however, some limitations. First, following the adopted model, which includes service quality, corporate image, value, trust, customer satisfaction and customer loyalty, this study covers only the retail banking sector, which is a subset of the general banking system. So, future empirical studies may replicate this for the banking sector as a whole. Second, this study cannot be said to be all-encompassing, as there may be other variables, measurements, or constructs, situation-specific and individual-specific variables that can be used to explain loyalty intentions of customers. In essence, future research is necessary to investigate other possible variables that can be influential in customer loyalty within the Ghanaian banking industry. Third, this study is based on cross-sectional analysis, which is a one-off investigation. Longitudinal analysis of cross-sections at different periods with the same set of instruments can provide better insight into this study. Hence, future research in this area is encouraged to be based on longitudinal analysis. Finally, this study is country-specific on retail banking loyalty in Ghana and might not be applicable to other countries. Therefore, studies in other countries for the purpose of comparisons should be encouraged.\n",
"role": "user"
},
{
"content": "Purpose: The increasing number of banks in the Ghanaian banking industry has brought about intense competition in the industry. The purpose of this paper is, therefore, to examine the factors that influence retail banking customers' loyalty intentions.\nDesign/methodology/approach: In order to validate the proposed research model, the study adopts a survey design. Data were collected from 565 customers of the top performing banks in terms of customer deposits. Data analysis employed the partial least squares structural equation modeling (PLS-SEM) using SmartPLS version 3.\nFindings: Results from the PLS-SEM analysis indicated that satisfaction, service quality and trust had significant effect on loyalty, with satisfaction having the most significant effect. Interestingly corporate image was found to have a significant effect on both satisfaction and trust but not on loyalty. In all, the proposed model accounted for 63.3 percent of the variation in loyalty.\nResearch limitations/implications: The current study samples customers from only the top performing banks in Ghana. The use of cross-sectional data makes it impossible to study how customers' perceptions change over time. Results from this study could, however, help managers of banks in designing strategies aimed at improving customer loyalty in order to consolidate their market share.\nOriginality/value: This paper adds to existing works that focus on loyalty in the retail banking sector, especially from the context of a developing economy. The study draws attention to the interrelationship among service quality, perceived value, satisfaction, image, trust and loyalty.\n",
"role": "assistant"
}
] |
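The choice of variance-based PLS-SEM in the row above rests on a preliminary finding that the data were non-normal. The sketch below shows one simple form such screening could take, using SciPy's Shapiro-Wilk test; the indicator names are hypothetical and the 5-point Likert responses are simulated, so this illustrates the kind of check involved rather than the authors' exact procedure.

```python
# Sketch of the preliminary normality screening that motivates choosing
# variance-based PLS-SEM over covariance-based SEM. Indicator names are
# hypothetical; the bounded, skewed Likert responses are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 565  # sample size reported in the abstract
items = {
    "service_quality_1": rng.integers(3, 6, n).astype(float),
    "satisfaction_1": rng.integers(2, 6, n).astype(float),
    "trust_1": rng.integers(1, 6, n).astype(float),
}

for name, x in items.items():
    stat, p = stats.shapiro(x)
    verdict = "non-normal" if p < 0.05 else "approximately normal"
    print(f"{name}: Shapiro-Wilk W = {stat:.3f}, p = {p:.4f} -> {verdict}")

# Widespread rejections across indicators would support the authors'
# choice of PLS-SEM, which does not assume multivariate normality.
```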
Visioning "smart city" across the Gulf Cooperation Council (GCC) countries
|
[
"Vision",
"GCC",
"Smart cities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: As urbanization is growing across the globe, the number of cities is increasing and they are encountering many challenges from within and without. To counter the challenges, there is a need for laying down an IT-focused urban management of cities, thereby laying down the foundation of \"smart cities\" (Buckman et al., 2014). This would facilitate in meeting the needs of the residents in an efficient and effective manner and would improve the quality of life of the residents. It has been underscored that with the establishment of \"smart cities\", the economic growth of the region improves and there are opportunities for sustainable development. Conceding that the Gulf Cooperation Council (GCC) region is facing the oil prices crisis presently, it is posited that one of the channels of economic diversification could be through digitalization of economy.\n2. Related literature: Extant research has defined \"smart city\" from diverse perspectives. While some studies have underlined the defining features of a \"smart city\", others have contextualized \"smart city\" initiatives. For laying down the edifice of a \"smart city\", governments ensure that sophisticated services are provided to the citizens using state-of-the-art technologies and sustainable solutions (Ghosh et al., 2016; Kendel and Lazaric, 2015; Li et al., 2015; Lombardi and Vanolo, 2015). There are four fundamentals governing \"smart city\" definition: technology (to collect and manage new data sources, conduct analyses and use networked tools and technologies to manage cities), sustainability (adoption of a diversity of urban growth management policies where the dimensions of environmental, economic and social sustainability are being emphasized), human and social capital (knowledge exchange, creativity and innovation through collaboration among stakeholders to create \"smart solutions\") and governance (institutional preparation and community governance) (Chauhan et al., 2016; de Wijs et al., 2016; Fernandez-Guell et al., 2016; Monfaredzadeh and Berardi, 2015; Tranos and Gertner, 2012). Furthermore, in a \"smart city\", there is emphasis upon furthering businesses and private sector besides energy and other resources (Pan et al., 2011). In this section, we will review some of the major studies which have invoked the \"smart city\" concept to substantiate their research objectives.\n3. Research method: We use a case study approach where we probe select \"smart city\" initiatives across the six GCC countries. A case study seeks to investigate a phenomenon in its natural setting and helps a researcher to deploy a range of data collection methods from entities like people, groups or organizations (Yin, 1994). It helps to deepen our understanding about a phenomenon (Cavaye, 1996). As a qualitative study where the aim is to ascertain the subjective dimensions of the research objective, case study is an appropriate research methodology. In this section, we will cover the research method adopted for generating evidences to substantiate our study; in this regard, we will provide a brief sketch of documentary analysis and interview approaches followed by a brief description of the six GCC countries. Thereafter, we will scan the \"vision\" statements of the GCC countries in terms of their impetus on launching \"smart city\" initiatives with a view to further their ongoing economic diversification strategy amidst the crisis of plummeting oil prices. 
Besides, we will provide perspectives of the key officials and representatives whom we contacted in line with our study's aims.\n4. Conclusion: Our study shows that with the ongoing oil prices crisis and the increased impetus on economic diversification into non-hydrocarbon sectors, the GCC countries have been initiating plans for developing \"smart cities\" where digital tools and technologies are being liberally used for providing better public services and boosting trade and tourism in the region. Further, our study corroborates the assertion that the notion of \"smart city\" is catching up in the GCC, but \"unevenly\" and with \"different manifestations\" (Wiig, 2015, p. 258). Conceding that participatory governance is important for improving public services, our study shows that while deploying IT-led \"smart city\" initiatives, a roadmap should be clearly drawn to include the concerns of relevant stakeholders for ensuring sustainable results. All the interviewees affirmed the role of the private sector in boosting the \"smart city\" initiatives, for instance. Finally, it is commendable that the GCC region is spearheading its ICT initiatives to institute a digital economy which would open up new economic sectors and facilitate the economic diversification moves of the six countries. The key insights from the interviews are summarized in Table II.\n5. Study implications: There are six clear lessons for GCC countries for ensuring that their \"smart city\" initiatives lead to improved quality of life of the citizens: first, all the \"smart city\" plans and policies should adopt a systematic approach for their implementation with their institutionalization; second, they should ensure that a robust IT infrastructure is in place wherein the hardware and software requirements are fully met; third, that all \"smart\" initiatives should be \"sustainable\" and \"competitive\" wherein the social, economic and environmental dimensions are adequately taken care of during implementation of innovative \"smart city\" solutions; fourth, that \"smart city\" initiatives must take into account the local factors which might impact the outcomes of implementing these initiatives; fifth, there should be inclusion of relevant stakeholders in the policy-making and policy-implementation processes vis-a-vis \"smart city\" initiatives; and sixth, benchmarks should be laid down for assessing the degree to which the initiatives meet their desired objectives. In sum, the lessons for the \"smart city\" initiatives of the GCC countries are as follows: \"A Smart City should consider the human dimension of the city by respecting its complexity and diversity; it should assume its unpredictable behavior by learning to manage uncertainty; it should enhance traditional and well-tested solutions by urban planners; and it should promote an advanced governance model by involving as many stakeholders and citizens as possible\" (Fernandez-Guell et al., 2016, p. 48). Given the emphasis upon digitally innovative and sustainable solutions in the \"smart city\" framework, it is important that value creation is attained at every stage of the implementation of a \"smart city\" solution. To sum up, an assessment metric is required for gauging the implementation of \"smart city\" initiatives across the GCC.\n6. Study limitations and directions for further research: As a case study of six GCC countries, our study is limited in its generalizability.
Therefore, the socio-cultural and economic context of the countries is important while conceiving of \"smart city\" applications. Furthermore, our study requires replication using an empirical method soliciting views from different stakeholders, including citizens, to attain triangulation. As \"smart city\" is a multi-faceted concept, research should encompass the different dimensions which may underlie the manner in which public value may be created in the long run. Furthermore, a broad-based comparative perspective is required wherein the GCC countries may derive lessons from those developed countries which have succeeded in implementing \"smart city\" initiatives. Also, an assessment of the extent of readiness for furthering \"smart city\" initiatives in the GCC countries is required. Implicitly, further research is required to ascertain the role of human and social capital besides tapping the socio-technical dimensions in the GCC context. Further research is called for to assess the extent to which \"smart cities\" in the GCC are successful in networking with international cities and shaping their policies accordingly to secure competitiveness and economic success. Finally, further research is required to develop a metric for assessing the \"smart city\" initiatives in the GCC countries. It is important that with the ongoing economic diversification moves of the GCC countries, the role of pushing forth the \"smart city\" agenda be appraised and improved, wherever required.\n",
"role": "user"
},
{
"content": "Purpose: The study aims to underscore the initiatives taken by the Gulf Cooperation Council (GCC) countries in spearheading their drive towards creating \"smart\" cities.\nDesign/methodology/approach: The study uses a qualitative approach by invoking documentary analysis supplemented by responses provided by 13 interviewees from public and private sector.\nFindings: All the six GCC countries (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia and United Arab Emirates) are keen on building upon their infrastructure to push their \"smart city\" agenda which would go a long way in furthering the economic diversification objective of their region besides improving the quality of public services.\nOriginality/value: Hitherto, research has been focused on appreciating the \"smart city\" initiatives of developed countries; this study seeks to build upon the literature on \"smart cities\" by contextualizing the research setting in the developing countries. Second, the study shows that with the ongoing oil prices crisis in the GCC, the \"smart city\" initiatives of the countries are conceived as possible avenues of economic diversification and competitiveness.\n",
"role": "assistant"
}
] |
Catastrophe effects on stock markets and catastrophe risk securitization
|
[
"Japan",
"Stock markets",
"Securities",
"Financial risk",
"Disasters"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Insurance-linked securities such as catastrophe bonds and life bonds are an alternative risk management technique for ceding insurance-related risks (catastrophe or non-catastrophe) to capital markets (Swiss Re, 2006). Proponents often claim that catastrophe risk securities show no or little correlation with stock and bond markets and offer significant attractions to investors, providing a good diversification of risks (Canter et al., 1997; Cox et al., 2000; Froot, 2001; Cowley and Cummins, 2005; Lin and Cox, 2007). This study examines the correlation between catastrophe risk securities and other equities by analyzing catastrophe effects on the Japanese stock market. It aims to provide important insights to insurance/reinsurance companies about catastrophe property and mortality risk securitization, and to investors who are interested in further improvement of their portfolio risk/return profile [1].Many researchers have examined effects of specific US catastrophes on market value of property-liability insurers (Angbazo and Narayanan, 1996; West, 2003). Shelor et al. (1990) examine the impact of the 1989 San Francisco earthquake on the real estate industry and find that the market differentiates the real estate industry in San Francisco from those of other areas in California. While a significant negative stock response is observed for those San Francisco real estate-related firms, there are no significant effects for firms in other areas. In another related study on the insurance industry, Shelor et al. (1992) document a significant positive impact on insurance firms in San Francisco. They suggest that markets perceive more positive effects than negative effects for insurance firms. Aiuppa et al. (1993), using a different sample, confirm the increased stock price of exposed insurers in California. Lamb and Kennedy (1997) examine the market value of insurance firms after the 1994 Los Angeles earthquake and again, they observe a significant positive stock response for exposed insurance firms. A study on Hurricane Andrew (Lamb, 1995) indicates that investors selectively sell those insurer stocks that have direct premium written in Florida or Louisiana. Lamb (1995) concludes that the market efficiently interprets this information based on magnitude of risk exposure in the two states. In a similar study on Hurricane Hugo, Cagle (1996) reports significant negative abnormal returns of exposed insurance firms. In a more recent study, Worthington and Valadkhani (2004) examine the impact of natural disasters on the Australian equity market and indicate that bushfires, cyclones and earthquakes have a major effect on market returns, whereas severe storms and floods do not.The Pacific-Asia area suffers from more extreme catastrophes, both natural and man-made (see, for example, \"Natural Catastrophes and Man-Made Disasters\" in Sigma published by Swiss Re). As the largest economy in this region, Japan is likely exposed to catastrophes frequently (Green, 2003). Then one interesting question arises: as one of the most catastrophe-prone countries in the world, how do catastrophes affect the whole stock market and individual industries in Japan? Yamori and Kobayashi (1999) investigate the impact of the 1995 Hanshin-Awaji earthquake on Japanese insurers' value. Contrary to the results from some US studies, they find significant negative stock reactions. 
The mixed results indicate the importance of catastrophe-effect analyses in different market settings. While most previous studies examine effects of one specific catastrophe on some specific firms, this current study complements the literature by examining how catastrophes affect the whole Japanese stock market and some individual industries. Particularly, this research is designed to provide some insights into analyzing the potential of catastrophe risk securitization by examining whether catastrophe risks have a low or no correlation with portfolios consisting of other equities. For catastrophe mortality risk securitization, this study also analyzes the impact of catastrophe casualties such as death, injury and missing persons, in addition to that of catastrophe loss amount. Specifically, the first issue examined in this study is whether there is a significant correlation between the whole stock market and the loss amount from a catastrophe. One could argue that firms will experience stringent capital constraints after a major catastrophe (Cummins and Danzon, 1997). Because of substantial cross-holdings among the Japanese public firms, the capital shock to insurers can be quickly conveyed to other firms. Therefore, the whole market may react negatively since the depleted capital situation is widespread. The second issue is whether there is a significant relationship between the whole stock market and catastrophe casualties. Because of the high population density in Japan, many catastrophes incur human death, injury or missing persons. \"Catastrophes impose a big potential problem for a life insurer's solvency since fatalities from natural and man-made disasters may be tremendous\" (Lin and Cox, 2007). The third issue relates to catastrophe effects on some catastrophe-sensitive industries. The puzzling findings (Cagle, 1996; Lamb, 1995; Shelor et al., 1990, 1992; Aiuppa et al., 1993; Lamb and Kennedy, 1997) from different US catastrophes motivate this study to examine the insurance industry's equity response to catastrophes in Japan. Other industries of interest are real estate and construction (Shelor et al., 1990). Real estate and construction sectors possibly become more attractive after catastrophes.\n2 Data and research design: The Japanese catastrophe data are retrieved from \"Natural Catastrophes and Man-Made Disasters\" in Sigma published by Swiss Re (www.swissre.com). The Sigma publications annually summarize catastrophes worldwide, including the Pacific-Asia region, Europe, and North America. For each catastrophe recorded, the data include the catastrophe's date, peril, loss amount, and death, injury and missing persons. This research analyzes 63 Japanese catastrophes recorded in the Sigma publications from January 1, 1990 to December 31, 2000. If more than one catastrophe happens on the same date, the loss amounts and other information about these catastrophes are combined to examine stock market reactions. Using the Japanese consumer price index from the Pacific-Basin Capital Markets (PACAP) database as the proxy for the inflation rate, this research adjusts the loss amount to the base year of 1995. Loss amount is sometimes recorded only in terms of the US$. These data are converted to the Japanese yen according to the daily exchange rate from the PACAP database. The final sample consists of 61 catastrophes from 1990 to 2000.
A summary of the characteristics of these catastrophes is presented in Table I. In addition to loss amount (AdjLoss), this research also analyzes the Japanese stock market reactions to some other catastrophe variables including death, injury, and missing persons by incorporating a death dummy variable (DeathDm, 0 if no death), an injury dummy variable (InjuryDm, 0 if no injury), and a missing persons dummy variable (MissDm, 0 if none are reported missing)[2]. Some catastrophes involve some neighboring countries such as Korea, and if so, those affected countries are also recorded in the data. For a large catastrophe with more than one country affected, investors will likely adjust expectations on the regional future growth downward. To analyze this effect, this research adopts another dummy variable JapanDm with a value of 0 when no other countries are involved. Sigma publications separate catastrophes into \"natural\" and \"man-made\" based on perils. This research investigates whether there is a significant difference between stock market reactions to natural and man-made catastrophes by adopting a dummy variable CATDm (value equals 0 for a natural disaster). Most natural disasters are more or less forecastable, but this is not the case for man-made catastrophes such as a terrorist attack (Bouriaux and Scott, 2007). A man-made catastrophe causes more psychological damage and will cause more uncertainty among investors. This study analyzes the impact of all catastrophes (the all-catastrophe model) and also examines whether equity markets respond more negatively to large catastrophes than small ones (the large catastrophe model and the small catastrophe model). In this research, small catastrophes are those with an adjusted loss less than 10 billion Japanese yen, while large catastrophes are more than 10 billion yen. Table II displays the correlation among the variables. To investigate the Japanese stock market reactions to catastrophes, this study applies a methodology similar to the one presented in Brown and Warner (1985). For the whole Japanese stock market and each industry sector, stock returns are collected from the PACAP database from January 1, 1990 to December 31, 2000. This study analyzes the stock return on the trading day immediately following the catastrophe date. The return data from 20 days prior to a catastrophe to 20 days following the catastrophe are excluded and the remaining stock return data are used to estimate the market model: $R_{i,t} = \alpha_i + \beta_i R_{m,t} + \varepsilon_{i,t}$ (Equation 1), where $R_{i,t}$ is the value-weighted return for industry $i$ at day $t$, and $R_{m,t}$ is the value-weighted return for the whole Japanese stock market at day $t$. Then for each industry $i$, the abnormal return at day $t$ can be obtained as $A_{i,t} = R_{i,t} - (\hat{\alpha}_i + \hat{\beta}_i R_{m,t})$ (Equation 2). Both equity and abnormal returns on the trading day immediately after a catastrophe are then regressed on the independent variables (loss amount, death, injury, missing persons, JapanDm, and CATDm) for the entire stock market and three catastrophe-sensitive industries (insurance, construction, and real estate). The regression equations are $R_{i,t} = \gamma_0 + \gamma_1 AdjLoss_c + \gamma_2 DeathDm_c + \gamma_3 InjuryDm_c + \gamma_4 MissDm_c + \gamma_5 JapanDm_c + \gamma_6 CATDm_c + u_{i,t}$ (Equation 3) and the same specification with the abnormal return $A_{i,t}$ as the dependent variable (Equation 4), where $i$ denotes a specific industry or the entire stock market and $t$ is the trading day immediately following a specific catastrophe $c$.
Note that there is no abnormal return regression analysis for the overall equity market model.\n3 Empirical results: 3.1 Overall stock market responses\n4 Conclusion: Insurance-linked securities such as catastrophe bonds and life bonds are an alternative risk management technique for ceding insurance-related risks (catastrophe or non-catastrophe) to capital markets. Proponents often claim that catastrophe risk securities show no or little correlation with stock and bond markets and offer significant attractions to investors, providing a good diversification of risks. This study examines the correlation between catastrophe risk securities and other equities by analyzing catastrophe effects on the Japanese stock market. The results indicate that there is no significant correlation between the overall equity market and catastrophes in Japan, whether a catastrophe is man-made or natural, large or small, involves Japan alone or also other countries, or creates casualties. There is no significant correlation between catastrophes and equity returns in the real estate industry, but a significant negative/positive correlation exists between the insurance/construction industry and loss amount of a catastrophe. No other independent variables (including mortality) have an impact on the insurance/construction market. This research also indicates that while there is a significant correlation between loss amount of a catastrophe and the insurance/construction market for large catastrophes, such a correlation does not exist for small catastrophes, and mortality has no impact, regardless of a catastrophe being small or large. This research provides important insights to insurance/reinsurance industries for catastrophe property and mortality risk securitization and to investors who are interested in further improvement of their portfolio risk/return profile. The findings imply that for investors with a well-diversified portfolio, catastrophe risk securities are in fact a further diversifier in the Japanese financial market, and catastrophe risk securitization in Japan should have great potential. For investors with most of their investment in the insurance industry, catastrophe risk securities create more diversification, especially when the trigger levels of catastrophe risk securities fall in the large catastrophe category, and even though the mortality risk has no impact on the insurance market, this also applies to catastrophe mortality risk securities because of the negative effect of the loss amount of the same catastrophe if it is large enough. However, for investors with investment in the construction/real estate industry, it may still be a good idea to include catastrophe property and mortality risk securities in their portfolios because of a positive or no correlation between the construction/real estate industry and catastrophe loss amount and mortality risk.\n",
"role": "user"
},
{
"content": "Purpose: - Conventional wisdom states that catastrophe risk securities show no or little correlation with stock and bond markets, and offer significant attractions to investors providing a good diversification of risks. This study examines the correlation between catastrophe risk securities and portfolios of other equities by analyzing catastrophe effects on the Japanese stock market.\nDesign/methodology/approach: - Using catastrophe data from SwissRe Sigma publications and stock returns from the Pacific-Basin Capital Markets database, this paper analyzes stock and abnormal returns in the Japanese stock market using event study methodology.\nFindings: - For the Japanese stock market as a whole, there is no significant catastrophe effect. The results indicate a significant negative correlation between catastrophe loss amount and the insurance industry's equity returns and abnormal returns, a significant positive correlation with the construction industry, but no significant correlation with the real estate industry. This paper also analyzes the impact of catastrophe causalities. The results show little evidence on the significance of these variables.\nOriginality/value: - This study provides important insights to the insurance/reinsurance industry in the Japanese risk market for catastrophe property and mortality risk securitization and to investors who are interested in further improvement of their portfolio risk/return profile by including catastrophe risk securities.\n",
"role": "assistant"
}
] |
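The record above walks through a Brown and Warner (1985)-style event study: fit the market model on returns outside an exclusion window around each catastrophe, take the abnormal return on the first trading day after the event, then regress (abnormal) returns on the catastrophe variables. A minimal Python sketch of that pipeline follows; the series and column names (mkt, A_it, AdjLoss, etc.) are illustrative assumptions rather than the paper's actual data layout, and a calendar-day exclusion window stands in for the paper's 20 trading days either side.

import pandas as pd
import statsmodels.api as sm

def abnormal_return(ind_ret: pd.Series, mkt_ret: pd.Series,
                    event_day: pd.Timestamp, window: int = 20) -> float:
    # Estimate the market model (Equation 1) on returns outside the
    # exclusion window around the catastrophe; both series are assumed
    # to share a DatetimeIndex of trading days.
    days = ind_ret.index
    est = (days < event_day - pd.Timedelta(days=window)) | \
          (days > event_day + pd.Timedelta(days=window))
    X = sm.add_constant(mkt_ret[est].rename("mkt"))
    fit = sm.OLS(ind_ret[est], X).fit()
    # Abnormal return (Equation 2) on the first trading day after the event.
    t = days[days > event_day][0]
    expected = fit.params["const"] + fit.params["mkt"] * mkt_ret.loc[t]
    return ind_ret.loc[t] - expected

def cross_section(events: pd.DataFrame, dep: str = "A_it"):
    # Equations 3/4: day-after (abnormal) returns regressed on the
    # catastrophe characteristics, one row per catastrophe.
    rhs = ["AdjLoss", "DeathDm", "InjuryDm", "MissDm", "JapanDm", "CATDm"]
    return sm.OLS(events[dep], sm.add_constant(events[rhs])).fit()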
Construct validation of a Theory X/Y behavior scale
|
[
"Management theory",
"Leadership",
"Organizational behaviour"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: In his remarkable article and subsequent book, both entitled \"the human side of enterprise,\" McGregor (1957, 1960) advanced one of the most important and influential theories in the history of management and organizational behavior (Bedeian and Wren, 2001; Crainer and Dearlove, 2006; Miner, 2003). According to McGregor's Theory X and Theory Y, the assumptions that a manager holds about the nature of his/her employees tend to be self-fulfilling. Consequently, the manager who holds the pessimistic Theory X mind-set, or cosmology as McGregor labeled it - namely, that employees are basically lazy, untrustworthy, lack ambition, and offer little in the way of useful ideas - will manage in such a controlling and commanding fashion that these beliefs are \"brought to life\" by employee behaviors. And, per McGregor's supreme sense of irony, the manager will subsequently turn to a colleague and righteously lament that \"nowadays you can't hire good workers.\" The assumptions undergirding Theory Y are the converse of Theory X -namely that employees: can be motivated to work hard and find work enjoyable; are capable of self-direction and self-control; often seek to grow and accept responsibility; and can be the source of many useful ideas.\nMethod: Sample\nResults: Basic statistics and correlations among the variables are shown in Table IV. We examined the underlying structure of the 26-item X/Y behavior scale using exploratory factor analysis (EFA). We randomly divided our participants into two sub-samples of 247 cases in order to determine if the factor structure was replicated. While there are no clear guidelines for the number of cases needed for EFA, each of two sub-samples exceeded the rules of thumb of five cases per item (e.g. Hatcher, 1994; Bryant and Yarnold, 1995) and of 200 cases in total (Gorsuch, 1983).We first factor analyzed each of the two sub-samples using principal component analysis with Varimax rotation with an eigenvalue criterion of 1.0 for extraction of factors. Seven factors emerged in each of the two sub-samples, accounting for 56.8 per cent and 52.9 per cent of the variance. Examination of the scree plots suggested a two-factor solution in each sub-sample, with variances explained of 33.1 per cent (21.3 per cent, 11.8 per cent) and 28.8 per cent (18.6 per cent,10.2 per cent). To develop a shorter scale, we conducted a second set of EFAs using principal component analysis with Varimax rotation in which a two-factor solution was imposed. Items loading 0.50 or higher on the same factor in both sub-samples were retained, yielding eight Theory Y and five Theory X items (marked in bold in Table I).Cronbach alpha for the 13-item X/Y behavior scale was 0.77 for both administrations. The test-retest correlation was 0.65. The 13 excluded X/Y behavior items were treated as a conceptually identical measure and included in the analysis that follows. Accordingly, it might be noted that Cronbach alphas for the 13 excluded items were 0.50 and 0.51 for the two administrations. Test re-test reliability was 0.51.Correlations between the 13-item X/Y behavior scale and conceptually identical, proximally related, distally related, and unrelated variables were entirely consistent with the theorized nomological network. More specifically correlations were 0.60, 0.43, 0.19, and -0.02 (mean of unrelated variables) - see Table IV. 
These relationships held after we controlled for the potential effects of age, sex, salary, and job tenure by entering these variables in step one in a series of hierarchical regressions. Specifically, the corresponding Betas were 0.61, 0.40, 0.16, and 0.01 (mean of unrelated variables) - see Table V. Managers were more Y oriented than non-managers with respect to the 13-item X/Y behavior scale (m=3.83, SD=0.49 versus m=3.70, SD=0.44, d=0.25, p<0.01). However, managers did not report attitudes that were more Y oriented than non-managers (m=3.26 versus m=3.21, d=0.13, p=0.15). Women were more Y oriented than men with respect to both X/Y behaviors (m=3.84, SD=0.45 versus m=3.69, SD=0.48, d=0.32, p<0.001) and attitudes (m=3.30, SD=0.38 versus m=3.17, d=0.33, p<0.001). These results are consistent with the often-noted phenomenon of women engaging in more transformational and participative leadership practices than men (Eagly et al., 2003).\nDiscussion and conclusion: Summarizing results, on performing EFAs, a 13-item Theory X/Y Behavior scale emerged that demonstrated good psychometric properties. Internal consistency reliability estimates (Cronbach alphas) were 0.77 for both administrations, exceeding the conventional benchmark of 0.70 (Nunnally, 1978), and test-retest reliability was 0.65. The pattern of associations was in conformance with the a priori nomological network. Correlations between the 13-item X/Y behavior scale and measures of conceptually identical, proximal, distal, and unrelated variables were: 0.60, 0.43, 0.19, and -0.02 (on average), respectively. In addition, the differences with respect to sex were consistent with prior research involving related constructs. The reliability of the 13-item X/Y Behavior scale was almost as high as that of the 26-item pool of X/Y behavior items (0.77 versus 0.79); the Spearman-Brown prophecy formula would have estimated that a doubling of scale length would have resulted in an alpha of 0.87. Further, the pattern of associations remained virtually unchanged after controlling for four demographic variables. The content of our X/Y behavior scale also corresponds closely to recent normative, prescriptive pieces that argue for empowerment, participation, and positive organizational behavior. For example, Coleman (1996) explicitly noted the importance of eight of the 13 behaviors, including encouraging employees to: take initiative, share their ideas, and take responsibility for their performance; and encouraging managers to: share information, delegate authority, and collaborate in building trust. The X/Y behavior scale could be useful for theoretical research. Field studies that use valid measures of managerial X/Y attitudes and behaviors, along with measures of employee attitudes and work behaviors, and individual and group-level performance data would provide a solid basis for drawing inferences about the substantive validity of McGregor's theorizing. With regard to practice, the 13-item X/Y behavior scale might be used for self-diagnostic purposes. Indeed, Heil et al. (2000) argue that McGregor's most important concern was that managers should question their assumptions and beliefs about managing. If managers were provided with their X/Y attitude and behavior scores, as well as the average scores of other managers - and ideally, the average scores of high-performing managers - this might provoke some useful introspection. 
If completed by employees and sent anonymously to an independent party, the instrument might be useful for diagnostic purposes as a first step in the process of organizational development. Because X/Y behavior items, in comparison to attitudinal items, may be less prone to a social desirability bias, their use for self-assessment and development purposes may be beneficial. Further, behavioral items translate attitudinal statements into practical terms that are useful for individual and organizational developmental intervention. From a practitioner perspective, a benefit of a Theory X/Y diagnostic measure is that McGregor's concepts have broadly penetrated the management lexicon and may have considerable face validity and resonate with managers. Bennis 40 years ago was convinced:... that the popularity of McGregor's The Human Side of Enterprise was based on his rare empathy for a vast audience of managers who are wistful for an alternative to the mechanistic concept of authority, (i.e. he outlined a vivid utopia of more authentic human relationships than most organizational practices today allow.) (Bennis, 1969, p. 22). Theory X/Y continues to be seen as relevant to issues related to leadership and organizational development, and a measure of X/Y behaviors might facilitate substantive research in a number of areas. For example, Pastor and Mayo (2008) suggest that it is better to try to change X/Y assumptions/attitudes than X/Y behaviors. To be sure, this would accomplish what Argyris (2002) referred to as double-loop learning, where behavior is changed by altering governing values. Yet Schein (1975) asserted that most managers may be unable to change their assumptions, and that even if their assumptions did change, managerial behaviors might not. It is now possible to test whether it is more fruitful for leadership development programs to attempt to change: * assumptions alone; * managerial behaviors alone; or * assumptions and managerial behaviors combined. Similarly, questions related to the relationships between Theory X/Y and outcomes such as employee well-being might be explored using our measure along with a measure of attitudes (see Larsson et al., 2007; Quick and Quick, 2004). It might be fruitful to examine relationships between managerial behaviors and ethical evaluations (see Neuliep, 1990, 1996). While McGregor suggested that Theory Y attitudes would generally lead to improved organizational performance, even he acknowledged that this would not always be the case. The use of our scale might enable researchers to explore the conditions under which Theory Y produced better results and when it did not (Bobic and Davis, 2003) and to assess Theory X/Y orientation as a moderator of relationships between specific practices (such as Scanlon plans) and outcomes. As is the case in most research endeavors, there are a number of limitations and possible weaknesses that should be considered. First, the X/Y behavior items pertain to the endorsement of specific managerial practices. In a sense these items reflect attitudes about behaviors, not actual managerial behaviors, per se. We recognize that endorsed behaviors are often discrepant from self-reports of enacted behaviors, which in turn are often veridically inaccurate (Argyris and Schon, 1974). Future research, ideally, should collect multi-source and multi-level data pertinent to enacted as well as espoused behaviors. 
The network of relationships should also be expanded to include other leadership and in-role/extra-role performance constructs (such as, for example, the Multifactor Leadership Questionnaire (Bass and Avolio, 1990)). Second, we recognize that all data in the present research were collected from a common source and method, which could lead to overstatement or understatement of our results (Podsakoff et al., 2003). However, characteristics of our design likely mitigated some sources of bias. Responses were anonymous, X- and Y-phrased items were mixed within sections of the questionnaire, and the X and Y items were not worded as mirrors of each other. Empirically there was evidence that biasing effects were minimal. As noted above, managers scored higher than non-managers on X/Y behaviors but there was no difference on X/Y attitudes. Likewise a post hoc analysis indicated that respondents in the for-profit sector scored higher than their non-profit/government counterparts on X/Y attitudes (d=0.35, p<0.01) but lower than their counterparts with regard to X/Y behaviors (d=0.29, p<0.05). This inversion suggests that such potential sources of bias as social desirability, consistency, and implicit theories were not controlling. Moreover, examining X/Y behavior scores at Time 1 and scores on identical, similar, distal, and unrelated measures at Time 2 yielded a pattern of results similar to that obtained with contemporaneous data. Third, the reverse scoring of X behaviors would appear to assume a continuum. Conceptually, strong disagreement with a Theory X behavior may not be tantamount to complete endorsement of a corresponding and parallel Theory Y behavior. To disagree with the practice of limiting the responsibility of employees does not necessarily equate to endorsing job enrichment. Yet, the 13 items that survived show good internal consistency, and Cronbach alpha was reduced when subscales were created that comprised only X or Y behaviors. Fourth, the present sample was relatively young, with a mean age of roughly 28 years (35 per cent of participants were between 20 and 24 years old, versus 10 per cent of the employed labor force in that age group in the USA (US Department of Labor, 2008)). Accordingly, in order to examine the generalizability of the present results, associations were examined separately for respondents: * above and below median job tenure; * with and without managerial experience; and * younger and older than the median age. With regard to the two categories of managerial experience, correlations on average differed by 0.09, and the pattern of correlations across the nomological network was nearly identical in both categories. Likewise, the mean difference in correlations for long and short tenure groups was 0.07, with the pattern of correlations being very similar. The mean difference in correlations among younger and older respondents was also 0.07, again with a very similar pattern of correlations. Thus, there is evidence that the present results are not an artifact of the composition of the present sample. In conclusion, in light of the importance of McGregor's theorizing about individual differences in managerial attitudes and behavior, we believe that it is time to develop measures of both attitudes and behaviors, thereby permitting a thorough test of substantive validity. Theory X and Theory Y have long been compelling ideas. After 50 years it is perhaps time to see if McGregor was correct.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to discuss the historical importance and current relevance of Douglas McGregor's Theory X and Y, and to suggest that the paucity of related empirical research is, in part, attributable to the lack of validated measures. The present research seeks to describe the development and construct validation of a measure pertinent to Theory X/Y behaviors.\nDesign/methodology/approach: - Surveys completed by 512 working adults provide the present data. A total of 26 initial Theory X/Y behavior items are reduced to 13 through factor analysis. Convergent and discriminant validities are examined through correlational and regression analyses with measures of proximal, distal, and unrelated constructs. Test re-test reliability is assessed using longitudinal panel data from a subset of respondents.\nFindings: - The results provide evidence of the construct validity of the new measure.\nResearch limitations/implications: - Respondents are relatively young and drawn from one region of the USA. Future research should collect multi-source and multi-level data.\nPractical implications: - The 13-item scale may be useful as a diagnostic tool for individual and organizational development.\nOriginality/value: - This paper represents the first research endeavor that focuses on construct-validating a measure of managerial X/Y behaviors, as distinct from attitudes. The scale can be used in substantive research, including a more robust test of McGregor's theorizing.\n",
"role": "assistant"
}
] |
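The reliability arithmetic in the Theory X/Y record above (alpha of 0.77 on 13 items projecting to about 0.87 if the scale were doubled) follows from the Spearman-Brown prophecy formula, alpha' = k*alpha / (1 + (k - 1)*alpha). A short self-contained Python check, with a standard Cronbach alpha computation included for completeness; only the 0.77 value and the doubling factor come from the paper, and the item matrix is hypothetical.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x items matrix of scale scores.
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def spearman_brown(alpha: float, k: float) -> float:
    # Projected reliability when scale length is multiplied by k.
    return k * alpha / (1 + (k - 1) * alpha)

# Doubling a 13-item scale with alpha = 0.77, as quoted in the paper:
print(round(spearman_brown(0.77, 2), 2))  # -> 0.87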
Understanding perceived risks in mobile payment acceptance
|
[
"Perceived value",
"Perceived risk",
"Consumer acceptance",
"Mobile payment",
"Perceived uncertainty"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The past several years witnessed the popularity of mobile commerce applications, such as mobile shopping, mobile travel service, online to offline (O2O) consumption, etc. The prosperity of mobile commerce heavily relies on the consumer acceptance of mobile payment (m-payment), conducted via mobile terminals, e.g. a mobile phone or personal digital assistant. Currently, the huge potential market for m-payment has attracted various service providers in China to offer m-payment services. For example, Alipay, the most popular third-party online payment platform, recently released its Alipay Wallet. WeChat, the main competitor of Alipay, also incorporated m-payment into its system. Major telecommunication operators in China, including China Mobile, China Unicom, and China Telecom, have developed their own m-payment services as well.\n2. Literature review: 2.1. Perceived risk theory\n3. Research model and hypotheses: A research model was developed based on perceived risk theory, prospect theory and perceived value theory. Prospect theory and perceived value theory provide the rationale for the relationships between perceived risk, perceived value, and acceptance intention. Perceived uncertainty-perceived risk relationships were inferred from perceived risk theory and relevant research. The research model is depicted in Figure 1.\n4. Research methodology: 4.1. Measurement development\n5. Discussion and conclusions: 5.1. Discussion\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine and quantify how various uncertainties result in different perceived risk dimensions that hinder mobile payment (m-payment) acceptance.\nDesign/methodology/approach: An uncertainty-risk-value framework was proposed based on perceived risk theory, prospect theory, and perceived value theory. Structural equation modeling method was used to test the research model.\nFindings: Perceived information asymmetry, perceived technology uncertainty, perceived regulatory uncertainty, and perceived service intangibility are confirmed as the main determinants of perceived risk, while perceived performance risk, perceived financial risk, and perceived privacy risk were found to have strong negative effects on perceived value and acceptance intention.\nPractical implications: The findings may help businesses and policy makers better understand the sources of perceived risk and help support the development of appropriate strategies to mitigate the risk concerns consumers have regarding m-payment.\nOriginality/value: Although the hindering effects of perceived risk regarding m-payment acceptance have been confirmed in previous studies, the sources of perceived risk were rarely investigated. By examining the determinants of perceived risk in m-payment acceptance, this paper offers insights into how consumers perceive risks when adopting new innovations. Additionally, it bridges the gap between the antecedents and consequences of perceived risk.\n",
"role": "assistant"
}
] |
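The abstract above reports a structural equation model over an uncertainty -> risk -> value -> intention chain. As a rough illustration only, the sketch below approximates such a path model with piecewise OLS regressions; a real SEM would estimate all paths simultaneously with latent constructs, and every column name here is a hypothetical stand-in for the survey measures.

import pandas as pd
import statsmodels.formula.api as smf

def fit_paths(df: pd.DataFrame) -> dict:
    # Piecewise stand-in for the uncertainty-risk-value model; columns
    # assumed: info_asym, tech_unc, reg_unc, intangibility, perf_risk,
    # fin_risk, priv_risk, perceived_value, intention.
    formulas = {
        # uncertainties -> each perceived-risk dimension
        "perf_risk": "perf_risk ~ info_asym + tech_unc + reg_unc + intangibility",
        "fin_risk": "fin_risk ~ info_asym + tech_unc + reg_unc + intangibility",
        "priv_risk": "priv_risk ~ info_asym + tech_unc + reg_unc + intangibility",
        # risks -> perceived value -> acceptance intention
        "value": "perceived_value ~ perf_risk + fin_risk + priv_risk",
        "intention": "intention ~ perceived_value + perf_risk + fin_risk + priv_risk",
    }
    return {name: smf.ols(f, data=df).fit() for name, f in formulas.items()}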
A new national design code for indoor air environment of sports buildings
|
[
"Air",
"Design for quality",
"Buildings",
"Stadia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: In recent years, more sports buildings are being built in China. Owing to the lack of relevant design codes or regulations for the sports buildings many problems have incurred after construction, especially for indoor air environment. These environmental problems could potentially cause serious health issues or limitations in sporting events. In the coming years, many sports facilities will be built for the Beijing Olympic game 2008. The design of these sports buildings should not only meet the requirements of the International Olympic Committee and other international sports federations but also international or national design codes and regulations, in order to reach the top level for serving Olympic game.Currently, there are design codes for sports buildings in China, which were implemented in 2003, These codes were approved by the Ministry of Construction and the Chinese Sports General Bureau which was \"the first and the only one.\" The codes were being composed from 1984, after several times modifications and revisions, which based on numbers of investigations, measurements, analyses, and some international experiences. With achievement on structure form and technical details, the \"Design code for sports building\" (MOC and CSGB, 2003) was finally published and implemented officially on October 1, 2003. As one branch of the national standard system, the \"Design code for sports buildings\" is regarded as a guide or standard for architectural design, to increase design abilities of sports buildings.There are total ten chapters and 25 sections of this code, in which have two chapters that deal with indoor air environment. The typical styles of domestic sports building are concerned on technical items, which are integrated with the requirements of international athletics organizations and the situation of domestic facilities.\n2 Requirements for indoor air: 2.1 Competition demands\n3 Parameters of code: Compared with the international standards and design guidelines, the Chinese new design code has more detailed parameters for high level sports game. In the design code, sports buildings, include gymnasium, natatorium and stadium, are divided into four classes according to employed needs, that is, gymnasium of class A could be employed for international sports game like Olympic game or Asian game, class B could be employed for national sports game, class C and class D could be employed for province and local sports game. The parameters of indoor environment in the code are separated for each class. Mainly, the most stringent is class A, and lightened to class D. Different area needs different class sports building, therefore, sports building could be built of different class in various areas by their own conditions, and corresponding parameters could be determined. For the increasing demand of comfort and health, it is necessary to design new building with advanced guidelines. It is not hard to achieve by modern building techniques and equipment.Whereas the designs of sports building are built for Olympic game 2008 and other international sports games in recent years of China, the parameters of class A are always paid more attention to. 
According to the requirements of the sports federations and the Olympic committee, the new standard parameters of indoor air for sports buildings are determined as below. 3.1 Air movement\n4 Conclusions: This paper introduces the first design code for sports buildings in China, which includes an important chapter on the indoor air environment. The parameters deal with indoor air velocity, temperature, humidity and fresh air volume. This code could serve as a guide for designing the indoor air environment of sports buildings, especially for the 2008 Beijing Olympic Games, and is useful for advanced HVAC design and analysis of similar large-space buildings. There may also be some imperfections in this code. As technology develops, it is recognized that a good indoor environment means controlling multiple elements rather than CO2 alone. Appending some necessary parameters to the code might therefore be a further step of progress for the next version.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose is to present parameters of a new design code, which is intended to control the indoor air environment for sports buildings in China.\nDesign/methodology/approach: - Based on requirements of international sports federations the characteristics of indoor airflow of sports buildings in China, include the parameters of indoor air environment of gymnasiums and natatoriums, air velocity, temperature, humidity and fresh air volume. Some illustrations and considerations are shown too.\nFindings: - Compared with the international standards and design guidelines, the Chinese new design code has more detailed parameters for high level sports game.\nResearch limitations/implications: - Not all the indoor air parameters of the code are mentioned in this paper. The designs of sports building are built for high level sports games in recent years of China, the parameters of class A are paid more attention to in the paper.\nPractical implications: - The contents of paper could be the major designs in heating, ventilation and air conditioning system in sports building.\nOriginality/value: - This paper could be a guide to advice the designing of indoor air environment of sports building, especially for the Beijing Olympic game in 2008, and is useful to help advanced HVAC design and analysis on the similar large space buildings.\n",
"role": "assistant"
}
] |
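The class system described in the sports-building record above reduces to a lookup: the event level a venue must host fixes its design class, and the class fixes which indoor-air parameter set applies. A minimal Python sketch of that mapping; the dictionary structure is illustrative, and the actual velocity/temperature/humidity/fresh-air values must come from the code's own tables, which the record does not reproduce.

# Event level a venue must host -> design class in the
# "Design code for sports buildings" (MOC and CSGB, 2003).
EVENT_TO_CLASS = {
    "international": "A",  # e.g. Olympic or Asian Games
    "national": "B",
    "province": "C",
    "local": "D",
}

# Indoor-air parameters the code specifies per class; class A is the
# most stringent. Values are to be filled in from the code's tables.
INDOOR_AIR_FIELDS = ("air_velocity", "temperature", "humidity", "fresh_air_volume")

def design_class(event_level: str) -> str:
    return EVENT_TO_CLASS[event_level]

print(design_class("international"))  # -> "A"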
The power of explanations in mitigating the ill-effects of service failures
|
[
"Service failures",
"Restaurants"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers can be found at the end of this article.\nIntroduction: Organizational justice theories are widely applied in the service recovery literature (Tax et al., 1998); because these frameworks enable us to better understand the dynamics of various service recovery efforts. Prior work has demonstrated that interactional justice strongly influences customer perceptions of service failures (Boshoff, 1997; Tax and Brown, 2000; Goodwin and Ross, 1992; Sparks and McColl-Kennedy, 2001; Smith et al., 1999). These previous studies have mainly focused on understanding the role of politeness, courtesy and empathy in shaping consumers' fairness perceptions and post-recovery satisfaction judgments. However, research examining the power of explanations in mitigating the negative effects of service failures is scant. The lack of interest in this area is surprising because attribution theories suggest that people tend to seek causal explanations for an event that is either surprising and/or negative (Folkes, 1988; Weiner, 2000).It is important to study causal accounts or explanations in a context of service failures for three reasons. First, cognitive appraisal of injustice should mediate people's emotional and behavioral responses to the situation (Lazarus, 1982). Second, consumers may make their own inferences about the fairness of the process if kept in the dark. Providing a causal account eliminates a worst-case reading of the service provider's motives and intentions (see Bies and Shapiro, 1987). Third, explanations can be a simple and cost-effective tool for managing customer perceptions.The goal of this study was to examine whether providing a causal explanation for service failure has a positive impact on customer evaluations. Specifically, this study investigated whether the timing of the explanation (prior to service delivery vs. after the failure has occurred) matters to customers. Typically, customers expect to be compensated for the inconvenience caused by the service failure (Smith et al., 1999; Tax et al., 1998). To tease out the effect of compensation on customer perceptions, we studied service failures under two conditions: no recovery and tangible compensation.\nCausal accounts and service failures: Social or causal accounts are often used in organizational settings because offering legitimate reasons can be an effective way of managing bad news (Rousseau and Aquino, 1993). Causal accounts can be defined as explanations for a person's responsibility for his/her actions (Bies and Shapiro, 1987, p. 201). Echoing these findings, the marketing literature postulates that explanations may have a positive impact on customer perceptions (Conlon and Murray, 1996; Bitner et al., 1990; Sparks and Callan, 1996). Yet, Tax et al. (1998) showed that causal accounts after a service failure could lead to either positive or negative reactions. Consumers had favorable feelings when the explanation helped them to understand the service failure and to obtain a fair resolution while their reactions were negative when the explanation was used to mitigate the organization's accountability.In this study, we argue that the timing of the explanation might influence customer perceptions. Previous work in organizational behavior suggests that two types of explanations can be used to manipulate impressions: anticipatory or retrospective excuses (Higgins and Snyder, 1989). 
An excuse can be defined as an explanation in which the offending party admits the behavior was bad, but denies responsibility for the behavior (Scott and Lyman, 1968). Anticipatory excuses refer to offering an explanation for poor performance in the future, whereas retrospective excuses involve explanations for poor performance that just occurred (Greenberg, 1996). Both of these types of excuses have been found to be effective impression management tools in organizational settings (e.g. Crant and Bateman, 1993; Bell and Tetlock, 1989; Gioia, 1989; Greenberg, 1996). However, we argue that explaining the cause for the failure after the fact is better suited to a service consumption setting. Most service delivery processes involve a sequence of events (e.g. ordering the meal, consumption of food, payment; see Verhoef et al., 2004). The well-known recency effect suggests that the last step in the process is the most important in the evaluation process (Anderson, 2000). Hence, offering an explanation at the end of the consumption episode should have a larger positive impact on customer perceptions than offering an explanation at the beginning of the process. Similar to organizational settings, customers evaluate the fairness of service recovery on three factors: outcomes, procedural fairness and interactional treatment (Goodwin and Ross, 1992; Tax et al., 1998; Smith et al., 1999). In other words, both what is done (tangible compensation) and how it is done (employee interaction with the customer) influence customer perceptions of service recovery (Levesque and McDougall, 2000). The most recent conceptualization of justice suggests that interactional justice can be further separated into two dimensions: interpersonal treatment and informational fairness (Colquitt, 2001; Colquitt et al., 2001). Informational justice taps into the perceived adequacy and truthfulness of information explaining the causes for unfavorable outcomes (Colquitt, 2001). Robust findings suggest that explanations exert a positive impact on people's fairness perceptions (see Shaw et al., 2003, for a recent review). In this research, we propose that offering an explanation for service failure should have a positive impact on customers' perceptions of informational justice, whereas offering tangible compensation should influence customers' distributive justice perceptions. Moreover, explanations prior to the event (anticipatory explanations) might be perceived as an attempt to manipulate the customer, and hence such actions are likely to result in lower perceptions of informational justice, in particular when combined with tangible compensation for service failure. Taken together, we propose the following: H1. Compared to a no compensation condition, offering tangible compensation will lead to higher perceptions of distributive justice.\nMethodology: To test the above propositions, the present research used a 3 (explanation: absent, anticipatory, retrospective) x 2 (service recovery effort: no tangible compensation or 20 percent off the total bill) between-participant design. In the no explanation condition, no explanation for the poor service was provided. Conversely, in the anticipatory excuse condition, the waitress tells the customers that they are short-staffed because several employees called in sick at the last moment. This same explanation was offered at the end of the meal in the retrospective excuse condition. 
In the tangible compensation condition, the waitress gave the customer 20 percent off, as indicated on the bill. Simulated dining experiences were enacted and videotaped to represent the six conditions. The subject pool was composed of undergraduate students enrolled in a large state university in the Northeastern US. Casual dining restaurants were selected as the target service category for the following reasons. First, students are real-life consumers of restaurant services. Second, service failures are commonplace in the restaurant industry (e.g. Tax et al., 1998). Participants and procedures\nResults: Manipulation checks\nDiscussion: The findings of this study add to the evidence that offering an explanation for a service failure can positively influence customer perceptions. Attribution theories postulate that people typically seek causal explanations for negative events (Folkes, 1988; Weiner, 2000). Hence, offering an explanation for a service failure can reduce consumers' own inferences about potential causes for poor performance. In this study, we examined the joint effects of tangible compensation and explanations on customer reactions in a restaurant setting. Today's consumers have learned to expect some form of compensation following a service failure (e.g. Tax et al., 1998). These high expectations were evident among our sample of college students. Nearly three quarters of the sample indicated that they expect to be compensated if the wait person brought them a wrong dish. Our results indicating a positive impact of tangible compensation on distributive justice are highly congruent with previous work in service recovery (Smith et al., 1999; Levesque and McDougall, 2000; Mattila, 2001). We extend previous findings by showing that offering tangible compensation for the failure also has a positive impact on people's tipping behaviors. More importantly, offering an explanation also influenced customer perceptions. Social or causal accounts have been widely used in organizational settings in order to mitigate the ill-effects of unfavorable outcomes (e.g. Bies and Shapiro, 1987; Rousseau and Aquino, 1993; Shaw et al., 2003). Previous work shows that explanations can be used as an impression management tool since offering a causal explanation for a poor outcome helps to reduce others' controllability attributions (Bell and Tetlock, 1989; Gioia, 1989; Greenberg, 1996). Yet, some research suggests that explanations might not always work in consumer settings. Tax et al. (1998), for example, showed that explanations might backfire. In this study, we explored the role of two types of explanations identified in the organizational justice literature (Higgins and Snyder, 1989). An anticipatory excuse involves offering an explanation for poor performance prior to the actual service episode (i.e. at the beginning of the meal when the customer is seated at the table). Conversely, a retrospective excuse refers to explanations for poor performance after the fact (i.e. at the end of the meal). In our context of service failures, customers seem to favor retrospective explanations. Offering a retrospective excuse for poor service was effective in enhancing customers' informational fairness perceptions, whereas anticipatory explanations were somewhat less effective. These results lend support to the well-established recency effect suggesting that the last steps in the service delivery process might have the most influence in the evaluation process (e.g. Anderson, 2000; Verhoef et al., 2004). 
In terms of tipping, both types of explanations and compensation had a positive effect on customers' tipping percentages. Consequently, service providers might be able to use explanations as an impression management tool (Greenberg, 1996). Finally, the results from our qualitative data clearly suggest that participants in this study thought that the service provider was lacking in interpersonal skills (i.e. not friendly). Although offering an explanation or compensation resulted in higher justice perceptions, the magnitude of the positive impact was not strong enough to compensate for poor interpersonal treatment. This is evidenced by below-neutral ratings, except for informational justice.\nManagerial implications\nLimitations and future research: Several limitations of this study must be recognized. Owing to the restaurant-oriented focus of this study, these results may not be generalizable to other service industries. Second, our stimuli involved service encounters that were clearly inadequate in terms of interactional treatment. Third, due to the research method employed in this study the dyadic nature of the service encounter is minimized. Fourth, the student sample somewhat limits the generalizability of the results. Future work should consider how other types of explanations might moderate customer responses to service failures. For example, attributing blame to an internal cause (e.g. a terrible headache) might produce different types of reactions to service failures. Additionally, some customers might be more influenced by explanations than others. For example, repeat customers might be more open to explanations that are perceived as sincere.\n",
"role": "user"
},
{
"content": "Purpose: - To examine the role of explanations in influencing customer perceptions of service failures, this study investigated the impact of two types of explanations: retrospective excuses and anticipatory excuses on customers' fairness perceptions and their tipping behaviors.\nDesign/methodology/approach: - A 3 (explanation: absent, anticipatory, retrospective) x 2 (service recovery effort: no tangible compensation or 20 percent off the total bill) between-participant design was used to test the hypotheses. Simulated dining experiences were enacted and videotaped to represent the six conditions.\nFindings: - The findings of this study imply that customer-contact employees might be able to influence customer impressions by offering a causal explanation for a service failure.\nResearch limitations/implications: - Owing to the restaurant-oriented focus of this study, these results may not be generalizable to other service industries. Second, our stimuli involved service encounters that were clearly inadequate in terms of interactional treatment. Third, due to the research method employed in this study the dyadic nature of the service encounter is minimized. Fourth, the student sample somewhat limits the generalizability of the results.\nPractical implications: - The results indicate that retrospective excuses might enhance customers' fairness perceptions more than anticipatory excuses. Yet, it is important to keep in mind that explanations for failures, even when combined with tangible compensation, are poor substitutes for inadequate interpersonal treatment.\nOriginality/value: - The findings of this study add to the evidence that offering an explanation for a service failure can positively influence customer perceptions. Moreover, the paper introduces tipping as a surrogate of satisfaction to the service recovery literature.\n",
"role": "assistant"
}
] |
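The 3 (explanation) x 2 (compensation) between-participants design in the record above is conventionally analyzed with a two-way factorial ANOVA on each dependent measure, such as tipping percentage. A minimal Python sketch using statsmodels; the data frame and column names (tip_pct, explanation, compensation) are assumptions for illustration, not the study's actual variables.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def factorial_anova(df: pd.DataFrame, dv: str = "tip_pct") -> pd.DataFrame:
    # Two-way ANOVA: explanation (absent/anticipatory/retrospective)
    # crossed with compensation (none / 20 percent off); the interaction
    # term tests whether explanation effects depend on compensation.
    model = ols(f"{dv} ~ C(explanation) * C(compensation)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)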
Integrated priority decision index for risk assessment in chaos: cost overruns in transport projects
|
[
"Cost overruns",
"Project chaos",
"Complexity-risk interdependency",
"Infrastructure transport projects",
"Analytical network processing"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Complex and dynamic nature of construction projects have been encountered by a series of cost overrun problems over the years (Asiedu and Adaku, 2019; Tabei et al., 2019). Consequently, approximately 64 percent of ongoing projects have been affected by cost under estimation issues globally (Olaniran et al., 2017). This cost failure is because of the presence of complexity (Fang et al., 2013) and the dynamic nature of project tasks (Pehlivan and Oztemir, 2018). Where, if there is uncertainty about project information, accurate project costing and future outcomes cannot be determined (Dikmen et al., 2007). The presence of cost uncertainty in a project is because of the prevailing interdependency between complexity and associated risk factors (RFs) (Fang et al., 2013; Qazi et al., 2016). Therefore, the baseline cost escalation (Flyvbjerg et al., 2004) adheres to the implementation of contingency plans for sustainable project progress in managing complexity and complexity-driven risk within an uncertain environment (Dikmen et al., 2007; Love et al., 2016).\n2. Literature review: 2.1 The Chaos Theory (CT) in project management\n3. Research methodology: procedural steps: The measurement of project risk and complexity starts through the judgment process of likelihood and severity of the risk and then being transformed into quantitative measures to address the interdependencies (Ock and Han, 2010). Such quantitative tools are based on different linear and nonlinear approaches (Fang and Marle, 2012). As mostly infrastructure construction projects show stochastic behavior, therefore nonlinear models are dominant to address this phenomenon of interdependency (Dekker, 2013).\n4. Results/findings: This study demonstrated IPDI for finding critical complexity-risk interdependency using a descriptive evaluation of large ITPs in the Asia-Pacific region. The results of the interdependency algorithm concerning cost-chaos illustrate that complexity and risk have a global priority factor of project cost overruns of 0.588 and 0.411, respectively. CEs explains major variations and lead to non-linear behavior in outcomes (Qazi et al., 2016). Conversely, risk criteria create a less negative impact on cost overruns causing a project to possibly turn into chaos, as the system is unable to address the uncertainty level of project risk timeously (Dekker, 2013). Subsequently, project complexity appears to be an important driver of cost-chaos. Figure 4 illustrates the interdependency network of an important attribute of complexity and risk criteria against cost overrun where each attribute shows the network interdependency in determining cost-chaos. The network interdependency weight of project complexity against risk is 0.612 and interdependency weight of project risk againstcomplexity is 0.386, respectively, in the cost-chaos network, which shows significant contribution for predicting chaos in a project as shown in Figure 4. The values of interactions vary according to cluster attributes, where each attribute shows a different level of interdependency within and across the clusters (Love et al., 2016). Table V illustrates the interdependency matrix of important attributes of complexity and risk criteria against cost overrun where each attribute shows the network interdependency in determining project chaos.\n5. 
Managerial implications: Cost overrun has proven to be a major source of chaos in projects, particularly in developing countries. It is important to address whether project experts deliberately consider the notion of interdependency between complexity and complexity-driven risks in chaos. The current work has contributed theoretically by articulating an efficient integrated approach for interdependency mapping of cost-chaos drivers, one that provides a better understanding for identifying key complexity and risk paths rather than focusing on complexity and risk independently. In addition, this work has evaluated the causes of cost-chaos in terms of complexity-risk interdependencies in construction through three simple steps: designing a structured hierarchical breakdown structure of potential CEs and RFs in transport construction projects; developing an IPDI using RT-Delphi and ANP to measure potential complexity-risk interdependencies; and classifying key cost-chaos alternatives in accordance with the level of severity associated with each cost alternative.\n6. Conclusions: Various risk assessment frameworks have been adopted to better evaluate the causes of cost overrun, but the issue persists due to the unique nature of each infrastructure project. This cost overrun problem usually drifts a project into a chaotic situation in which it fails to show further progress. Therefore, this study is undertaken to explore the critical elements of complexity-risk interdependency that create cost-chaos due to uncertainty in a dynamic system. Addressing the cost-chaos phenomenon in risk assessment is an important addition that helps a project achieve its goals and avoid uncertainty in progress.\n",
"role": "user"
},
{
"content": "Purpose: Cost overrun is inherent to project chaos, which is one of the key drivers of project failure. The purpose of this paper is to explore the critical elements of complexity-risk interdependency for cost-chaos in the construction management domain by utilizing a multi-criteria decision model.\nDesign/methodology/approach: A total of 12 complexity and 60 risk attributes are initially identified from the literature and using expert's judgements. For the development of a structured hierarchy of key complexity and risk drivers, a real-time Delphi process is adopted for recording and evaluating the responses from experts. Afterwards, a pair-wise comparison using analytical network processing is performed to measure complexity-risk interdependencies against cost alternatives.\nFindings: The findings of the integrated priority decision index (IPDI) suggest that uncertainties related to contingency and escalation costs are the main sources of cost overrun in project drift, along with the key elements such as \"the use of innovative technology,\" \"multiple contracts,\" \"low advance payment,\" \"change in design,\" \"unclear specifications\" and \"the lack of experience\" appear to be more significant to chaos in complexity-risk interdependency network.\nResearch limitations/implications: This study did not address the uncertainty and vulnerability exit in the judgment process, therefore, this framework can be extended using fuzzy logic to better evaluate the significance of cost-chaos drivers.\nPractical implications: These results may assist the management of cost overrun to avoid chaos in a project. The proposed model can be applied within project risk management practices to make better-informed technical decisions in the early phases of the project life cycle where uncertainty is high.\nOriginality/value: This research addresses the importance of cost overruns as a source of project chaos in dynamic systems where projects reach the edge of chaos and progress stops. A new IPDI index contributes toward evaluating the severity of complexity and risk and their interdependencies which create cost-chaos in infrastructure transport projects.\n",
"role": "assistant"
}
] |
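Behind the ANP step in the record above sits a standard eigenvector computation: each pairwise-comparison matrix yields a local priority vector (its principal eigenvector), and powering the weighted supermatrix to its limit yields global priorities like the 0.588/0.411 complexity-versus-risk split reported. A generic numpy sketch of those two operations, not the authors' implementation; the 2x2 example judgment value is assumed for illustration.

import numpy as np

def priority_vector(pairwise: np.ndarray) -> np.ndarray:
    # Principal eigenvector of a reciprocal comparison matrix,
    # normalized to sum to 1 (Saaty's eigenvalue method).
    vals, vecs = np.linalg.eig(pairwise)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def limit_supermatrix(W: np.ndarray, tol: float = 1e-9, max_iter: int = 10000) -> np.ndarray:
    # Raise a column-stochastic supermatrix to successive powers until it
    # converges; any column of the limit holds the global priorities.
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            return M_next
        M = M_next
    return M

# A 2x2 complexity-vs-risk comparison (the judgment value 1.5 is assumed):
print(priority_vector(np.array([[1.0, 1.5], [1 / 1.5, 1.0]])))  # ~ [0.6, 0.4]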
Patient-centered communication: dissecting provider communication
|
[
"Vulnerable populations",
"Latent class analysis",
"Patient-centered communication",
"Provider communication effectiveness",
"Provider courteousness"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Effective provider communication is essential for patient-centered medical care (Levinson and Pizzo, 2011). Patient-centered communication expands the conventional clinical paradigm to include patient views and perspectives, and also encourages the physician-patient relationship (Ishikawa et al., 2013). Increasingly complicated patient needs coupled with a dynamic healthcare delivery system, requires more effective patient and healthcare provider communication (Gordon et al., 2015). Patient-centered communication encompasses the following provider skills and behaviors:\nMethodology: Study design\nResults: The five-cluster solution was selected as the final model. The BIC values for the classifications with one to six clusters are shown in Table I. The conditional response probability patterns for the five clusters are shown in Figure 1.\nDiscussion: We examined the latent heterogeneity in patients' evaluation of patient-centered provider communication. In the overall sample, most respondents were satisfied and trusted their providers (75 and 70 percent, respectively). Across the nine provider communication items, which encompassed constructs relating to provider communication effectiveness and provider courteousness, we identified five distinct patient profiles/clusters with similarities and differences in patient satisfaction, patient trust and patient socio-demographics.\nConclusions: Our study found that patients receiving healthcare at free clinics rated their safety net providers' communication and courteousness highly. Most respondents expressed high satisfaction with and trust in providers. Communication effectiveness was the primary factor related to patient satisfaction and trust. Females and Hispanic patients are the demographic groups that benefited most from effective provider communication and courteousness. To our knowledge, this study is the first to examine provider communication individual components and how they relate to intermediate patient outcomes (satisfaction and trust) in free clinics. LCA proved to be a useful tool to more fully examine the communication effectiveness and courteousness constructs, which could lead to more tailored and accurate quality improvement, treatment and prevention interventions (Lanza and Rhoades, 2013; Platonova and Shewchuk, 2015).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the relationship between patients' provider communication effectiveness and courteousness with patients' satisfaction and trust at free clinics.\nDesign/methodology/approach: This cross-sectional survey (n=507), based on the Consumer Assessment of Healthcare Providers and Systems instrument, was conducted in two Southeastern US free clinics. Latent class analysis (LCA) was used to identify patient subgroups (clusters) with similar but not immediately visible characteristics.\nFindings: Across the items assessing provider communication effectiveness and courteousness, five distinct clusters based on patient satisfaction, trust and socio-demographics were identified. In clusters where communication and courteousness ratings were consistent, trust and satisfaction ratings were aligned with these domains, e.g., 54 percent rated communication and courteousness highly, which was associated with high patient satisfaction and trust. When communication effectiveness and courteousness ratings diverged (e.g., low communication effectiveness but high courteousness), patient trust and satisfaction ratings aligned with communication effectiveness ratings. In all clusters, the association was greater for communication effectiveness than for provider courteousness. Thus, provider courteousness was important but secondary to communication effectiveness.\nPractical implications: Investment in patient-centered communication training for providers will improve patient satisfaction and trust.\nOriginality/value: The study is the first to examine individual provider communication components and how they relate to patient satisfaction and trust in free clinics. LCA helped to more fully examine communication constructs, which may be beneficial for more nuanced quality improvement efforts.\n",
"role": "assistant"
}
] |
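The model-selection logic in the record above (fit one to six latent classes, keep the BIC-best solution) can be sketched generically. True latent class analysis uses categorical indicators and dedicated software; the Gaussian mixture below is a stand-in that demonstrates the same BIC-driven selection, and all names are illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def select_classes(X: np.ndarray, max_classes: int = 6, seed: int = 0):
    # Fit 1..max_classes mixture models; lower BIC is preferred.
    fits = [GaussianMixture(n_components=k, random_state=seed).fit(X)
            for k in range(1, max_classes + 1)]
    bics = [m.bic(X) for m in fits]
    return fits[int(np.argmin(bics))], bics

# Usage: model, bics = select_classes(item_scores); labels = model.predict(item_scores)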
The challenges to performance and sustaining mutual health organisations/health institutions: An exploratory study in Ghana
|
[
"Efficiency",
"Risk management",
"Quality health care",
"Effectiveness",
"Patient perception",
"Finance/economics"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In Sub-Saharan Africa (SSA), certain problems have created conditions whereby there seems to be serious inequity such that there is more people in the population who have either little or no access to orthodox health care (HC) when they are needed (McIntyre et al., 2006). Unfortunately, there is no longer universally free HC at the point of service (Magar, 2013). This option has proved inapplicable in the current HC context owing to economic policy reforms with implications for the health sector (Meessen et al., 2011). Nonetheless, the second health financing (HF) reform strategy emphasises the need to involve the private sector and private health financiers and encourage prospective users to share risk in HC delivery in developing economies (Mills et al., 2012). This approach uses health insurance (HI) especially, social health insurance (SHI) and is expected to enhance the chances of achieving universal health coverage, provide financial security for all against exorbitant HC costs and deliver fairness in financing for HC (Carrin et al., 2005).\nLiterature review: Saksena et al. (2011) found that MHI is associated with a higher financial risk protection and catastrophic health expenditure was almost four times less than in households with no coverage. Nsiah-Boateng (2010) reports a 22.6 per cent coverage rate and a 22.2 per cent community penetration rate in the Ga DMHIS (Ga District MHI Scheme). However, Pallas et al. (2013) identified 23 enabling factors and 15 barriers to scale-up sustainable community-oriented programmes. To improve performance and enhance CBHIS/MHO sustainability, service quality (quality for short) issues have to be addressed (Criel et al., 2002). Dudley et al. (1998) suggest that health plans (CBHIS) and providers have to focus on quality by developing mechanisms to correct risk differences among enrolees.\nTheoretical perspective: Neo-classical economics theory\nData collection: Our field study, using a qualitative research method, took place between November 2006 and January 2007 (This has been indicated as a limitation - see last paragraph).\nFindings and discussion: Increasing access to quality care for MHOs members\nPerformance, sustainability and access: Key challenges to MHO performance and sustainability, including inability to manage HI risks: moral hazard, adverse selection, fraud, abuse and cost escalation have been emphasised (Tang et al., 2012).\nSustaining efficient and effective HC services: Brugiavini and Pace (2011) found that introducing NHIS has a positive and significant effect on HC service use, although it had only a weak effect on out-of-pocket expenditure. The authors observe that there are problems with financial inflow into HC institutions; affecting their smooth operations, coupled with the excessive HC use by MHO members.\nConclusion: We evaluated MHO performance as a mechanism to enhance access to quality HC, exploring how HI risks influence MHO and health institution performance and sustainability increase access to quality HC for members in four DMMHIS/MHOs.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore challenges to the performance and sustainability of mutual health organisations (MHOs) and health institutions towards enhancing access to quality health care (HC) in Ghana.\nDesign/methodology/approach: Data were gathered through interviews and documentary review.\nFindings: Problems with late release of reimbursement funds for discharging with claims by the central government has impacted heavily on the financial and strategic management and decision-making processes of the MHOs and health institutions.\nResearch limitations/implications: The lack of in-depth analysis of the financial viability of the MHOs; and the limited number of schemes selected.\nPractical implications: Recommends the need to ensure prompt release of reimbursement funds by government to enable the MHOs to reimburse claims to health institutions.\nSocial implications: There is a perceived tension between the MHOs and HC institutions due to late release of reimbursement funds by the government.\nOriginality/value: Contributes to understanding of how the NHI Act influences the operations of MHOs and health institutions towards increasing access to quality HC and financing.\n",
"role": "assistant"
}
] |
The Thatcher government and (de)regulation: modularisation of individual personal pensions
|
[
"Modularity",
"Deregulation",
"Individual personal pensions",
"Industry change",
"Margaret Thatcher"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The (de)regulatory[1] agenda of the Conservative government elected in 1979, led by Margaret Thatcher, is, we suggest, an underexplored example of the \"financialisation\" of financial services product markets that occurred in the UK in the 1980s (occurring broadly at the same time as similar reforms in the United States and across Europe, see, for example Dixon and Sorsa, 2009; Krippner, 2012; Langley, 2004; 2007; van der Zwan, 2014). In this paper, we focus specifically on the relationship between the (de)regulation agenda and the modularisation of UK individual personal pensions. The individual personal pensions regime was implemented in 1988, following embodiment in the Social Security Act, 1986, and we argue that these events represent an important change event in the development of the wider UK pensions market, bringing to the fore the ideas of individual and personal control and responsibility for retirement provision[2].\nMethod: Given the paucity of studies concerned with the development of individual personal pensions in the aftermath of the election of the Conservative government in 1979, the inspiration for this paper was a retrospective, oral history study of the UK individual personal pensions product market between 1984 and 2014, conducted in 2014. In other words, the data set for this paper is part of a larger study of the sector that examined the relationship between industry development and product design, and modularity theory was a guiding theoretical lens. To explore the connections between changes in (de)regulation (at the industry level) and product design (the product level) between the period of mid-1980s to mid-1990s, we adopted an oral history data collection method (Thompson, 1988). The term \"oral history\" often encapsulates various forms of in-depth life history interviews, biographical interviews and personal narratives. Oral history is different from simple autobiography in terms of the degree to which the subject controls and shapes the process; oral history is interactive, drawing on another person's questions (Haynes, 2010; Thompson, 1988).\nFindings: Mid-1980s to late-1980s\nDiscussion: The (de)regulation agenda of the Conservative government in the mid-to-late 1980s was a pivotal and critical change period in the development of the UK individual personal pensions product market. The Social Security Act, 1986, and Financial Services Act 1986, enacted in 1988 carved out a new individual personal pension regime and ultimately transferred much of the obligation for pension provision from the state to consumer. Although the agenda was heavily politicised, regulation had a significant influence on the architectural choices of product design in the sector, which are arguably still playing-out today. Moreover, regulation in the two decades that followed, such as the Stakeholder Pensions regime (2001) and the pensions simplification agenda (2006) both led by the then Labour government, can all be interpreted as further attempts by government to better regulate the industry and ensure more flexibility, choice and protection for consumers.\nLimitations: With theoretical implications aside, this paper has some limitations. First, we rely on oral histories from 31 senior managers as our data source. We have not attempted to verify or triangulate their accounts with archival or secondary data. 
Our primary aim in this study was to reveal new discoveries about the potential relationship between (de)regulation and product design from actors who were actually involved in interpreting the (de)regulation in real-time and leading product design changes, and, therefore, our interviews provided access to primary data unavailable by any other methods. Nonetheless, we would welcome further future studies examining the relationship between (de)regulation and product and/or industry change using archival and secondary sources. We also recognise that the system property of modularity is a matter of degrees (Schilling, 2000). Product designs are unlikely to be fully \"non-modular\" or \"fully modular\" and often the degree of modularity a system exhibits sits between these two polar extremes. Nonetheless, our generic product design types \"made sense\" to respondents and their oral histories provide evidence of the trajectory to a \"more modular\" product design during the period.\n",
"role": "user"
},
{
"content": "Purpose: The (de)regulation agenda of the Conservative government, led by Margaret Thatcher, elected in 1979 is an important change point that has attracted only limited attention from management and historical research scholars. Thus, how (de)regulation in this era influenced the evolution of product design remains ripe for exploration. The purpose of this paper is to examine the UK individual personal pensions product market between the mid-1980s and mid-1990s to examine the relationship between (de)regulation - an industry-level factor - and its impact on architectural choices of product design - a product-level factor.\nDesign/methodology/approach: A retrospective, oral history research design with 31 senior managers in product development firms with first-hand experience of the change period was adopted.\nFindings: Findings indicate that the (de)regulation reforms and the context of the financialisation of product markets came to define how products were then designed, evolving product design from non-modular to near-modular, a trajectory that arguably continues until the present day.\nOriginality/value: The main contribution lies in examining the role of (de)regulation and financialisation as modularisation processes. The increasing modularisation of individual personal pension product design between the mid-1980s and mid-1990s provides further support for the body of scholarly work on modularisation processes and their relationship with industry change.\n",
"role": "assistant"
}
] |
Data collection after massive natural disasters (focusing on Bam earthquake, Iran)
|
[
"Natural disasters",
"Death rate",
"Data handling",
"Iran"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Disasters have been described as the most impending threats for human societies, and have affected many populations worldwide. According to the World Health Organization (WHO), about two billion people were affected by disasters and 600,000 lost their lives in the 1990s alone (World Health Organization, 2002).Disasters can affect societies in different ways, including health and socio-economic situations. These impacts can occur immediately after the disaster and can linger for a long time. The economical effects of a disaster can be on the individuals, such as the death of breadwinner (Charveriat, 2007), and on the broader macroeconomic community, such as inflation (Otero and Marti, 1995, pp. 16-18.).In order to measure the effects of a natural disaster there should be ongoing and routine data collection commencing immediately after a disaster. These data produce information needed for planning, management, and evaluation of the health services, with an ultimate aim of returning to the original health conditions, at both the individual and population levels (Lippeveld, 2001). The number and characteristics of casualties are considered to be one of the main aspects of a disaster effects study (Seaman, 1990). Also, mortality data such as maternal and child mortality can reflect the longer-term effects of a disaster, indicating the socio-economic effects of a disaster (Graham et al., 2004; Onwudiegwu, 1997), as well as the quality and quantity of health care services (World Health Organization, 2006, 2007; Schaider et al., 1999; Okafor and Aniebue, 2006).Iran is one of the most disaster-prone countries in the world. The most frequent disasters occurring are natural: drought, earthquake and flood. Bam is an ancient city on the old Silk Route located in Kerman province in south east of Iran, and was almost destroyed by an earthquake on 26 December 2003 (United Nations, 2004). This earthquake resulted in 31,828 deaths, 17,500 injured and 525 lost; additionally 75,000 were rendered homeless (Ashtiany et al., 2005).The aim of this study is to explore mortality data collection after a destructive earthquake in Bam, Iran.\nBackground: Health system in Iran\nMethods: This study is a part of a larger study carried out in Bam, a town located in Kerman province, and in Kerman, a city in the centre of Kerman province. This is a qualitative study and the data were collected via interviews and documents. The interviewees were selected so as to involve informants involved in all levels of the data collection procedure, including data collectors in the rural and urban areas, data compilers and data processors. There were also a few interviews with policy makers. Subjects were selected in the number of ways, starting with purposive sampling to cover the hierarchies, snowball sampling to widen scope, and random sampling of a subset of health houses. Another source of data were documents, there is minimal information available on the Iranian health information system on the internet. There are, however, some books published in Persian which provided useful information (e.g. A Series of Registration Rules and Regulation by Mansoor (2003)), these can be accessed through the official health institutions. 
WHO health metrics information framework was used to develop the interview guide, and in-depth interviews were used to gain rich information.\nResults: The results of this study relate to the number of deaths that happened immediately after the earthquake and were directly caused by the earthquake, as well as the number of deaths that happened within a few months following the earthquake.The results of the interviews with data processors showed that there are no records of deaths caused by the earthquake at the Health Ministry. Indeed, the Vital Registry is the only organization that recorded the deaths caused by the earthquake. Information from interviews and viewing some of the data produced by the Vital Registry found that these data are available in cross tables based on gender, age groups, and causes of death. The data are not accessible from the original forms in order to maintain confidentiality, unless there is judicial permission. The data are entered electronically using software designed by the Vital Registry, which is different from the software being used in the Health Ministry. This makes it impossible to directly export data from the Vital Registry to the Health Ministry. In addition, deaths caused by disasters are registered as deaths due to accidents, and therefore are included along with deaths from incidents such as car accidents, suicide and poisoning. Deaths are to be registered by the relatives, neighbours or friends of the deceased. The interviews found that double registration of a death, and falsely reporting a living person as dead were problems associated with this method of registration. The interviews also illustrated that due to the mass tragedy and numerous casualties it is impossible to exam the bodies and determine the exact cause of deaths.In terms of the recording of deaths that occurred after the earthquake, the results of interviews with data collectors, health staffs and policy makers revealed two types of problems, in both the provincial and local levels. At the provincial level, findings of this study show that, there was no routine connectivity between Kerman Medical University and Bam Health Network. At that time, Kerman Medical University did not maintain a supervisory role over the health network for a few months; instead Bam health staff worked in close relationship with the Red Cross and the Health Ministry. Interviews with the health staff in Bam found that initially the events were recorded on the patient's file and on the Red Cross Organization log books, which were in English. They were then were replaced by patients' files and logbooks in Farsi, the formal language in Iran. These files and logbooks were given to the Red Cross organization when their responsibility for health care delivery ended.At the local level, the Bam Health Network had collapsed after the earthquake, and there was no place to hand in the data collected from the urban and rural health centres. These data included the routinely collected mortality data from the rural and urban health centres. Also, there was no way to enter data for a few months, those involved with the data collection and data processing said there was no place to do the work and no working computers. The interviews also revealed that after the earthquake the authorities were distracted from routine data collection, and focused on specific data collection, such as infectious disease. The large scale of the tragedy created by the earthquake made the routine data collection less of a priority. 
During as interview with one of the policy makers it was mentioned that \"at that time [after the earthquake] nobody was thinking of routine data collection\".\nDiscussion: Health information system data, including mortality data, have an imperative role in decision-making processes. Cibulskis divided the utility of health information into four major groups: first, for planning and monitoring; second, for increasing organizational accountability; third, for securing financial resources; and fourth, for increasing the knowledge of systems in the long term (Cibulskis and Hiawalyer, 2002). In Iran the main policy decisions are made in the Health Ministry (World Health Organization, 2005), and therefore the Health Ministry is one of the main data users. However, the results of this study flag some serious problems which limited the data usage by the Health Ministry. In terms of data collection immediately after the earthquake the limitations are that the data at the Vital Registry are collected through individuals reporting it to the registry, while the Health Ministry collects the data through its health staff. This difference does not allow the Health Ministry to collect the data routinely on its own immediately after a large disaster. Instead it relies on the Vital Registry data, which are aggregated into forms of data presented as cross tables. This form of data might be insufficient for epidemiological research to recognise at risk groups during an earthquake, which is claimed to be one of the major issues in disaster relief functions (Nishikiori et al., 2006). This in turn might impede the planning and evaluation for the next disaster. But, even if there is a judicial permission for use of individual data, the inconsistency between the software used in these organizations makes it impossible to transfer the data from the Vital Registry to the Health Ministry. This problem was also pronounced in Romania as one of the obstacle in the health information system (Csiki et al., 2003). Also the way earthquake mortality data were recorded relied on the people's reports. This in turn limits the knowledge of the exact cause of death for victims, which is important in epidemiological research.Regarding the long-term effects of a disaster on routine data collection, several factors hinder the resumption of data collection shortly after an earthquake. First, restoring the routine data collection system to the pre-disaster conditions takes considerable time, this can reflect either the ability of rehabilitation or the priority setting after an earthquake. The latter can be further explained by two issues: initially, the tendency of policy makers to solve the daily and short-term problems rather than the longer-term problems (Sauerborn, 2000), and finally, little interest in the collection of routine data. This may be due to the lack of skills and knowledge of data usage among policy makers that can hinder the data utility (Saebo and Titlestad, 2004).Second, changes in conditions caused by the earthquake, such as living conditions, distract the policy maker's attentions away from the routine activities, instead focussing them on emergency and surveillance actions. Although these tasks are imperative in the critical conditions that occur in the aftermath of a disaster they should not undermine the routine tasks, particularly when the data are not only used to monitor the longer-term effects of a disaster, but also to make the governments accountable to their people (Cibulskis and Hiawalyer, 2002). 
Finally, low co-ordination between the national and international aids, in terms of sending routine data to the province was another problem after the earthquake. Although the role of national and international aids are imperative in the relief efforts after a disaster, poor co-ordination and a lack of defined tasks regarding routine data collection resulted in no data being sent to the Ministry of Health for a few months after the disaster. This time has critical effects on people's health due to changing socio-economic factors. And, these data have a critical role in recognising the problems that develop after a disaster, and can be used as important indicators during decision making.There is very little research on routine data collection following disasters, and this study could provide useful information in this important area. However, this study was conducted on the Bam earthquake, which was a massive earthquake. Therefore, the results might be different in less serious disasters, and other type of disasters such as floods.\nConclusion: The occurrence of natural disasters is increased over the time. Recent studies show that globally there has been an 18 per cent increase in the natural disasters frequency in 2004 and 2005 (Hoyois et al., 2006). This means that the preparedness and planning for future disasters should be a main priority for most societies. This study aimed to explore the problems that arose from the Bam earthquake in terms of data collection, which is an imperative part of disaster preparedness. With regards to the availability of good quality data after a disaster, in developing countries most problems originate from the poor infrastructures. Furthermore, this study found that poor co-ordination between national and international aids to collect and register data leaves a significant hole in the data collected from the disaster stricken areas. This data can provide important information for recognition of vulnerable people, as well as on health services quality and disaster relief performance. This study suggests a multifaceted effort in the global and national level to strengthen the health information systems as a disaster preparedness activity, to be done by considering its efficacy during and after a catastrophic event.\n",
"role": "user"
},
{
"content": "Purpose: - The aim of this study is to explore mortality data collection after a destructive earthquake in Bam, Iran.\nDesign/methodology/approach: - This is a case study of mortality data collection using a qualitative approach. The study data were collected through interviews with people involved with data collection and processing in Iran, and an evaluation of routinely collected data.\nFindings: - The results indicated that there were many limitations affecting data collection after the earthquake. These limitations are rooted in basic problems within the existing data collection system and a lack of co-ordination between the groups collecting data, including national and international aid groups that provided help after the earthquake.\nResearch limitations/implications: - The research is based on a very large and destructive earthquake; the results may be different for other natural disasters such as floods.\nPractical implications: - The main target group of this study is the decision makers involved with the disaster relief issues at national and international levels. This issue of data collection is imperative for future disaster aid.\nOriginality/value: - The study highlights the problems affecting routine collection of mortality data after a disaster, arising from the scant attention paid to proper documentation.\n",
"role": "assistant"
}
] |
An evaluation of the Balanced Scorecard® in equity valuation: The case of exchange ratio in the M&As of Taiwan's financial industry
|
[
"Balanced scorecard",
"Finance companies",
"Acquisitions and mergers",
"Performance measures",
"Equity capital",
"Taiwan"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: To promote global competitive advantage of domestic banks, the Taiwan Government enacted the Act of Financial Holding Company (FHC), encouraging financial institutions to engage in merger and acquisitions (M&As) to enhance their operating efficiency. The new law opens opportunities for major banks to exploit the potentials of universal banking and build firewalls between affiliates within the financial groups. In response to the new law, many Taiwanese banks have engaged in a series of consolidations of different types of financial institutions (commercial banking, life insurance, property and casualty, investment banking, securities, etc.). By the end of year 2002, 12 financial holding companies are established through the exchange of shares between the acquiring and the target banks.In the process of a merger, both the acquiring and target firms consider the characteristics of both firms. Exchange ratios reflect these characteristics and are a major concern of the shareholders of both sides since it can result in a wealth redistribution, the potential dilution or growth of earnings per share of the FHC. How to reach a fair valuation of the companies in the merger is a fundamental and difficult issue.A recent trend in valuation is the increasing emphasis on the intangible, qualitative and non-financial sides of the companies. Although accounting earnings are still an important indicator of valuation, more and more empirical evidence suggests that the returns-earnings relation has declined over decades (Brown et al., 1999; Collins et al., 1997; Francis and Schipper, 1999). Recognizing relevance lost in the performance measures of the traditional management accounting, Kaplan and Norton (1996a, b) developed the theory of the Balanced Scorecard(r) (BSC) as an approach to integrating financial and non-financial measures into management in the hyper-competitive environment.With a sample of 12 FHCs, the study investigates the relation between exchange ratio and the BSC measures. The primary empirical results are as follows. First, the regression R2s show that the financial perspective of the BSC explains about 50 percent of the variations in the exchange ratio. The inclusion of the other three non-financial perspectives improves the R2 to about 90 percent. This empirical result supports the claim that non-financial aspects are as important as financial perspective. Consistent with the theories of M&A, the empirical results show that the acquiring banks are better performers than the acquired banks as measured by the BSC. Finally, the results also show that small and good performers are more likely to be merger targets than bad ones.This study contributes to the literatures in performance evaluation, equity valuation and merger and acquisition. First, I empirically exploit the potential uses of the BSC in the area of M&A. Current research on the uses of the BSC mostly focuses on the aspect of strategic and employee compensation management (e.g. Hoque and James, 2000; Ittner et al., 2003; Malina and Selto, 2001). This study extends the literature on the value relevance of non-financial performance variables (e.g, Riley et al., 2003). Second, this study is also an extension of the M&A literature in that it provides a potential framework for the determination of an exchange ratio.The remainder of the study is organized as follows. 
The next section develops hypotheses, followed by a discussion of the sample, variables, and models in the following section. The penultimate section presents and analyzes the empirical results. The final section presents conclusions and the implications of the study.\nLiterature review and hypothesis: Over the past 20 years, increased global competition and deregulation have fostered a wave of consolidations through M&A in the banking industry in many countries. This wave of mergers has changed the organizational structure and operating environment of the banking industry. At the same time, this consolidation trend raises an important question regarding the wealth distribution between the shareholders of the acquiring and the target banks. A related comment on merger provided by Warren Buffet, the chairman of Berkshire Hathaway, is perhaps the most famous one. The risk is the managerial temptation of the acquiring to pay double to kiss the toad (see Berkshire Hathaway 1981 Annual Report cited in Foster, 1983). Many failed mergers involve overbidding (e.g. Lys and Vincent, 1995). Therefore, how to establish a realistic price for the target financial companies based on a careful assessment of performance measures is an essential to a successful M&A.Recent studies have shown that financial performance measures are losing their relevance in valuation and management (Brown et al., 1999; Collins et al., 1997; Francis and Schipper, 1999). At the same time, Kaplan and Norton (1996a, b) developed the BSC as an approach to integrating financial and non-financial measures into management in a hyper-competitive environment. The BSC translates mission and strategy into objectives and measures, organized into four perspectives: financial, customer, internal business process and learning and growth. In light of this, this study examines the relevance of BSC in the valuation and management of financial industry. I establish the following hypothesis:H1. Exchange ratio is related to both financial and non-financial aspects of the firms in the merger. Specifically, financial, customer, internal process and growth dimensions affect exchange ratio.\nResearch design: Exchange ratio and BSC\nEmpirical results: Sample source and selection\nSummary and conclusions: The recent M&A activities of FHCs in Taiwan provides us an opportunity to examine the potentials of the BSC in equity valuation. I investigate the relation between the BSC indicators and the exchange ratios involved in the M&A activities of FHCs. With a sample of 12 FHCs, the study investigates the relation between exchange ratio and the BSC measures. The primary empirical results are as follows. First, the regression R2 shows that the financial perspective explains about 50 percent of the variations in exchange ratio. The inclusion of the other three non-financial perspectives improves the R2 to 90 percent. The empirical results are in support of the claim that non-financial aspects are as important as the financial perspective. Based on the theories of M&A, our empirical results show that the acquiring banks are better performers than the acquired banks as measured by the BSC. Third, the results also show that a small and good performer is more likely to be a merger target.Our study contributes to the literature in performance evaluation, firm valuation and M&A literatures. First, I empirically exploit the potential uses of BSC in the area of M&A. 
Current research on the uses of BSC mostly focuses on the aspect of strategic and employee compensation management. This study also represents an extension of the literature on the value relevance of non-financial performance variables. This study extends the M&A literature in that it provides a framework for the determination of exchange ratio. Overall, the empirical evidence is consistent with extant M&A theories.The BSC reflects the future capability and intangible dimensions of firms. Therefore, one can use the BSC as a framework and examine the relation between intangibles of firms and equity valuation. This research issue is particularly important with the advent of a new economy.\n",
"role": "user"
},
{
"content": "Purpose: - To examine the value relevance of the Balanced Scorecard (BSC) for equity valuation.\nDesign/methodology/approach: - The investigations are based on the empirical relationships between a set of BSC indicators and the exchange ratios for a sample of 32 mergers and acquisitions (M&As) of 14 financial holding companies. Data sources are collected from machine-readable database, annual financial reports, industry reports and professional magazines. The study period covers years 2000-2002.\nFindings: - First, the BSC indicators explain as high as 90 percent of the variations of the exchange ratios. Consistent with the spirit of BSC, non-financial performance measures play an important role in the valuation of financial institutions, incrementally improving about 40 percent of explanatory power. Second, empirical evidence shows that acquiring firms outperform the acquired firms and target firms outperform the non-target firms in terms of BSC measures.\nResearch limitations/implications: - The small sample size and the limited set of the BSC indicators may constrain the applicability of the results to other industries and time periods.\nPractical implications: - The empirical results provide a concrete and practical framework for the valuation of intangibles and the application of the BSC in M&As.\nOriginality/value: - The study examines the BSC as a framework for the valuation of intangibles. The empirical results support the usefulness of the BSC in equity valuation and further corroborate a number of extant theories in the M&A literature.\n",
"role": "assistant"
}
] |
Oh, what happiness! Finding joy and purpose through work
|
[
"Performance",
"Management",
"Perception",
"Stress",
"Happiness",
"Work"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: There is a Buddhist story about a king who gave up his wealth, prestige, and distanced himself from all his important problems to become a monk. His meditation practices in a peaceful forest helped him realize how happy life could be, and he became one of the happiest monks of his time. One day, he decided to visit his old monk teacher, who had retired in the palace of a generous king in a neighboring city. The happy monk set himself at the feet of his old teacher and silently enjoyed the moments of togetherness, only now and then softly saying, \"What happiness! Oh what happiness!\"\n",
"role": "user"
},
{
"content": "Purpose: The aim of this paper is reaffirming the real purpose of studying, working, and acquiring a decent lifestyle, which is happiness.\nDesign/methodology/approach: This paper includes reflections on workshops with professionals, coaching insights, and literature review.\nFindings: Attaining happiness is not necessarily dependent on our employer but on our perception of our work. If our mind is at ease, and our outlook is positive, our work can be joy, and we can tread our personal path of happiness.\nPractical implications: May serve as a source of self-reflection to readers and potential reconsideration of the reasons behind their professional performance: while earning money is important to pay the bills and care for our families, we owe it to ourselves to be happy while doing it. Finding contentment in our work is the critical step to enjoying it and, thus, being happy doing it.\nSocial implications: Individuals who focus on acquiring and maintaining their own happiness will be more likely to also exude positive energy to others, thus elevating the chances of these others to pursue happiness as well.\nOriginality/value: This is a call for refocusing on the true purpose of all our actions. While it is actually rather simple, we seem to keep losing focus and digress into converting means into ends, while disregarding the real end of all our actions, which is happiness.\n",
"role": "assistant"
}
] |
Common mistakes in designing and implementing service guarantees
|
[
"Service guarantees",
"Qualitative research",
"Australian service firms",
"Service control",
"Australia",
"Customer services quality"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: Service guarantees are written promises of service performance declared through advertising and company literature, making offers of compensation if promises are not honored (Kashyap, 2001; Sum et al., 2002). They may be either unconditional or conditional, and are prevalent across many service industries, including the retailing, real estate, fast food, airline, telecommunication, transport, and leisure industries (Burch, 1993; Fabien, 1997; Henderson, 1997; Lewis, 1993; Maher, 1991, 1992), as well as professional services (Hart et al., 1992; Raffio, 1992; Reske, 1995), financial services (Berry, 1995), and education (Lawrence and McCollough, 2001; Magnuson, 1996; Maher, 1991; Ostrom and Iacobucci, 1998).The appeal of service guarantees lies in their potentially broad strategic application. A service guarantee may serve as a quality instrument for improving firm performance, a marketing device to provide competitive advantage through differentiation and the opportunity to command a price premium, a customer service tool to improve levels of satisfaction and increase the number of legitimate complaints, or a combination of all of these uses (Berry, 1995; Hart, 1993, 1988; Spreng et al., 1995; Tucci and Talaga, 1997; Wirtz, 1998).A recent review of more than 20 years of service guarantee research in key relevant journals notes that the vast majority of empirical research on service guarantees (87 percent of the 109 papers reviewed) focuses on the promotional and marketing effects of service guarantees (Hogreve and Gremler, 2009). These investigations implicitly assume that the guarantees under study have been effectively designed and implemented across the organization and have been carefully communicated to customers. Studying the effects of poorly conceived service guarantees could potentially result in a contradiction of their findings.Unquestionably, improving the design and implementation of service guarantees would enhance their effectiveness. A framework resulting from a study of companies with service guarantee experience sets out five distinct steps for the successful design and implementation of a service guarantee (Fabien, 2005):1. Analysis of market and internal factors.2. Service quality signaling.3. Guarantee design.4. Implementation and communication.5. Performance analysis.However, the actual process companies use to design and implement a service guarantee remains obscure. To what extent do companies actually follow these implementation steps and their accompanying guidelines? This study answers a call for research determining how companies can design service guarantee processes to enable employees to deliver service and satisfy customers (Hogreve and Gremler, 2009). The main objective of this study is to investigate the experience of companies that have implemented service guarantees and to identify common lapses in the development and implementation process. Insights from this investigation would inform future academic research in this field and assist practitioners in learning from the experiences of others.In addressing the overarching research objective, we begin by reviewing research issues from the service guarantee literature. 
We then outline our research methodology, and we follow this discussion with a presentation of our analysis and findings in the form of common missteps companies make in developing service guarantees. We conclude by connecting the contribution of this research to the current body of theory, providing recommendations for practitioners, and proposing issues for further research.\nResearch issues: Two themes dominate the literature, which spans more than two decades:1. factors relating to the design of service guarantees; and2. measurement of customer and firm-related outcomes (Hogreve and Gremler, 2009).These themes shape the direction of this study and we discuss them in turn.Designing service guarantees\nMethodology: To study the way companies design and implement service guarantees, we began by analyzing the guarantees within the sample for their particular design elements. These elements included type (conditional or unconditional), promises of compensation, and conditions for invoking. Appendix 1 (Table AI)presents a summary of the results of the content analysis. As the earlier section covering research issues shows, the choice of design elements has received a great deal of interest from academic researchers. The discussion of the types of guarantees in the sample provides a context for the findings.We found that each of the service guarantees met the criteria of being simple and understandable. All appeared to be easy to invoke with the exception of the bank case, where customers who spent more than five minutes in a queue and wished to invoke the guarantee needed to wait additional time to complete a detailed claim form.In terms of the types of guarantees introduced, only the hotel case had adopted a 100 percent unconditional guarantee as part of a world-wide initiative. The most comprehensive and innovative conditional service guarantee was that of insurance-B, which claims to be first in the world to offer a customer charter in a non-public organization. The company's charter outlines 21 service standards that customers could expect from the company and reinforces service promises by a $30 payment offer for any breaches of the guaranteed conditions. A national accounting firm independently audits the charter performance, and the company makes the results of the audit public. In total, nine companies offered conditional guarantees that promised either monetary compensation (for example, $5 for slow service in the bank case) or a refund of any charges in the event a customer invoked the guarantee (the most common form of compensation).The process of identifying common problems in the design and implementation of service guarantees relied on qualitative research, which is appropriate for a constructivism epistemological position where the goal is new theory development (Carson et al., 2001; Crotty, 1998; Healy and Perry, 2000). Qualitative research is also appropriate when a new slant on a phenomenon is required or where the subtle details of phenomena are not suited to an investigation using quantitative methods (Strauss and Corbin, 1990).We conducted a total of 22 in-depth, personal interviews across ten different service organizations representing nine industries. In the previous 109 studies of service guarantees, only five investigations consider more than one industry context (Hogreve and Gremler, 2009). 
Examining many industries in one study, as is the case here, allows the findings to be applied beyond a single case experience.Each of the companies studied operates in Australia, where we carried out the interviews. Seven conduct business nationally and one internationally, with the remaining two being locally based in the city of Melbourne. While the organizations in the study constituted a convenience sample, we made every effort to obtain sample variability in terms of types of industry, company size, and type of guarantee, resulting in an initial sample of 17 companies that advertised a service guarantee in company literature. We conducted interviews within these organizations until a convergence of views was obtained - that is, until no new themes emerged (Miles and Huberman, 1994). This process resulted in a final sample of the ten organizations listed in Table I.We selected individual company respondents using the key informant approach (Robson and Foster, 1989), which consisted of talking with managers who were personally responsible for or closely associated with the guarantee design and implementation process. In most organizations, we conducted multiple interviews to capture the views of the cross-functional unit involved.Interviews followed the recursive model of interviewing (Minichiello et al., 1995), as answers to open questions are more likely to reflect a respondent's own thinking and thereby improve the validity of the results (Dey, 1993; Patton, 1990). A topic guide directed the flow of the interviews. Each interview lasted between 45 and 90 minutes, and all of the interviews were audio-taped and later transcribed verbatim.The data analysis consisted of data reduction, followed by data display and conclusion-drawing/verification (Miles and Huberman, 1994). Owing to the exploratory nature of the study, data reduction for each of the 22 interviews employed mainly in vivo codes (Strauss and Corbin, 1990) and yielded a matrix table from codes developed from each interview. We then integrated and synthesized the displays to identify respondents' attitudes and opinions. For the data analysis, we used QSR NVivo6 computer-based analysis software (QSR International Pty Ltd, 2006).\nFindings: The findings regarding actual company practice reveal significant discrepancies from the recommendations of Fabien (2005) concerning good practice in the design and implementation of service guarantees. These variances between theory and practice include inadequate or non-existent pre-launch market research; ambiguous definition of the role of the guarantee; inadequate market testing of alternative guarantee promises; a lack of consultation with key functional managers during development; a lack of CEO commitment; ambiguous assignment of responsibility for ongoing management of the guarantee; and an absence of performance evaluation. We describe these common mistakes in detail below, under headings that refer to the steps in the guarantee implementation process (Fabien, 2005). Appendix 2 (Figure A1)presents a model depicting these mistakes.Step 1 - Analysis of market and internal factors\nDiscussion and implications for management practice: Our findings of Australian service guarantees expose many discrepancies between theory and practice. Our sample revealed that conditional service guarantees were much more common than the 100 percent unconditional guarantee which has been popularized in the academic and practitioner literature (Hart, 1988). 
Organizations favor conditional guarantees because they appear to be easier to perform against and less risky than unconditional promises.In terms of the expected outcomes, we found that Australian managers envisaged that beyond providing quality signals (Fabien, 2005), their guarantee would deliver marketing-, quality-, and customer service-related benefits to their organizations. However, the service guarantee often began life as a marketing device and shifted over time toward a change management tool monitored by a cross-functional team responsible to the HRM manager.An outstanding example of an effective guarantee implementation process operates at insurance-B in the form of as a customer charter, which has replaced the former quality system. The actual charter is a dynamic document, relying on continuous market data from customers and staff to inform new charter promises. The extant literature contains very little about this tool, which appears to offer benefits over traditional service guarantees by allowing companies to add new promises easily and publicly report audited performance scores.The non-performance of marketing research by most companies in the pre- and post-launch phases of the guarantee implementation process was surprising. These critical first and last stages of Fabien's (2005) model appear to have been neglected in practice, creating difficulty for managers in both designing strategic service guarantees and later evaluating their effectiveness.Given that service guarantee effectiveness depends on how well the guarantee is designed and implemented, this study offers many lessons for managers. We found that in general, service guarantees were ill-conceived, poorly implemented, or not monitored after launch. Comparison between theory and both good and poor business practice identified a number of common mistakes, including inadequate or non-existent pre-launch market research; ambiguous definition of the role of the guarantee; inadequate market testing of alternative guarantee promises; a lack of consultation with key functional managers during development; a lack of CEO commitment; ambiguous assignment of responsibility for ongoing management of the guarantee; and an absence of performance evaluation. These mistakes offer a roadmap and identify potential potholes for managers considering the introduction of a service guarantee.\nConclusions and implications for further research: Our study of 10 Australian service guarantees addresses an important gap in the service guarantee literature by exploring the companies' experiences and comparing those findings with the extant literature, in particular the model for the effective design and implementation of service guarantees (Fabien, 2005).The study of a cross-section of Australian companies is valuable in that it provides data beyond the single case-study experience typically reported in previous research. In addition, the study identifies a number of potential areas for further empirical investigation. First, given the number of companies that expected the guarantee to deliver benefits beyond customer outcomes, previous researchers' preoccupation with customer rather than organization outcomes seems unwarranted. Findings from previous investigations may need reconsideration, particularly those that relied on surveys or case studies rather than experimental designs. 
The assumption that all service guarantees are well designed now seems optimistic, and future studies should distinguish between the conditional and unconditional guarantees when reporting findings. Second, the findings concerning the outstanding performance of the customer charter at insurance-B warrant further empirical investigation, particularly vis-a-vis the charter's performance compared to traditional forms of service guarantees.Although it does not undermine the validity of our findings, one limitation of the study deserves mention. The study was exploratory and consisted of Australia-based organizations, and as with all exploratory research, extrapolation of the findings across a broader sample must be done with care.In summary, despite the publication of over 100 papers over the past two decades, the service guarantee literature remains undeveloped. This study makes an important contribution to this literature by exploring companies' experience in designing and implementing service guarantees. The findings reveal a number of common management errors, indicating that managers should think carefully about the design and implementation of service guarantees so as to maximize customer and firm outcomes.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to explore companies' experiences in designing and implementing service guarantees.\nDesign/methodology/approach: - The methodology relied on 22 in-depth personal interviews across a sample of ten Australian service firms.\nFindings: - The effectiveness of a service guarantee depends on how well a firm designs and implements it. It was found that service guarantees were generally not well conceived, implemented, or monitored after launch. Through a comparison of theory and practice, this study identifies a number of common mistakes, including inadequate or non-existent pre-launch market research; ambiguous definition of the role of the guarantee; inadequate market testing of alternative guarantee promises; a lack of consultation with key functional managers during development; a lack of CEO commitment; ambiguous assignment of responsibility for ongoing management of the guarantee; and an absence of performance evaluation.\nResearch limitations/implications: - The study employs qualitative research techniques and considers only Australian firms.\nPractical implications: - While the common mistakes offer cautions for managers when planning a service guarantee, some outstanding examples of successfully implemented service guarantees also emerged. A notable example is the customer charter, a more comprehensive conditional guarantee that avoids many of the pitfalls associated with traditional service guarantees.\nOriginality/value: - Previous studies do not address the experiences of a broad sample of companies that have designed and implemented a service guarantee. The findings in this paper extend the understanding of how service guarantees could become more effective and identify directions for future research.\n",
"role": "assistant"
}
] |
Latest trends and issues in the ASP service market
|
[
"Computer applications",
"Software tools"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The market for application service providers (ASPs) has been booming over the last two years. ASPs deliver application functionality and associated services across a network to multiple customers using a rental-pricing model. Many customers have found it more cost efficient to focus on their own core competencies, so they have turned to outsourcing IT functions and renting software applications through providers. Now the arena is changing again as the major software and hardware companies such as Microsoft, Dell and IBM have got involved in the ASP market. Increased competition for the ASPs may force them to change their focus and approaches to survive.\nEmergence of the ASP market: In 1989 Kodak outsourced its data center operation to IBM. Prior to this, smaller companies had contracted out some IT services such as programming when their own equipment and personnel were inadequate for their needs, but major companies had built their own IT departments to be self sufficient. Since then companies large and small have jumped on the outsourcing bandwagon, allowing application service providers to take over primary roles that IT departments had played before.There have been several reasons cited for the move to outsource IT functions and these reasons have evolved over the last decade as technology and customer needs have grown. In the early 1990s the primary reason given for outsourcing decisions was the cost-benefit of maintaining fully staffed and equipped IT departments in-house. It became increasingly difficult for companies to attract and retain well-trained IT personnel. There simply were not enough to go around. In addition, software applications were expensive and were constantly being upgraded and replaced. For many it did not make sense to invest in products with such short lives.Transaction cost theory has been used to account for the growing ASP market. While the transaction costs of acquiring outside services increases with increasing use of outsourcing, economies of scale of the company decreases. The company benefits however when it takes advantage of the economies of scale of their vendors. As a result internal coordination costs of the company are reduced. External coordination costs to the company vary with asset specificity of their outsource services. Customization of services by an ASP limits the potential application of those services to other customers, which increases the external coordination costs of the company receiving the customized services.The term application service provider (ASP) has been used to describe many different types of companies but has really only been a buzzword for the past two years. Some major ASPs define themselves as companies providing access to and services for software applications licensed to the ASP by the software vendors. Many ASPs are now integrating these applications from their vendor sources. Some ASPs house or host the information themselves in data centers while some rely on partnerships with vendors and others for data warehousing.First generation ASPs were able to host one or more specific solutions such as enterprise resource planning (ERP), customer relationship management (CRM) and e-commerce. They worked with existing infrastructures that were not originally designed for hosted environments (www.corio.com). 
Second generation ASPs provide integration of services between hosted and non-hosted applications, hosted applications and other outsourcers, and between ASPs. Corio, founded by Jonathan Lee in September of 1998, claims to be the first of second generation ASPs.At the end of the decade the ASP customers are demanding more. They need integrated services without sacrificing their own customer relations. Major ASPs are trying to distinguish themselves in their abilities to meet this need. Many have turned to partnerships with other providers and vendors to be able to offer all inclusive, single point-of-contact services to their own customers.\nSelecting an ASP: For selecting an ASP, the following guideline should be followed:* ASP's business strengths should be carefully examined. During this assessment process, ASP's capability to manage your business, volume and support requirements should be examined. Make sure that you find an ASP that can grow with your company. A good indicator of this capability is the ASP's growth. Have they consistently gained market share? ASP's capability to offer applications, client devices, network connections, and interfaces to existing systems and application need of the business. ASP's service level agreement (SLA), in addition to system performance and customer support should set specific levels of availability and throughput and define remuneration if these obligations are not met. The SLA is used to set customer expectations for availability, reliability and responsiveness. The benefits of a SLA should be apparent to both the customer and the vendor. The vendor needs to have specific guidelines within which to operate its business, while the customer needs a clear idea of what to expect. One example of an effective SLA would require the vendor's help desk to respond within 4 hours of receiving a call during normal business hours. The vendor knows that in order to meet the minimum requirements of the SLA, it will have the return the customer's call within the four-hour period. The customer, on the other hand, knows that the agreement is only in effect during business hours (Rotondaro, 2002). The SLA must be adaptable to new services or deliverables and, in general, should be adequate to meet the needs of both parties. A termination clause should be included in the SLA. It should also have both incentive and penalty clauses to ensure performance and help derive the vendor's level of compliance to a higher level.* ASP's security track record in protecting data and network communications should be examined. ASP must have best practices in place and should be willing to strengthen measures to meet the expectations of your customers and business partners. Pappalardo and Cox (1999) points out that many ASP are unspecific about security from cost and product perspective and no ASP uses security to differentiate itself. ASP should offer real time intrusion detection and firewall options and digital certificates. At a minimum, ASP should offer IP security compliant firewalls that can support 128-bit key encryption. Each ASP should have an incident response mechanism that clearly defines what will happen should disaster strike or a hacker get in. Incident response should cover everything from notification of the client to a focused strategy for dealing with the problem.* ASP's ability to optimize or customize applications. 
Customization from ASPs will always be limited; given the economies of scale they need to create by renting highly standardized applications to as many customers as possible. For that reason, ASPs help their customers in differentiating or gaining competitive advantage. Pappalardo and Cox (1999) point out that if an ASP had to customize and upgrade 50, 100 or eventually 1,000 individual customer deployments, it is easy to see why that service provider would keep customization to a minimum. Every time an ASP spends money on customization means it lost money because that altered software will most likely not be transferable to another customer.* ASP's financial situation is another key variable. Firms that do not have significant funding will not last. If the money is gone, so are they. ASP must have an exit strategy in place in case of problems, especially in the area of data recovery. Do they have strong partners and adequate capitalization? With an ASP, you are basically buying data management for your most sensitive material, so make sure that ASP will be around. Get to know the management team and the company's investors; know who is endorsing the organization.* ASP's service and training matters. Find out if ASP offers training and implementation services since your IT department will inevitably be smaller and less qualified in these areas. Training and support are part of ASP's core competencies.The key questions to be asked in the selection process of an ASP are:* Does the ASP have 24/7 support?* Do they have low-resolution time per call?* Do they have higher than 99 percent of system availability?* Is there scheduled downtime limited to 1 to 2 hours per week?* Does ASP have low response time to access network resources?* Does ASP have penalty/incentive clause in SLA based on performance?* Does ASP have termination clause?* Does the ASP support 128-bit encryption between customer and the ASP?* Does the ASP perform external penetration tests and audits on at least quarterly basis?* Does the ASP offer application specific training?* Is ASP's capable to optimize or customize applications?\nPricing models: ASP's are using different pricing models. Some of the most common ones are as follows.Per-user model\nThe bottom line: The architecture employed by ASPs is nothing more than a hosted, web-enabled application. Phillips (2000) points out that most user organizations are transferring only a few special applications to ASP until this model of software and licensing and usage is widely used. Internet based application software hosting is expected to become a major trend in software services and delivery over next few years (Kao and Decou, 2003). Internet based ASP model is largely untested and still evolving compared to other means of delivering services (Davidrajuh, 2003; Duffy and Dale, 2002). It is not clear, for instance, to what extent rented applications can be customized for delivery, since customization adds cost and limits marketability. While ASPs manage software for clients, they often outsource data hosting to other companies. These arrangements usually work smoothly since data centers have greater skill in that area than ASPs, but the clients should aware of higher risks associated with multiple relationships.If your answer to all the key questions is yes, some words of caution are still in order: not a single ASP is making money and they are not expected to do so for the next couple of years. The ASP of your choice may be merged with or acquired with another company. 
Jossi (2001) suggests that there is no way in the world you can predict and put into an agreement something that says, \"you can't go out of business,\" but you can put some protections into the agreement. Among the protections suggested is the ability to license software the ASP itself has created, as well as any other software it uses. The client should have the ability to transfer the used software to another ASP or to an in-house environment as quickly as possible to avoid downtime. ASPs should also allow immediate access to hosting facilities or subcontractors providing that service. An additional consideration is the need to protect electronic records residing in active systems. It would be wise to maintain in-house archives of all active records that could be accessed during a vendor system failure. This could require the creation and maintenance of a scaled-down, redundant computer system to be used during emergencies. This would prevent complete record loss in case the ASP closes its doors due to financial difficulties. Dobbs (2000) points out that with an ASP, the only major concern for the company's in-house staff will be securing an internet connection. A trouble-free internet connection is still a problem for a lot of companies. Connections in many cases remain slow, as companies that link to the internet via regular telephone lines experience dropped or failed connections due to overloaded lines. Only a few companies have the high-speed broadband access needed to support extensive use of an ASP. In terms of pricing models, ASPs may offer a rent-to-own option that allows the healthcare organization to assume ownership of the software eventually and move the software to a different site (typically in-house). Organizations could seriously consider this option if they anticipate high growth that will eventually require software customization. The ASP should be expected to pay penalties for poor response time and downtime, especially if the applications are mission-critical. Turico (2000) suggests that these penalties should be more than simply a fixed percentage of the monthly fee. For recurrent problems, for example, penalties might include escalating percentages of the monthly fee each time the problem recurs. Or the contract might include an exit clause that allows the healthcare organization to get out of the contract for free while retaining control of any customized software and data (Shin et al., 2002; Wang, 2001). The contract would need to specify that the clause could be effective only if the ASP failed to meet certain minimum service levels. Application service providers should develop software that lets customers manage both local and hosted applications from a single interface. Integration is a key challenge to such an effort because most customers run a mix of software, from homegrown applications to commercial and ASP offerings. Dale Fletter, manager of vendor relations for the California State Automobile Association, which has 4 million members in California, Utah, and Nevada, is intrigued by iSrvce. \"It seems to be unique,\" he says. \"The ability to [work with] our operation regardless of the hosting provider is something we haven't been able to find.\" The challenge, he says, will be finding hosting providers that will welcome such a product (Khirallah, 2002). The ASP industry seems to be maturing globally. But demand could be better. Bland (2002) says that since the dotcom \"crash\" businesses have been deferring their investments in e-business infrastructure. 
This has, understandably, affected the ASP industry, although it has not struck the \"death blow\" some were expecting. ASPs are gearing up to meet these markets. In a 2002 survey of 60 ASPs in 13 Asia-Pacific countries, IDC found that 65 percent are partnered with data centers, compared with 25 percent that own their own data centers. The rest use data center partners combined with their own facilities. And 77 percent of ASPs are targeting specific industries. Of these, retail, discrete manufacturing and financial services are considered the most viable.\n",
"role": "user"
},
{
"content": "Purpose: - To provide students and academia with an opportunity to gain an understanding of recent trends and issues regarding the application service providers (ASP) market.\nDesign/methodology/approach: - A range of recently published (1999-2003) works, which provide practical issues, is reviewed to explore the existing pricing models, selecting an ASP and risk/rewards associated with ASPs.\nFindings: - Provides guidance to businesses in selecting an ASP in terms of key questions to be asked and discusses the importance of selecting reliable ASPs. It also recognizes key areas academia should emphasize in teaching students outsourcing of IT.\nResearch limitations/implications: - It is not an extensive study of the issues and trends of ASPs. The paper does not review in detail the impact of outsourcing on organizational participants, which perhaps limits its usefulness in a complex business world.\nPractical implications: - A very useful source of information for businesses making outsourcing decisions. The paper provides impartial advice for students and academia and for those who want to conduct research on outsourcing evaluation.\nOriginality/value: - This paper provides important insights on recent trends and issues of ASPs, especially in evaluating the performance of ASPs.\n",
"role": "assistant"
}
] |
The moderating role of trust and the theory of reasoned action
|
[
"Competitiveness",
"Education",
"Knowledge sharing",
"Theory of reasoned action",
"Private education institutions",
"Trust in colleagues"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Knowledge is a primary resource of competitive edge for any organisation (Fernie et al., 2003; Penrose, 1995) and is a pre-requisite for success (Hansen et al., 1999; Sambamurthy and Subramani, 2005). Knowledge is vital to the long-term survivability of organisations and is seen as a way to have an advantage over rivals (Nonaka and Takeuchi, 1995). Organisations need to generate, use and transfer knowledge to continually advance their performance (Argote et al., 2003; Martin-de Castro, 2015; Wang et al., 2016).\nResearch Background: Private higher education industry in Singapore\nMethod: Sample\nResults: The principal component analysis is used to check on each of the scales. The cut-off point is 0.5 and the criterion to designate satisfactory item loadings and a Cronbach's alpha of 0.7 designates internal reliability (Nunnally, 1978). All measurement items of each variable loaded satisfactorily on a single component respectively. The Cronbach's alpha for each construct reveals the followings:\nDiscussion: The overall objective of the research study was to examine the relationships among perceived social pressure about knowledge sharing, attitude toward knowledge sharing, behavioural intent to share knowledge, trust in colleagues and knowledge-sharing behaviour. This research study contributes significantly to the extant literature on knowledge-sharing behaviour by integrating the role of trust with the TRA. Specifically, this study extends the conceptual model of the TRA by examining the role that trust in colleagues plays as a moderator. The part that trust in colleagues plays in moderating both the relationships between attitude toward knowledge sharing and perceived social pressure on knowledge-sharing behaviour illustrates a new theoretical perspective that accounts and conceptualises for the position of trust on knowledge sharing.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to examine the role of trust in colleagues and its relationships with the factors of the theory of reasoned action (TRA). Specifically, this study examines the relationships among perceived social pressure about knowledge sharing, attitude toward knowledge sharing, behavioural intent to share knowledge, trust in colleagues and knowledge-sharing behaviour.\nDesign/methodology/approach: The methodology adopted was a questionnaire survey of employees working in 34 member institutions of the Singapore Association for Private Education (SAPE). These institutions form the entire member list of the SAPE as of 2020. A total of 297 employees completed a self-administered and anonymous survey using a cross-sectional design. Multiple linear regression was used to test the conceptual framework.\nFindings: On the mediation effects, full mediation was found to affect attitude toward knowledge sharing on knowledge-sharing behaviour, and partial mediation was found to affect perceived social pressure on knowledge-sharing behaviour. On the moderation effects, trust in colleagues moderates both perceived social pressure and knowledge-sharing behaviour and attitude toward knowledge sharing and knowledge-sharing behaviour positively. Specifically, as the level of trust in colleagues increases, the impact of direct relationships also increases.\nResearch limitations/implications: Data for the current study were obtained at a single point in time and self-reported. The findings may be biased because of common method variance. Furthermore, this study was conducted in a specific industry in Singapore, i.e. the private education institutions, which limits the generalisability of the research.\nPractical implications: The results of this study indicate that managers need to encourage a higher level of trust between employees. Policies and processes could be enacted to promote building quality and trusting relationships between employees.\nOriginality/value: This study contributes to knowledge-sharing behaviour by integrating the role of trust with the TRA. This study extends the conceptual model of the TRA by providing a new theoretical perspective that takes into account the position of trust in knowledge sharing.\n",
"role": "assistant"
}
] |
The role of familial socio-political forces on new venture creation in family business
|
[
"Family business",
"New venture creation",
"Business innovation",
"Socio-political clout",
"Transgenerational entrepreneurship",
"Venture creation process"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Creation of new business venture has invited considerable scholarly attention (Gartner et al., 2004; Townsend et al., 2010) since the seminal work published by Gartner (1985). Researchers have reported many preconditions that facilitate new venture (NV) formation (Plummer et al., 2016; Rodriguez et al., 2009; Gomez and Korine, 2008; Aldrich and Ruef, 2006; Perrow, 2002; Carter et al., 1994). The NV creation process has been extensively studied from individual entrepreneurship and corporate entrepreneurship perspectives (Vogel, 2017; Marchisio et al., 2010; Yeung, 2000; Bhave, 1994; Shaver and Scott, 1991). However, recent discussion on NV creation has been extended to ecosystems such as family businesses (FBs) (Sharma et al., 2014; Sieger et al., 2011). FB is the dominant form of business organization across the world (De Massis et al., 2015; Tagiuri and Davis, 1996). FB context provides a unique ecosystem for creating and nurturing new business ventures. FBs exhibit distinct decision-making processes (Ensley and Pearson, 2005), socioemotional attachments (Berrone et al., 2012) and institutional overlaps (Tagiuri and Davis, 1996). The complex relational dynamics at play in FB (Sharma et al., 2012) may drive their NV creation process in unique ways. Though researchers have reported a high degree of NV creation activity in FB (Chittoor et al., 2011; Rodriguez et al., 2009; Aldrich and Cliff, 2003), our understanding of how the idiosyncratic features of FB influence the process of NV creation still remains limited (Alvarez et al., 2013; Steier, 2009; Aldrich and Cliff, 2003).\nNV creation process: theoretical foundation: Extant literature on NV creation primarily deals with four important facets - NV creation process, factors influencing the process, establishment of legitimacy of NV and the role of the proposer in NV creation. NV creation is a multi-stage process. For instance, Vogel (2017) suggests that the NV creation process has two phases: i.e. venture idea generation and venture opportunity development, evaluation and exploitation. Zahra et al. (2006) mention five stages in NV creation: opportunity recognition, evaluation, refinement, championing and implementation. They suggest that relational trust fosters activities needed for successful new business creation.\nDistinctiveness of the FB context: A family firm operates as an inter-institutional system composed of the business and family (Steier, 2009). Its decision-making logic is ingrained in both business considerations and family dynamics. While business considerations are often similar to those of non-FBs, family considerations are shaped by socio-political forces within the family. These forces arise out of the family's genealogy, quality of relationship and culture. For instance, the financial decisions of the family firm are based both on its market and financial goals and family wealth generation objective (Miller et al., 2010). While non-family firms invest in developing organizational expertise to improve their strategic position in the market (McKelvie and Davidsson, 2009), business families invest in developing the family's expertise. 
FBs do this with a focus on the long-term enrichment of the family's human capital for sustaining the family's business legacy (Sharma, 2008; Habbershon, 2006).\nNV creation process in FB context: FB researchers recognize that NV creation is important for FB survival, profitability and growth (Rogoff and Heck, 2003; Zahra et al., 2004). Business families are known to provide their younger members early exposure to and skills for entrepreneurship and opportunities to engage in entrepreneurial activities (Dyer and Handler, 1994; Au et al., 2011). Family firms are also known to promote family entrepreneurial spirit by nurturing spin-offs by the next generation (Au et al., 2013). Therefore, members of a business family are favorably predisposed to NV creation. The family also provides both monetary and non-monetary assistance to founders for creating NVs (Astrachan et al., 2003; Erikson et al., 2003). Chrisman et al. (2003) observe that family is often a critical resource for start-ups. NVs in the FB context have access to a number of unique family resources such as human capital, social capital, patient capital, survivability capital and governance structure (Sirmon and Hitt, 2003) that might improve their survivability and provide competitive advantage.\nData and methodology: The study was conducted using a triangulation technique advocated in management research (Ellis et al., 2017; Souitaris et al., 2012; Scandura and Williams, 2000), and also used in the FB literature (Kammerlander et al., 2015; De Massis et al., 2008). Triangulation involves the corroboration of evidence using multiple research methods, which helps the researcher overcome the limitations of any one technique and leads to more robust findings. For this study, evidence was triangulated using three research methods, i.e. observations from the extant literature, findings from in-depth interviews of 25 FBLs and insights shared by two FB practitioners (see Figure 1). The main research study was designed as an empirical qualitative study primarily based on the in-depth interviews of 25 FBLs.\nFindings and discussion: The analysis of interview discussions and practitioner insights on the NV creation process in the FB context led to four broad categories of statements[1]. The first category had statements focused on the process of NV creation and the stages in that process. The second category of statements described what socio-political forces influenced the NV creation process. Next was the group of statements that indicated how the incumbent leadership perceived, evaluated and decided the fate of the NV proposal and the motives that drove those decisions. The fourth category of statements was related to the proposer's socio-political clout in the family and the influence it has on various stages of the NV creation process. This section presents the statements and deduces four propositions from those statements.\nImplications: The study has significant implications for theory and practice. This research uses the lens of social identity theory and observes the familial socio-political influence of the NV proposer on NV creation in the FB context. The study also extends the SEW literature by registering the role of the SEW preservation considerations of the FB TMT in making NV creation decisions. Future research can quantitatively measure the phenomenon and validate our findings.\nConclusion: NV creation is critical to business growth and long-term survival. 
NV creation requires a proposer to champion the venture idea and win support across the business setup for its creation. In the FB context, the NV creation process becomes even more complex because of the owner family's involvement and its dynamics. The family's goals of growth and perpetual existence influence its business goals and NV creation decisions. We conclude that family entrepreneurship is built not only on the techno-economic feasibility of a business opportunity but also on the socio-political considerations in the family. The TMT is driven by the desire to strengthen its socio-political influence over the family. They view NV proposals by in-group members as contributing toward this objective and the family's SEW. For an NV to come into existence, it is crucial that the proposal attracts attention and support from the key decision makers and influencers in the family. This study indicates that the proposer's socio-political clout in the family creates strong forces of positive and negative energies and influences every stage of the NV creation process.\n",
"role": "user"
},
{
"content": "Purpose: New venture (NV) creation is critical to the growth and long-term survival of business groups. The purpose of this paper is to investigate the NV creation process in family business (FB) context and examine the influence of familial socio-political considerations and dynamics on venture creation processes.\nDesign/methodology/approach: The paper employs a triangulation technique drawing from the extant literature, observations from 25 in-depth interviews of FB leaders and insights from two FB practitioners and abductive reasoning to theorize on the NV creation process and the influence of socio-political considerations and dynamics within family.\nFindings: The results show that there are four distinct stages of the NV creation process in FB context. Familial socio-political considerations and dynamics greatly influence the NV creation process. These considerations and dynamics vary according to the socio-political clout enjoyed by the proposer. Leadership's predisposition to the proposer and the proposer's socio-political clout in the family determine whether an NV proposal leads to venture creation.\nResearch limitations/implications: The study extends NV creation literature by suggesting that in addition to the economic rationale, socio-political considerations play a critical role in venture creation decisions. Future research can validate the findings with quantitative analysis.\nPractical implications: FB members must garner strong socio-political support for their NV proposal. FB leaders must ensure that their NV proposal evaluation and resource allocation decisions are not unduly influenced by the proposer's socio-political clout.\nOriginality/value: The study views the NV creation process in FB context from the lens of familial forces at play. It identifies four distinct stages of the NV creation process and examines the role played by familial socio-political considerations and dynamics during each stage.\n",
"role": "assistant"
}
] |
Corporate characteristics and leverage: evidence from Bangladesh
|
[
"Profitability",
"Liquidity",
"Age",
"Debt ratio",
"DSE",
"Size of the firm"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Financial managers of the firm are trying to enhance the value of their business unit, funding, timely investment in profitable projects and optimal allocation of existing resources. In fact, in this competitive world, the company's manager should make the best decision by examining and evaluating different investment projects. Suppose, the management first evaluates different existing projects and then chooses the best one. Now it should supply financial resources required for the investment using one of the various funding methods. Each single decision made by the management can affect the firm's capital structure. On the other hand, investors while making investment decisions consider different factors.\n2. Research objectives: The research objectives of this study are as follows:\n3. Literature review and hypotheses development: 3.1 Size\n4. Methodology: 4.1 Sample design\n5. Result and discussion: 5.1 Descriptive statistics\n6. Conclusions: This study is an extension of previous research, where a set of corporate characteristic variables is considered to examine their association with the leverage of the firm. The objective of this study was to examine corporate characteristics and their influence on leverage. These characteristics include total assets (TA), total sales (TSE), return on assets (ROA), return on sales (ROS), liquidity, age, debt equity ratio (DER), current debt ratio (CDR), proprietary of equity ratio (PER) and current assets proprietors' funds ratio (CAPFR) of the firm. The findings of this study have contributions for the regulators and enforcement agencies such as Institute of Cost and Management Accountants of Bangladesh (ICMAB), Institute of Chartered Accountants of Bangladesh (ICAB), the Securities and Exchange Commission (SEC) and the Dhaka Stock Exchange (DSE). It will enable the regulatory agencies to aim at greater compliance with the local and international standards and will also enable them to enforce penalties for non-compliance. The limitation of the study is that only non-financial companies have been used as a sample. Hence, the results may not extend across all listed companies in Bangladesh. The study explores only six corporate characteristic variables; other factors influencing the leverage of the firm such as number of foreign shareholders, ownership structure and auditors' opinion could be explored in further studies.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to determine the influence of various corporate characteristics such as total assets (TA), total sales (TSE), return on assets (ROA), return on sales (ROS), liquidity and age on leverage of the listed non-financial companies in the Dhaka Stock Exchange (DSE).\nDesign/methodology/approach: A non-probability sampling technique has been used in this study, and the leverage of 106 companies listed in the DSE has been examined for the time period 2011-2015. Multiple regression models are used to estimate the influence of corporate characteristics on leverage and leverage is measured by the debt ratio, that is, total liabilities divided by total assets (TA).\nFindings: The results obtained from the regression models show that TA, ROA and age are negatively and significantly related to the leverage of companies.\nResearch limitations/implications: Considering only non-financial companies as the sample is a limitation. Hence, the results may not extend across all listed companies in Bangladesh. The study explores only six corporate characteristics variables; other factors influencing the leverage of the firm such as the number of foreign shareholders, ownership structure and auditors' opinion could be explored in further studies.\nOriginality/value: The finding of this study contributes to the regulators and enforcement agencies such as Institute of Cost and Management Accountants of Bangladesh (ICMAB), Institute of Chartered Accountants of Bangladesh (ICAB), the Securities and Exchange Commission (SEC) and the DSE. It will enable the regulatory agencies to aim at greater compliance with the local and international standards and also enforce penalties for non-compliance.\n",
"role": "assistant"
}
] |
Destructive creation
|
[
"Corporate strategy",
"Learning"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Every piece of business strategy ... must be seen in its role in the perennial gale of creative destruction; it cannot be understood irrespective of it or, in fact, on the hypothesis that there is a perennial lull (Schumpeter, 1942).\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to set out an analogy between the creatively destructive forces that buffet business and the similar forces that affect physical systems.\nDesign/methodology/approach: - The paper makes use of previous learning and work with numerous clients to illuminate certain aspects of the context within which business strategy is formulated.\nFindings: - The paper finds that the integrity and identity of any business is to some degree dependent on the external pressures exerted on it by the competitive environment; strategic success may be the greatest threat to future strategic success.\nOriginality/value: - The paper expresses ideas in a way that the author hopes they have not been expressed before.\n",
"role": "assistant"
}
] |
Relational governance in supply chain: a systematic literature review
|
[
"Supply chain management",
"Governance",
"Relational governance",
"Systematic review",
"Trust"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Over the past 20 years, an increasing number of studies have focused on the complex issue of supply chain (SC) organization, particularly the most appropriate interorganizational relationships and modes of governance that companies must adopt to improve their economic performance (Wacker et al., 2016; Dolci et al., 2017). To facilitate these relationships, it is generally understood that two distinct forms of governance manage most supplier relationships: contractual and relational (Poppo and Zenger, 2002; Lumineau and Henderson, 2012; Cao and Lumineau, 2015). The first form, aligned with the transaction cost theory (TCT) (Williamson, 1985), highlights the importance of the contract between trading partners and their formal rules of compliance (Lumineau and Malhotra, 2011) to protect themselves against opportunism and conflict. The second form, based on social exchange theory (SET) (Emerson, 1976) and relational view (Dyer and Singh, 1998), focuses on the relational governance, suggesting that relational instruments such as flexibility, solidarity and information sharing (Macneil, 1980; Heide and John, 1992; Griffith and Myer, 2005; Liu et al., 2009; Yeh, 2015) and trust are established, maintaining the relationship and reducing the need for pure authoritative relationships to minimize opportunism (Cai et al., 2009; Wilson et al., 2015).While governance research has focused on economic governance strategies, from a contractual perspective, recognition of relational governance strategies has attracted academic and managerial focus to the inclusion of relational standards to maintain the relationship and promote the objectives of both parties (Liu et al., 2009; Wacker et al., 2016; Dolci et al., 2017). First, research has focused on understanding whether the use of relational instruments to govern interorganizational relationships can be crucial to improving relationship performance (Yeh, 2016; Zhu et al., 2017; Dolci et al., 2017). Second, past research has focused efforts on investigating what conditions and characteristics of the relationship motivate the use of relational and/or contractual instruments between trading partners (Liu et al., 2009; Wacker et al., 2016).In this context, one question raised by scholars is to understand how relational mechanisms produce certain outcomes. That is, the decision about when to use relational instruments to govern relationships is probably driven by the key features of the relationship as well as by the context in which they are needed. Although some instruments of relational governance (such as information sharing, flexibility and solidarity) have been identified and analyzed empirically, their link with SC performance in the presence of various contextual factors has not been systematically investigated. To date, there is no comprehensive and generic review of which contextual factors motivate the use of relational governance instruments and the impact of relational governance on performance in interorganizational relationships of SC. This study intends to take a step forward in breaching this gap recognized in the SC literature by answering the following question:RQ1. 
What are the contextual factors influencing the selection of relational governance instruments in supply chains and what is their impact on the performance? To answer this question, a systematic review of the literature was conducted, using 15 keyword combinations and searches in 13 databases, resulting in 8,469 gross articles. The academic contribution of this research corroborates the current literature to support advanced studies of governance in the SC, providing a conceptual model with three propositions about the relationship between context, relational governance and SC performance, which may motivate further studies in this field.\n2. Theoretical background: 2.1 Relational governance and contextual factors\n3. Research methodology: The purpose of the literature review is to bring together and analyze a significant amount of information on a particular topic (Potdar et al., 2017). Thus, a systematic literature review (SLR) (Tranfield et al., 2003) was conducted to search for relevant studies on the subject. Seeking to identify the contextual factors influencing the selection of relational governance instruments in SCs and their impact on performance, the search was conducted through the combination of the following keywords (Table 1) found in the titles and/or abstracts of the articles. The searches of the articles were performed in the Cambridge, Academic Search Premier-ASP, SpringerLink, Taylor and Francis, ScienceDirect, Web of Science, Scopus, Emerald, Scielo, IEEE Explore, Inderscience, Oxford and Wiley databases with a temporal cut encompassing articles published from 2007 to 2017, because a preliminary search verified that from 2007 onwards there was an increasing number of publications addressing the theme. These databases cover a representative number of journals from different areas, with significant impact. Initially, a total of 8,469 articles were found (Figure 1). Then, the filtering procedures proposed by Campos et al. (2017) were applied. Duplicate articles (4,467), chapters of books and/or conference papers (93), and articles with titles (3,491) and abstracts (242) which were not aligned with the goal of the research were excluded, resulting in 176 articles that were read in their entirety. After their reading in full, 73 articles were excluded due to the absence of alignment, resulting in the selection of 103 articles (Figure 1). Only articles published in peer-reviewed journals were considered, as they would probably be of a higher quality than other non-peer-reviewed documents, such as conference papers and book chapters (Campos et al., 2017). After the 103 articles had been selected, they were subjected to a review, in which a summary of the information contained in each article was prepared using a spreadsheet organized under descriptive, methodological and context categories (Table 2). We used the research question as a structure for reporting the findings from this review. Where there is both conceptual and empirical evidence to support the relationship between relational governance instruments and performance or contextual factors, we feel more confident in integrating the findings. This was done through a series of three propositions.\n4. Findings: A descriptive background to the analysis of the thematic findings that follows is provided by locating them in terms of the chronology of the research, published periodical and the nature of the studies. 
A summary of the information on relational instruments, contextual factors and performance cited in the reviewed articles is presented in the Appendix. 4.1 Studies over time, published periodical and nature of the studies\n5. Discussion: Addressing the sub-questions in this review helped to gradually provide some information on the main question: What are the contextual factors influencing the selection of relational governance instruments in SCs and what is their impact on the performance? A number of different contextual factors, relational governance instruments and performance outcomes were identified. Based on the findings, it is possible to develop a conceptual framework, as illustrated in Figure 3. There are other contexts and alternative outcomes, and it is not clear from this review whether all outcomes are possible in all contexts. There may be mutually exclusive or managerially undesirable combinations. For example, relational instruments may not be available in an environment with a high degree of risk and uncertainty on which to base a shared understanding to reach an agreement on desired outcomes. New work is needed to further explore how different relational instruments facilitate a greater or lesser degree of different combinations of context and outcome. However, it is possible to infer the following propositions connecting contextual factors, relational governance instruments and SC performance: 5.1 The choice for relational governance instrument is influenced differently by the contextual factors\n6. Final considerations: The literature that addresses the question \"What are the contextual factors influencing the selection of relational governance instruments in supply chains and what is their impact on the performance?\" is diverse and rarely provides a perspective on context, relational governance and results. Our analysis allowed us to synthesize fragments of a range of works focused on the relational governance of SCs, using a variety of governance instruments, in various contexts and with various performance outcomes. Our findings provide a small number of categories for each of these elements (instruments, contextual factors and performance). This study advances the SCG literature by developing theoretical propositions to explain the effects of contextual factors on relational governance instruments and their performance, mediated by trust. Our findings derive from several theoretical approaches used by the reviewed literature to support the results. From the perspective of SET, relational instruments such as flexibility, solidarity and information sharing proposed by Macneil (1980) and trust are used to govern interorganizational exchanges because the immersion of social connections creates expected behavior patterns, reducing the need for just a contract to govern the relationship. In addition, SET suggests that attitude and behavior can affect social interaction (Wang et al., 2016) and, therefore, power is considered an antecedent that impairs the cooperative climate and deters trust (Cheng et al., 2008; Brito and Miguel, 2017). From the TCT view, the company is a governance structure that serves as a mechanism to minimize transaction costs, and these costs are calculated from transaction frequency and risk; our results support the view that a high risk of opportunism is negatively related to relational instruments, as partners seek to use contracts to ensure better financial performance and to protect the relationship. 
On the other hand, the relationship history and environmental uncertainty may be positively related to relational governance because, in addition to the history promoting a familiar environment for both partners, environmental uncertainty can make partners more flexible to adapt to changes, reducing the costs of drafting contracts. According to the RDT, where companies rely heavily on other SC business partners to survive, interdependence between partners influences them to adopt relational instruments to safeguard the relationship, probably because contracts are often expensive to design and cannot cover all possible future contingencies (Heide and John, 1992; Cai et al., 2009; Shi and Liao, 2015). From the commitment-trust theory, we propose trust as a mediating factor between contextual factors, relational instruments and performance, having a motivational role in relational governance instruments. Moreover, from social capital theory, national and organizational culture can help explain differences in trust and why partners will choose relational governance (Liao et al., 2012; Ozer et al., 2014; Cadden et al., 2015). In this sense, we encourage company managers to first evaluate the contextual factors and the type of trust around the relationship and then choose the governance instruments most appropriate for the expected outcome. Finally, we developed three broad and powerful propositions, based on empirical, conceptual and analytical studies, and suggested three standpoints for future research that can contribute to a better understanding of relational governance in SCs. However, further empirical study is required to confirm and expand our understanding of when trust and relational instruments are preferable for governing SCs. 6.1 Identification of gaps and challenges\n",
"role": "user"
},
{
"content": "Purpose: The goal of this research is to establish which contextual factors influence the selection of relational governance instruments in supply chains (SCs) and how these factors impact the expected performance.\nDesign/methodology/approach: A systematic literature review (SLR) identified 103 conceptual, empirical and analytical studies between 2007 and 2017.\nFindings: A conceptual framework is developed from the categorization of contextual factors, relational governance instruments and expected SC performance. The conceptual framework provides three propositions: (1) The choice for relational governance instrument is influenced differently by the contextual factors; (2) the impact that the contextual factors have on the governance instruments and SC performance is mediated by trust; (3) The SC performance is affected differently by the instruments of flexibility, solidarity and information sharing.\nPractical implications: The findings of this research can help business managers better govern and know the contextual factors and use different relational governance instruments and trust dimensions to drive the expected results of the SC.\nOriginality/value: The synthesis reveals contingencies of relational governance instruments in SCs for performance expected in different contexts and proposes a standpoint for further research in the area.\n",
"role": "assistant"
}
] |
Emotional intelligence and its relationship to workplace performance outcomes of leadership effectiveness
|
[
"Individual psychology",
"Leadership",
"Performance measures",
"Personality"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Despite a substantial amount of research on leadership, there is still much uncertainty about what is required to be an effective leader (Kets de Vries, 1993; Higgs and Rowland, 2000; Higgs, 2003). Questions still remain around why intelligent and experienced leaders are not always successful in dealing with environmental demands and life in general. Perhaps what leaders need is emotional intelligence (EI), rather than cognitive intelligence or specific personality traits?Surprisingly, there has been little research investigating this possibility. The few studies that have been published have tended to focus on leadership effectiveness from the view point of Bass and Avolio's (1990, 1994) transformational/transactional leadership model (Barling et al., 2000; Palmer et al., 2001; Gardner and Stough, 2002). This research has been valuable in understanding the link between EI and leadership, but it has yet to examine objective indices of leadership performance. Thus, we can not be certain if EI is related to actual leadership performance rather than perceived performance.\nEmotional intelligence: Popular definitions of EI refer to various things, including motivation, empathy, sociability, warmth, and optimism (Mayer et al., 2001). EI approaches can be classified into two broad categories:1. ability models (Mayer and Salovey, 1997); and2. mixed models.Ability models conceptualise EI in a similar way to cognitive intelligence (i.e. intelligent quotient (IQ)). EI is assumed to develop over time, be correlated with measures of IQ, and be measurable with a test based on performance (Ciarrochi et al., 2000). In contrast, mixed models of EI incorporate both non-cognitive models (Bar-On, 1997) and competency-based models (Goleman, 1995). These mixed models tend to overlap or \"mix\" with traditional models of personality and tend to utilise self-reports as their primary mode of assessment. Both models measure EI in different ways.There is now substantial evidence to suggest that self-report measures of EI correlate with personality (Ciarrochi et al., 2000, 2001; MacCann et al., 2004). Such overlap may make interpretation difficult. One can always question whether a result involving self-reported EI is really just a replication of a previous result using a similar personality measure. For example, research has clearly demonstrated a link between self-esteem and mental health (Ciarrochi et al., 2003). Thus if an EI measure correlates substantially with self-esteem and mental health, is the EI test measuring something new, or is it replicating previous self-esteem effects?While self-report measures have received considerable criticism, they have also shown utility, and there is evidence that some of them can be distinguished from other personality constructs (Van Rooy and Viswesvaran, 2004). However, an important point is that interpretation can sometimes be difficult, due to self-report measure overlap with personality.Another potential limitation of self-reports is that they are subject to reporting bias, especially in organizational contexts where people might be motivated to fake good. Presumably, ability measures are more difficult to fake good, given that people do not know the \"correct\" answers ahead of time (Ciarrochi et al., 2000; MacCann et al., 2004; Dawda and Hart, 2000).In contrast to self-report measures, EI ability measures correlate only slightly with personality. 
They also tend to correlate modestly with intelligence, as would be expected from Mayer and Salovey's (1997) theorizing. For this study the ability model of EI as defined by Mayer and Salovey (1997) will be used:... an ability to recognize the meanings of emotions and their relationships, and to reason and problem-solve on the basis of them. EI is involved in the capacity to perceive emotions, assimilate emotion-related feelings, understand the information of those emotions, and manage them (p. 267).\nEmotional intelligence and leadership effectiveness: Past researchers suggest that EI will be linked to transformational leadership style (Barling et al., 2000; Palmer et al., 2001; Gardner and Stough, 2002). Transformational leaders are seen as those executives that are able to create a vision, communicate this vision, build commitment amongst subordinates to the vision and model the vision within the workplace. Transactional leaders are viewed more as managers that maintain the status quo. Their focus is on linking job performance to rewards and ensuring subordinates have the necessary resources to undertake their roles. It is felt that as transformational leaders are able to deal with strategic matters more efficiently and in turn are able to build commitment in employees, these leaders are more likely to take an organization forward. Thus the assumption is that transformational leaders are more effective than transactional leaders, at least in some instances (Bass and Avolio, 1994; McShane and Von Glinow, 2000). Palmer et al. (2001) administered a self-report EI measure to 43 managers in order to evaluate the link between EI and leadership style. They found significant correlations with several components of the transformational leadership model. Specifically, the inspirational motivation and individualized consideration components of transformational leadership correlated with the ability to monitor emotions and the ability to manage emotions. Similarly, Barling et al. (2000) conducted an exploratory study on the relationship between EI and transformational leadership. Their results suggest that EI is associated with three aspects of transformational leadership, namely, idealized influence, inspirational motivation and individualized consideration. The leaders who report exhibiting these behaviours were assumed to be more effective in the workplace. Gardner and Stough (2002) and later Palmer et al. (2003) also examined the relationship between a self-report measure of EI, using the Swinburne University Emotional Intelligence Test (SUEIT) (Palmer and Stough, 2001), personality, and effective leadership as measured by the multifactor leadership questionnaire (MLQ) (Bass and Avolio, 1990). Their results indicated that EI, specifically the ability to perceive and understand emotions in others, accounted for the majority of the variance in transformational leadership when compared to other personality measures. Dulewicz and Higgs (1999) examined the link between self-reported EI and job competence, and unlike many previous studies, did not focus on the transformational-transactional model. These researchers looked at leadership effectiveness from the perspective of progression within the hierarchy of an organization amongst 58 managers from the UK and Ireland. 
Using a self-report measure of EI, which they derived from a job competency survey, they found that EI was able to explain a greater proportion of an individual's advancement than either cognitive intelligence (also derived from elements of the job competency survey) or personality traits (using the 16 personality factor and organizational personality questionnaire). In summary, the available research supports the hypothesis that EI is linked to indices of leadership style and effectiveness. This research has focused on self-reports of EI and of leadership style. Research is needed to evaluate whether an ability-based test of EI is related to relatively objective (or non-self-report) measures of leadership effectiveness. Leader effectiveness has always been difficult to measure as objective criteria are often absent (Murensky, 2000). Some have argued that emphasis must be placed on shareholders' return on investment (Bass and Avolio, 1990), while others have advocated for a more balanced approach which also incorporates non-financial measures (Kaplan and Norton, 1996). Within the Australian Public Service, two measures of leader effectiveness are often used (the performance management system and 360-degree measures of leadership behaviours espoused within the workplace). They are measures that are intended to assist one's understanding of whether a leader has managed to attain organizational goals in such a manner that the organization is also able to grow. The purpose of the performance management system is to evaluate an employee's performance in achieving agreed business outputs (e.g. increased product turnover) in the previous financial year (known as the \"what\", i.e. what has been achieved?) and to evaluate how the employees demonstrate the expected leadership behaviours in achieving those outputs (known as the \"how\", i.e. how has it been achieved? Did the executive model the core values of the organization?). The \"what\" and \"how\" evaluations highlight two separate but related aspects of an individual's performance (Management Advisory Committee, 2001). The performance management system is seen as a good indicator of an individual's leadership effectiveness (Management Advisory Committee, 2001). That is, does an individual meet business outcomes in such a manner that they not only achieve results but also build effective working relationships? This leads us to our first hypothesis, which is the core focus of this study. H1. An ability-based model of EI is positively associated with effective leadership as measured via a performance management system.\nAbility-based measure of EI - relationship with personality and intelligence: A new ability-based measure of EI was used in this study, the Mayer-Salovey-Caruso Emotional Intelligence Test, Version 2.0 (MSCEIT V2.0; Mayer et al., 2002). The MSCEIT is intended to measure the four dimensions of EI as postulated by Mayer and Salovey (1997): 1. perceiving emotion accurately; 2. using emotion to facilitate thought; 3. understanding emotion; and 4. managing emotion. The MSCEIT is based on the premise that EI involves problem solving with and about emotions (Mayer et al., 2003). This concept is quite different to the many self-report measures of EI in that it does not correlate highly with personality, and tends instead to correlate modestly with IQ (Ciarrochi et al., 2000; MacCann et al., 2004; Dawda and Hart, 2000). In a recent study by Mayer et al. 
(2003), the MSCEIT was found to show reasonable reliability and support for the dimensional structure of EI postulated by Mayer and Salovey (1997). In another study, Brackett and Mayer (2003) found the MSCEIT to be distinguishable from well-studied measures of personality and well-being and to predict important life criteria such as drug use, alcohol use and academic achievement. Other studies have found dimensions of the MSCEIT to predict high performance on a cognitive decision-making task (Day and Carroll, 2004), low levels of social deviance (Brackett et al., 2004) and higher levels of intelligence (MacCann et al., 2004). These results suggest that the MSCEIT is reliable and valid in measuring something other than personality and well-being, and relates to important outcomes. This leads us to a second and third hypothesis. H2. An ability-based measure of EI is distinct from the Big Five personality factors.\nMethod: Participants\nResults: We conducted preliminary analyses on the relationship between EI and leadership effectiveness. Following these analyses, the study focused on the predictive and incremental validity of the MSCEIT (relative to personality and cognitive intelligence). Emotional intelligence and leadership effectiveness\nDiscussion: The findings suggest that executives higher on EI are more likely to achieve business outcomes and be considered as effective leaders by their subordinates and direct manager. Regression analysis revealed that EI, specifically the capacity to perceive emotions, was able to predict effective leadership. These results may have important implications for how we performance manage, select and develop executives. However, caution must be taken in making too wide a generalization of these findings, as the sample size is small. In terms of performance management, it is important for an executive to be able to not only deliver outputs (\"what\" performance), but also to deal effectively with colleagues and staff (the \"how\" of performance - MAC, 2001). It may be common for executives to have \"what\" but not \"how\" skills. For example, a technical specialist may perform complex tasks tenaciously and manage to produce business outputs, but may be ineffective at managing his or her subordinates, leading to staff turnover and underperformance. The results of this study show that EI may be useful in identifying who is and is not likely to deal effectively with colleagues and staff. A central theme of the ability-based MSCEIT is that it has incremental value over both personality and cognitive intelligence (Mayer et al., 2002). The overall EI score did not correlate with any of the 16 personality factors. Some of the subscales of the MSCEIT correlated modestly with warmth, vigilance, privateness, and social boldness. These results support previous research, which showed that MSCEIT scores are distinguishable from personality (Brackett and Mayer, 2003; Brackett et al., 2004). Concerning cognitive intelligence, correlations were found between the total EI score and verbal IQ, performance IQ and full scale IQ, supporting our final hypothesis. Worth noting is that the executives in this study had significantly higher IQs than the average adult population. This pattern of findings suggests that an executive may need a high IQ to get to the management or executive levels, but once there, IQ does not discriminate between better or worse performing managers. 
It may be useful to evaluate whether the MSCEIT can be used to help select the best performing managers. The correlations between EI and performance measures of leadership effectiveness were modest (rs<0.45). However, this level of correlation is the same or higher in magnitude as many other correlations observed in the personnel selection context (Cook, 2004). For example, assessment centres show correlations with performance that often vary between 0.33 and 0.43. Indeed, many of the most important relationships observed in psychology occur at the r=0.35 and below range (Mayer et al., 2001), and correlations of this magnitude can lead to substantial increases in selection success rate (e.g. in picking high quality managers (Rosenthal and Rubin, 1982)). The research had some limitations that will need to be addressed in future research. It will be interesting to evaluate the link between EI and leadership in larger samples, and across different industries. It will also be important for research to evaluate the ability of EI to predict future performance. We hypothesize that EI leads to better executive performance. However, it is possible that good executive performance leads to higher EI. One way to resolve this problem is to conduct a longitudinal study that involves measuring EI before newly hired executives start the job. This would allow one to establish if EI skills were likely to be antecedents to managerial success. In conclusion, EI shows much promise in the organizational domain and is worthy of further investigation.\n",
"role": "user"
},
{
"content": "Purpose: - This study seeks to investigate the relationship between emotional intelligence (EI), personality, cognitive intelligence and leadership effectiveness.\nDesign/methodology/approach: - Senior executives (n=41) completed an ability measure of EI (MSCEIT), a measure of personality (16PF5) and a measure of cognitive ability (the Wechsler abbreviated scale of intelligence (WASI)). Leadership effectiveness was assessed using an objective measure of performance and a 360deg assessment involving each leader's subordinates and direct manager (n=149).\nFindings: - Correlational and regression analyses revealed that higher EI was associated with higher leadership effectiveness, and that EI explained variance not explained by either personality or IQ.\nOriginality/value: - This paper establishes a link between EI and workplace measures of leadership effectiveness.\n",
"role": "assistant"
}
] |
Tourism in Niagara: identifying challenges and finding solutions
|
[
"Tourism",
"Hospitality management",
"Economic trends",
"Canada"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nThe Niagara region: The history\nTourism: World\nKey trends: The first-ever special issue of an international refereed journal devoted to \"Tourism and hospitality management in Canada\" was published in the year 2003. According to the editors of this special issue (Jayawardena et al., 2003), there are three major trends that are shaping the managerial landscape for the Canadian hospitality and tourism industry in these early years of the twenty-first century. Five years later the relevance and the importance of these trends seem to increasing.1. Consolidation. In Canada and the USA, distinct perhaps from the experience in some European countries and Australia, the effect of consolidation of ownership has been pronounced. In the lodging sector, across both the leisure and business parts of the industry, the power of branding, the investment climate, and global strategies for major corporations has virtually mandated alliances, mergers and consolidation activity as the preferred pathway to growth. The result is that there are fewer, and bigger, and more complex interlocking patterns of ownership unprecedented in Canadian hospitality and tourism. The significance of the \"consolidation trend\" is much more, however, than an observation on the shift in industry structure and market share from independent operators to larger corporations. The consolidation trend is also impacting related industries such as visitor attractions, gaming and wine making. Collaboration in tourism development, destination marketing and binational tourism are all related to the trend of consolidation. This trend is directly linked to the second trend.2. Professionalization. For the first time in the hospitality and tourism industry the \"through the ranks\" career path is not the only way to the top in a managerial sense. With bigger and fewer corporate interests, where ownership of hotel interests or foodservice companies are only part of a portfolio, the operator or \"mine host skills\" may not carry through to the executive ranks. Increasingly, the ability to manage the balance sheet as well as the income statement, the ability to manage owners as well as customers, staff and the community, and strategic abilities make the difference between being a general manager and becoming a regional vice president.3. Recruitment and retention. Recruitment and retention is the biggest problem faced by the industry now. The demographics are relentless. For entry-level positions the traditional youth market is getting smaller and wage expectations are capped by economic reality. There are competing occupations that appear more attractive to youth. Corporations in Canada are currently influencing the immigration policy, implementing grey market employment strategies and deploying new service technology in order to deal with the looming labour market crisis (Jayawardena et al., 2003).A total of five years after the initial prediction of the above, these three themes continue to shape the managerial landscape for the Canadian hospitality and tourism industry. These themes also impact tourism in Niagara and the region's key challenges.\nWhat is WHATT?: The past, the present and the future\n2007 WHATT roundtable in Niagara: After a brief welcome to the packed house of around 250 participants by the Dean, Faculty of Hospitality and Tourism of Niagara College, the author of this paper provided the backdrop to the day's proceedings. 
The moderator made his opening remarks and as a prelude showed a video prepared nine years ago by a sustainable development committee of the region. This video depicted a multitude of dynamic tourism attractions and recreation and leisure opportunities available in the Niagara region. The moderator's remarks set the stage for talks about sustainability, collaboration with all levels of government, and the challenges to overcome such as recovery from the effects of 9/11 and SARS. New challenges for the tourism industry would include the current border crossing issues and competition, both from Niagara's neighbours, and globally. In concluding his opening comments, the moderator stated that the panel should not try re-inventing the wheel, but should look at the past and the present, learn from the past, and then look forward.Arlene White was then asked to summarize some of the key outcomes and challenges from the binational conference between New York State and the Province of Ontario, held a few days before the roundtable. She summarized the key points as:* a blueprint for sustainable tourism \"The Niagara Gateway Project\" was created ten years ago to raise the profile;* a binational tourism alliance was put in place to raise the profile of cross-border regions across North America and of the tourism industry;* one of the unique ways to encourage cross-border alliances was working with Niagara College and the University of Niagara collaboratively at the conference;* Niagara College graduate students majoring in tourism surveyed the area and found that the biggest impediment to tourism development is government regulations and rules;* the key challenge is to utilize and capitalize on being a cross-border region, and to collaborate with cross-border regions; and* the industry must work on the customer's needs and not its own in order to bring the customer back on repeat visits.Ryan Brain posed the following questions:* Who is custodian of the brand \"Niagara\"?* How can Niagara define what it has that is better than other destinations?* How should the Niagara region provide the essential quality customer service?Other participants then engaged in an active debate and identified the following challenges and key issues:* A key challenge is to eliminate the border between the USA and Canada.* Limited dollars are invested in destination marketing.* Different groups talk about the same issues without action.* The region does not appreciate the assets it has.* The region needs to create a value proposition for the long-term.* The gaming industry encounters problems with jurisdiction and regulations.* No current tourism research is available.* Brand perception (international tourists) is too narrow.* The region has a good overall product, but lacks consumer awareness.* Large hotel and tourism operators do not collaborate or promote each other.* Each community in the region is doing its own thing.* There is too much overlapping work.* The labour force does not have high paying jobs.* A key challenge is to get industry leaders to commit to making things work, year after year.* Qualified tourism planners are lacking in the region.* Research is outdated and tourism planning is often ineffective.* Inconsistencies and discrepancies need to be identified.After a break the panellists continued the discussion, now looking for practical solutions to the problems identified above:* Education is important, leadership in the industry is critical.* Service quality levels need to be higher.* Should
the region create a hospitality charter for all to follow?* Education needs to include applied research to encourage the desire to inquire.* Improve the coordination of tourism in the region.* Need to work together and improve binational issues.* Niagara should be developed as a year-round tourist destination.* Need to share information.* Building of a new convention centre is fundamental to generating extra tourism revenue.* Need to offer different and authentic experiences and souvenirs.* Define the experience and what Niagara has to offer the tourists.* Need consistency of service execution.* Quality service should be addressed at the college level.* Increase the minimum wage to keep good staff employed.* Need management training programs and continued training.* Need to attract and entice tourists to stay longer.* Destinations have to monitor the implementation of the region's tourism master plan.* The industry should clearly identify what needs to be researched.* Government and industry need to work together.* Need to have more students stay in the area after graduation.* Create tourism as a first-rate career.* Create a positive image for the region.* Set objectives to enhance quality service.* Mechanism for collaboration between public and private sectors.* Industry-driven tourism analysis of the Niagara region has to be done.* Need to create reasons such as unique festivals and events (e.g. the ice wine festival).Outcomes of the roundtable\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to provide a relevant backdrop for the Worldwide Hospitality snd Tourism Themes (WHATT) theme issue on tourism challenges and solution in the Niagara region, and to present the key points discussed during the 2007 WHATT roundtable discussion in the Niagara region, Canada.\nDesign/methodology/approach: - The approach of this paper is more a narrative one. It also draws data from a series of web sites to analyse the past and present performance of tourism.\nFindings: - This paper provides a historic perspective of the Niagara region and presented in the context of tourism in the world, Americas, Canada, and Ontario. Then it travels back to explain the origins of WHATT and its scholarly journey over the years. In capturing the essence of the 2007 WHATT roundtable discussion in Niagara, the paper provides a strong foundation for the other nine articles, which follow in this WHATT theme issue.\nOriginality/value: - In a world of theories, this paper provides fresh perspectives on many relevant ideas by using original expert views. Readers who are interested in the Niagara region would benefit from this paper.\n",
"role": "assistant"
}
] |
Antecedents of shared leadership: empowering leadership and interdependence
|
[
"Interdependence",
"Team performance",
"Empowering leadership",
"Shared leadership"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Increasing globalization and competition, along with the ubiquitous pursuit of innovation, drive organizations to organize more effectively, in order to maintain a competitive advantage in an ever-changing market (Drucker, 1995; Goodwin et al., 2009). Concomitantly, organizations face the challenges of meeting the needs and seizing the advantages of a more educated workforce with a wider range of knowledge to offer (Pearce and Manz, 2005). As a result, the prevalence and use of teams has increased along with a focus on factors that enable team performance (Burke et al., 2011; Morgeson et al., 2010). Leadership is one crucial contributor to team performance (Zaccaro et al., 2001); however, in order to leverage the advantages of teamwork and to capture emergent leadership dynamics in the team (Barry, 1991), it is necessary to question traditional leadership models that place the formal hierarchical leader as the primary center of attention (Gronn, 2002; Morgeson et al., 2010).\n2. Shared leadership and team performance: Shared leadership is defined as a \"simultaneous, ongoing, mutual influence process within a team that is characterized by 'serial emergence' of official as well as unofficial leaders\" (Pearce, 2004, p. 48). Defined in this way, shared leadership emerges in the team processes and involves distribution and rotation of leadership to the team member with most relevant knowledge and skills in any given situation (Carson et al., 2007; Conger and Pearce, 2003; Perry et al., 1999). Accordingly, vertical influence from a single team leader is substituted for or complemented by continuous interactions of reciprocal and horizontal influence processes among the members of a team. Therefore, instead of mostly relying on the skills of one leader, shared leadership allows for the enactment and effective use of skills and knowledge across multiple team members (Carson et al., 2007). Based on these theoretical advantages, shared leadership should predict higher levels of team performance, particularly in knowledge work areas that require complex decision making, creativity, and flexibility. This argument is supported by some initial research on shared leadership that demonstrates shared leadership's positive influences on attitudes, behavior, cognitions, and performance (see Wassenaar and Pearce, 2012, for a review). However, the relationship between shared leadership and team performance has primarily been studied in the USA (Pearce, 2008). Conversely, the present study of 81 teams is conducted in a Danish context. Hoch et al. (2010b) demonstrated a positive relationship between shared leadership and team performance in a German context, and Ishikawa (2012) found positive relationships between shared leadership and R&D performance in a Japanese context. Denmark is characterized by a much lower power distance than Germany, the USA, and Japan (index 18 compared to 35, 40, and 54, respectively; Hofstede, 1980), which according to Muethel and Hoegl (2010) predicts a higher level of shared leadership. Moreover, Denmark is known for its traditions of participation, which involve employee involvement and influence sharing. Since shared leadership is predicated on the notion of power sharing, Denmark makes for a relevant and interesting country to study the effects of shared leadership. Consequently, we propose the following hypothesis:\n3. 
Antecedents of shared leadership: Since shared leadership is defined as an emergent team process (e.g. Carson et al., 2007; Pearce et al., 2004), we maintain that certain conditions must be met in order for it to develop. We examine two dimensions of teams' conditions: teams' relation to the external managerial system, and their internal task and goal organization. By including these two contextual dimensions, we argue that we tap into two different, yet important, conditions for teamwork. Following this, we investigate vertical empowering leadership - in terms of an external empowering team leader - and task and goal interdependence.\n4. Methods: 4.1. Sample and procedure\n5. Results: Means, standard deviations, and intercorrelations of key variables are presented in Table I. A missing data analysis showed a minimum covariance coverage of 0.99. That is, missing data were less than or equal to 1 percent.\n6. Discussion: The present study contributes to our understanding of factors associated with the development of shared leadership in teams. Moreover, it extends previous research findings by pointing toward a comprehensive model and understanding of antecedent conditions as well as performance effects of shared leadership. In sum, our results support our proposed model and demonstrate that vertical empowering leadership and interdependence are positively associated with the development of shared leadership in teams, which, in turn, is positively related to team performance.\n7. Conclusions: The issue of team leadership and in particular of shared leadership has been addressed increasingly in recent years, and questions have been raised regarding how to develop shared leadership successfully. The present paper has empirically tested and found positive relationships among vertical empowering leadership, interdependence, shared leadership, and team performance. As expected, both an empowering team leader and interdependence were positively related to the development of shared leadership, which, in turn, was positively related to team performance. The present study provides a new context for shared leadership studies, and it confirms the theoretical notions of an external empowering team leader as well as organizational conditions that encourage interdependence in the team, in order to facilitate the emergence of effective shared leadership.\n",
"role": "user"
},
{
"content": "Purpose: Shared leadership describes leadership as a collective and reciprocal activity distributed among the members of a team (Carson et al., 2007). The purpose of this paper is to investigate variables assumed to be antecedents for this leadership approach. In particular, the authors examine the importance of external empowering leadership and task and goal interdependence for shared leadership as well as the relationship between shared leadership and team performance.\nDesign/methodology/approach: In order to test the hypotheses, the authors applied structural equation modeling using a field sample of 81 knowledge and manufacturing teams from a Danish company.\nFindings: Results indicated that an external empowering team leader and interdependence in the team significantly predicted the extent of shared leadership, which, in turn, was positively related to team leader ratings of team performance.\nResearch limitations/implications: Overall, the study supports previous findings that the act of sharing leadership in a team may contribute to increased team performance. In addition, the study provides an initial understanding of antecedent conditions for the successful development of shared leadership. However, as the study was cross-sectional and conducted within a single organization, care must be taken in making causal claims or in generalizing the results without additional evidence.\nOriginality/value: Few studies focus on the antecedents of sharing leadership. The authors obtained evidence, which suggests that the development of shared leadership may depend on the presence of an empowering team leader as well as task and goal interdependence in the team.\n",
"role": "assistant"
}
] |
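The structural model described in the entry above (empowering leadership and interdependence as antecedents of shared leadership, which in turn predicts team performance) is straightforward to express in SEM software. Below is a minimal sketch, not the authors' code: it uses the Python semopy package, and the indicator names, the three-indicator measurement structure, and the CSV file are all illustrative assumptions.

```python
# Hypothetical sketch of the shared-leadership structural model in semopy.
# Indicator names (el1..tp3) and the input file are assumptions, not the study's data.
import pandas as pd
import semopy

MODEL_DESC = """
EmpoweringLeadership =~ el1 + el2 + el3
Interdependence =~ id1 + id2 + id3
SharedLeadership =~ sl1 + sl2 + sl3
TeamPerformance =~ tp1 + tp2 + tp3
SharedLeadership ~ EmpoweringLeadership + Interdependence
TeamPerformance ~ SharedLeadership
"""

teams = pd.read_csv("team_survey.csv")  # one row per team (n = 81 in the study)
model = semopy.Model(MODEL_DESC)
model.fit(teams)

print(model.inspect())           # path estimates, standard errors, p-values
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```

With only 81 teams, a model of this size strains the usual sample-size heuristics for SEM, which is one reason the abstract's caution about causal claims matters.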
Exploring relationship between value perception and luxury purchase intention: A case of Indian millennials
|
[
"India",
"Structural equation modelling",
"Purchase intention",
"Luxury value perception",
"Millennnials"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Worldwide personal luxury goods market is estimated to grow by 6-8 percent to reach around EUR280bn in 2018 (Bain & Company report, 2018). Millennials are the key growth engine of the luxury market (Giovannini et al., 2015). In total, 85 percent of the luxury growth in 2017 is propelled by millennials (Bain & Company, 2017). As per Bain & Company, by 2025, millennials will constitute 40 percent of global personal luxury goods market (Solomon, 2017). According to the previous research, millennial consumers are more likely to spend on luxury products as compared to previous generations (Higgins and Wolf, 2016). They are rapidly becoming the world's most significant generational cohort. Given their importance in providing global business opportunities, understanding and attracting millennials has become focal point for many businesses. According to Pew Research Center, the Millennials, also referred as \"Generation Y\" (Gen Y) or \"young consumers,\" are those born between 1981 and 1997. Millennials accounts for around 30 percent of the world's population and 58 percent of global millennials reside in Asia with India leading in the world, accounting for about 20 percent of the global generational cohort (AT Kearney Report, 2016).\nLiterature review: Conceptualizing luxury\nConceptual framework: Figure 1 presents an integrative conceptual framework for the study.\nMethodology and measurement: Data collection and the sample\nResults: This study used SEM to test proposed hypotheses. The two-stage model building process for applying SEM was followed as suggested by Hair et al. (1998) and Joreskog and Sorbom (1996). Confirmatory factor analysis was performed followed by multi-group analysis and testing of structural models using AMOS (version 21). SEM is used in this study as \"Structural equation modeling is a statistical methodology that takes a confirmatory (i.e. hypothesis-testing) approach to the analysis of a structural theory bearing on some phenomenon\" (Byrne, 2001, p. 3).\nDiscussion and findings: Based on the conceptual work of Vigneron and Johnson (1999), this study tries to present and test the theoretical framework with survey data of young Indian luxury fashion consumers. This study is one of the first studies that focuses on luxury fashion consumption of millennial consumers in Indian market setting. The findings confirm that this framework is valuable in understanding luxury value perceptions in an emerging market context under collectivist culture.\nConclusion: To summarize, the findings of the study suggest that the conspicuous value is the key predictor of luxury purchase intention among young Indian luxury consumers followed by experiential value, susceptibility to normative influence and utilitarian value. However, the uniqueness value is found to have weak relationship with purchase intention. In addition, results revealed that relationship between luxury values and purchase intentions do not vary significantly between male and female.\nImplications: As India is abode to the largest millennial population in the world (AT Kearney Report, 2016), the findings of this study have important implications for both marketers and researchers. First of all, the conceptual framework presented in this research backed by empirical evidence can help the practitioners understand various factors affecting luxury purchase intention in context of emerging markets like India. 
This study will help marketers understand the unique consumer behavior of the millennial population in India. The results obtained from this study can be used by international players to establish and expand their presence in Indian markets. It will enable international luxury fashion marketers to adapt marketing strategies and design their marketing programs based upon an understanding of Indian consumer behavior (Busnaina, 2014). Marketers can classify the target audience into two groups, viz. those influenced by interpersonal factors and others who are affected by personal factors. Varied communication strategies can be devised for the different target groups.\nLimitations and scope for future research: Findings of this research must be assessed in light of some limitations that provide direction for future research. First, in this study, the purposive sampling method was used, which has its own limitations; therefore, in the future the probability sampling method may be used to obtain better generalizability of the data. Also, this research was conducted in New Delhi; other cities like Mumbai and Bengaluru could also be explored to better understand young luxury consumers' mindset in India. Likewise, this study provides India-specific research; in the future a cross-cultural study could be performed to explore the role of culture in the luxury buying behavior of millennials. Furthermore, the present research only focuses on millennials; therefore, in the future, a cross-generational study may be conducted to gain a better understanding of the similarities and differences that exist across generational cohorts, which may help luxury companies in creating more targeted marketing strategies.\n",
"role": "user"
},
{
"content": "Purpose: Although India is home to the world's largest millennial population, so far, hardly any studies exist that explain the key drivers leading to the luxury goods consumption among this generational cohort. Therefore, the purpose of this paper is twofold: first, to develop and empirically test the conceptual framework designed to measure the relationship between luxury value perceptions and purchase intentions among the young Indian luxury consumers, and, second, to examine the moderating effect of gender in the relationship between luxury value perceptions and purchase intentions.\nDesign/methodology/approach: A purposive sampling technique was employed to collect the data from young luxury fashion consumers. Statistical tests including confirmatory factor analysis, multi-group analysis and structural equation modeling were applied for data analysis.\nFindings: The findings show that the conspicuous value is the most significant determinant of luxury purchase intention followed by the experiential value, susceptibility to normative influence and utilitarian value. The uniqueness value was found to have weak relationship with purchase intention. Furthermore, results revealed that the relationship between the luxury values and the luxury buying intentions does not vary significantly between male and female.\nOriginality/value: This is one of the first studies that provides meaningful insights to the academicians and marketing practitioners about why millennials buy luxury fashion brands in emerging markets like India.\n",
"role": "assistant"
}
] |
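The two-stage SEM procedure described in the luxury entry (a confirmatory factor analysis first, then the structural model, with gender examined via multi-group analysis) can be sketched as follows. This is a hedged illustration under assumed variable names, using semopy rather than the AMOS software the paper reports; the per-group refit is a simple stand-in for a formal multi-group comparison.

```python
# Illustrative two-stage SEM: CFA first, then structural paths, then per-gender fits.
# Construct/indicator names and the CSV file are assumptions for the sketch.
import pandas as pd
import semopy

CFA_DESC = """
ConspicuousValue =~ cv1 + cv2 + cv3
ExperientialValue =~ ev1 + ev2 + ev3
UtilitarianValue =~ uv1 + uv2 + uv3
PurchaseIntention =~ pi1 + pi2 + pi3
"""

STRUCTURAL_DESC = CFA_DESC + """
PurchaseIntention ~ ConspicuousValue + ExperientialValue + UtilitarianValue
"""

df = pd.read_csv("millennial_survey.csv")

# Stage 1: check the measurement model before estimating structural paths
cfa = semopy.Model(CFA_DESC)
cfa.fit(df)
print(semopy.calc_stats(cfa))

# Stage 2: full structural model, refit per gender as an informal moderation check
for gender, group in df.groupby("gender"):
    sem = semopy.Model(STRUCTURAL_DESC)
    sem.fit(group)
    print(gender)
    print(sem.inspect())
```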
The moderating role of price consciousness on the effectiveness of price discounts and premium promotions
|
[
"Discounts",
"Premium pricing",
"Promotional methods",
"Consumer behaviour"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In recent years the increasing relevance of consumer promotions is exemplified by the increased budget allocated to marketing communications by many firms. According to the 2006 PROMO industry trends reports, the percentage of executives polled using promotions as a part of an overall marketing strategy has grown from 55 percent in 2004 to 64 percent in 2005. Although price discounts are the most traditional form of sales promotion, premiums are also becoming popular. The premium provided by a company as a promotional stimulus is simply a product or a service offered for free or at a relatively low price in return for the purchase of one or many products or services (d'Astous and Jacob, 2002). In light of this, managers may be interested in knowing what strategy is the best when deciding how to promote their products.To analyze the effectiveness of a promotion, most of the present research has focused on the benefit provided (Raghubir, 2004; Hardesty and Bearden, 2003; Nunes and Park, 2003; Alford and Biswas, 2002; Inman et al., 1997), and the promotional framework used (Palazon-Vidal and Delgado-Ballester, 2005; Nunes and Park, 2003). However, lacking in this research are insights into how consumer personal characteristics may affect that effectiveness (Alford and Biswas, 2002). In this sense, consumer heterogeneity deserves special consideration because different consumer segments are likely to vary in their responsiveness to sales promotions and would determine the most effective form of policy for reaching target consumer segments (Kukar-Kinney et al., 2007).The goal of the current research is to fill this gap in knowledge about consumer responses to sales promotions incorporating price consciousness in the analysis. Although consumers' level of price consciousness has been extensively discussed, there is limited research to date examining its moderating effect on the effectiveness of promotional actions (Choi and Kim, 2007; Kukar-Kinney et al., 2007). To accomplish this objective, we will analyze the responses of more and less price conscious consumers to price discounts and premium promotions across different benefit levels.\nConceptual framework and hypotheses: Price consciousness is defined as \"the degree to which the consumer focuses exclusively on paying a low price\" (Lichtenstein et al., 1993). Therefore, price conscious consumers are concerned about searching for a low price in the marketplace, and they derive emotional value and entertainment from shopping for lower prices (Alford and Biswas, 2002). It is reflected in their shopping basket, since they tend to be store brand users (Ailawadi et al., 2001). Given that price consciousness has similarities with other consumer personal characteristics related to price perception (e.g., value consciousness, sales proneness, coupon proneness) such as the focus on paying lower price, these characteristics have many similar implications for marketplace responses. However, consumers who are value conscious, sales prone or coupon prone have different perceptions of prices. Of special interest are the differences between value consciousness and price consciousness because sometimes both concepts have not been properly differentiated (Dutta and Biswas, 2005). Value consciousness is conceptualized as reflecting a concern for price relative to quality received (Lichtenstein et al., 1990, 1993). 
However, price consciousness is a narrower concept where the concern about the quality obtained is not reflected. Given that there are characteristics not shared by these two constructs, there are some different implications for the evaluation of a promotional action (e.g., perceived value, buying intention).Sinha and Batra (1999) consider price consciousness as an attitude-like enduring predisposition that varies in intensity across individuals: some individuals are simply more conscious of the prices they pay than others. Therefore, different consumer segments can be distinguished based on their price consciousness (e.g., high vs low). Less price conscious consumers are not very involved with the price aspect of the purchase (Lichtenstein et al., 1988) and wish to engage in little price search (Lichtenstein et al., 1993). Due to their low involvement with prices, they are likely to perceive a discount as a cue for an important reduction in the price. For consumers with lower levels of price consciousness, perceived value and buying intentions are more likely to be influenced by the magnitude of the discount (Alford and Biswas, 2002) than by the monetary value of the discount offered. Using an information-processing framework, Kukar-Kinney et al. (2007) suggest that low price conscious consumers will not be motivated to process information about price extensively.In contrast, high price conscious consumers, given their focus on prices (Lichtenstein et al., 1988), are cognitively very involved with price and consequently process any price related information more deeply. Based on this approach, the findings obtained by Manning et al. (2003) show that consumers who are relatively vigilant in paying low prices would be motivated to process unit pricing information, and thus be more likely to develop unit price usage knowledge. They spend more time thinking about prices (Kukar-Kinney et al., 2007), and therefore about the convenience of a price discount. They evaluate the monetary saving offered by the promotion and not merely the promotional claim. This does not mean that they conduct an absolute processing of the price discount information, because it is probable that they conduct a relativistic processing, evaluating the monetary saving relative to the price paid. As a consequence, they are engaged in higher levels of search and are more devoted to the task of shopping to the extent that a better price exists elsewhere (Babin et al., 2007).The above-mentioned considerations draw our attention to the fact that the differences in the information processing conducted by consumers determine the effectiveness of price discounts and premium promotions. But the information processing also depends on the promotional benefit level offered by the promotion. Following Hardesty and Bearden (2003) and Grewal et al. (1996), when the promotional benefit is low, consumers are unlikely to process information extensively since the promotion (e.g. price discount) has little monetary value. Similarly, when the promotional benefit is high, consumers are predicted to be unlikely to process information extensively since there is less uncertainty about the merits of the deal. However, consumers are expected to process information more elaborately at a moderate benefit level since there is greater uncertainty regarding the deal.
Thus, at low and high promotional benefit levels information processing is low, while at moderate benefit levels information processing is high (Hardesty and Bearden, 2003; Grewal et al., 1996), which implies that consumer information processing follows an inverted-U function. Therefore, we are interested in analyzing the moderating effect at two benefit levels: one where the information processing is high, and another where it is low. Taking into account that low price discounts will not be so interesting to study from a price consciousness approach, in this study we focus on moderate and high benefit levels.Moderating effect at moderate promotional benefit level\nMethodology: To test the hypotheses we used a 2 (promotional benefit level: moderate, high) x 2 (promotion type: price discount, premium) between-subjects experimental design. The data were obtained from a controlled experiment involving undergraduate students (see Table I).Pretests to the treatments design\nResults: Manipulation checks of the promotional benefit levels were conducted separately from the main study to avoid the possibility that the subjects' own responses to the dependent variables might bias their reaction to the subsequent manipulation measurements (Kidd, 1976). For that purpose, a total of 114 individuals participated in a pilot study to verify that manipulations of the benefit levels were as intended. The one-item promotional benefit level manipulation check derived from the work of Hardesty and Bearden (2003) was included for both conditions, price discounts and premiums. An ANOVA indicated that the price discounts manipulation check varied across promotional benefit level (F(1,58)=17.046, p<0.001; Xmoderate=4.11, Xhigh=5.50). Similarly, an ANOVA indicated that the premium manipulation varied across benefit levels (F(1,54)=13.184, p<0.002; Xmoderate=3.63, Xhigh=4.64).To test the hypotheses, data were pooled over the two product categories (pizza and soft drinks). An ANOVA was conducted at every promotional benefit level (moderate and high) for each dependent variable (perceived value and buying intention), focusing on the interaction between promotion type and price consciousness.H1 and H2 predicted that price consciousness would moderate the effectiveness of price discounts and premium promotions at the moderate benefit level. Specifically, H1 posits that price discounts generate a higher perceived value than premiums for low price conscious consumers in comparison to high price conscious consumers. The ANOVA reveals a significant interaction effect between the type of promotion and price consciousness (F(1,111)=6.872; p=0.01). Consistent with the prediction of H1, there are significant differences in the groups' responses, showing that price discounts generate a higher perceived value than premiums for low price conscious consumers (see Figure 1). The results also show that the main effect of the type of promotion was not significant. This finding supports previous knowledge about similar evaluations of different promotional tools at this benefit level. Regarding H2, the ANOVA shows that the interaction effect is not significant (F(1,111)=2.195; p=0.141), although the pattern of response is similar to H1 (see Figure 1). Therefore, we cannot support H2.
Again, the main effect of the type of promotion was not significant, with buying intention being similar for both types of promotion.When the benefit offered is high, H3 posits that high price conscious individuals perceive higher value in price discounts than in premiums, in comparison to individuals who are low in price consciousness. The interaction effect was significant (F(1,116)=4.287; p=0.041). Consistent with the predictions of H3, the results reveal a significant difference in the groups' responses, showing that price discounts are more valued than premiums by high price conscious consumers (see Figure 1). The results obtained show that the main effect of the type of promotion is significant, with price discounts being more highly valued than premiums (Xdiscount=5.14, Xpremium=4.40; F(1,116)=13.513; p<0.001). Regarding H4, the interaction effect was significant (F(1,116)=4.058; p=0.046) and the hypothesis is supported. As predicted, price discounts generate a higher buying intention than premiums for high price conscious consumers (Xdiscount=4.99, Xpremium=4.30; F(1,116)=9.445; p=0.003). In sum, Figure 1 depicts the pattern of responses across the different treatment conditions.\nDiscussions: The objective of this paper is to examine the moderating effect of price consciousness on the effectiveness of price discounts and premiums at different promotional benefit levels. To accomplish this objective, the responses of more and less price conscious consumers to price discounts and premiums across moderate and high benefit levels have been examined, because few publications to date have analyzed this moderating effect. On the whole, the results largely support our main predictions.In particular, the results reported by this research indicate that price consciousness is a key consumer trait because it moderates the effectiveness of price discounts and premiums at moderate and high benefit levels. The results obtained at the moderate benefit level show that price discounts and premiums are equally effective for high price conscious consumers, which is consistent with the rationale of the inverted-U function proposed by Grewal et al. (1996). At this promotional benefit level consumers are expected to process promotional information more elaborately, being more conscious about the benefit offered by the price discount and the premium. Thus, price discounts and premiums are evaluated similarly. However, price discounts are more effective than premiums for low price conscious consumers. This result corroborates the spontaneous information processing conducted by low price conscious consumers, as Alford and Biswas (2002) posit.This research also finds that price discounts are more effective than premiums at the high promotional benefit level. This is consistent with the arguments of the price acceptability function (Ofir, 2004) and the Silver Lining Principle (Thaler, 1985). But we also provide evidence of the more pronounced positive effect of price discounts for high price conscious consumers. In particular, the latter finding about the strong effectiveness of high price discounts for high price conscious consumers shows how much benefit they derive from price savings at this level. These results have important implications for both scholarship and practice.Implications for scholarship and limitations\n",
"role": "user"
},
{
"content": "Purpose: - This study aims to integrate price consciousness into the promotional effectiveness framework. Specifically, it aims to analyse whether price consciousness affects the evaluation of price discounts and premiums at two different benefit levels (moderate vs high).\nDesign/methodology/approach: - An experiment was conducted with two promotional benefit level (moderate, high) x two promotion type (price discount, premium) between subjects design. A sample of 229 undergraduate students was randomly assigned to a specific product-promotion combination.\nFindings: - The results obtained indicate that at moderate benefit level, price discounts and premiums are equally effective for high price conscious consumers. However, price discounts are more effective than premiums for low price conscious consumers. At high benefit level price discounts are more effective than premiums, but this effect is more apparent for high price conscious consumers.\nResearch limitations/implications: - The limitations of the study are those typically applied to the experimental methodology. Specifically, only two product categories and two types of sales promotions were used which limits the generalization of the results. Another potential limitation is the use of students' respondents.\nPractical implications: - It is recommended that managers should know how price conscious their consumer segment is before taking any decisions regarding the promotional strategy. To be more effective, it is recommended to offer premiums instead of moderate price discounts if the target segment is high price conscious. However, such a recommendation should only be followed when the target is not low price conscious, because for this consumer segment a moderate discount is preferred.\nOriginality/value: - To analyze the effectiveness of a promotion, most of the present research has focused on the benefit provided, and the promotional framework used. However, lacking in this research are insights into how consumer personal characteristics may affect that effectiveness. The current research is to fill this gap in knowledge about consumer responses to sales promotions incorporating price consciousness in the analysis.\n",
"role": "assistant"
}
] |
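The analysis reported in the price-promotion entry, an ANOVA on each dependent variable with the promotion type x price consciousness interaction, run separately at each benefit level, maps directly onto a factorial linear model. The snippet below is a small sketch with invented column names, using statsmodels; it is not the authors' analysis script.

```python
# Hypothetical 2 x 2 interaction test per benefit level with statsmodels.
# Column names (benefit_level, promo_type, price_conscious, perceived_value)
# are invented for the illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("promotion_experiment.csv")

for level, subset in df.groupby("benefit_level"):  # e.g. "moderate" and "high"
    model = ols(
        "perceived_value ~ C(promo_type) * C(price_conscious)", data=subset
    ).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
    print(level)
    # The C(promo_type):C(price_conscious) row carries the moderation test
    print(anova_table)
```

Rerunning the same loop with buying intention as the dependent variable would reproduce the structure of the H2/H4 tests.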
The impact of supervisor humor on newcomer adjustment: The mediating role of relational identification
|
[
"Mediation",
"Newcomer adjustment",
"Relational identification",
"Supervisor humor"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The adjustment of newcomers to their new workplace has been of utmost importance for themselves, peers and organizations (Cooper-Thomas et al., 2012). During the period of early entry, employees mould their perceptions regarding what their new context is like and whether they will fit in (Kammeyer-Mueller and Wanberg, 2003). In fact, empirical studies have demonstrated that these premature perceptions and attitudes are related to subsequent attitudes and behaviors (Adkins, 1995; Morrison, 1993; Wanous, 1992). Combined with the contemporary high mobility of the workforce, organizations have come to realize the salient role of the \"on-boarding\" process in their overall success. As such, organizations attempt to effectively adjust newcomers by employing socialization tactics. Effective socialization may ameliorate both person-organization and person-job fit (Cohen and Veled-Hecht, 2010).\n2. Theory and hypotheses: 2.1. Supervisor positive humor\n3. Method: 3.1. Sample and procedure\n4. Results: 4.1. Descriptive statistics\n5. Discussion: Researchers have long been interested in newcomers' adjustment processes. As a consequence, there has been substantial empirical research with regard to its antecedents (e.g. Bauer et al., 2007; Kammeyer-Mueller and Wanberg, 2003). Subsequent to this, several studies have addressed the vital role of humor in newcomers' socialization and adjustment (Heiss and Carmack, 2012; Mak et al., 2012; Tracy et al., 2006). To date, however, there is insufficient empirical research that examines the use of positive humor by the supervisor in relation to the newcomers' adjustment. Thus, we examined the relationship between supervisor positive humor and newcomers' adjustment through the mediating role of relational identification with the supervisor. Congruent with the extant literature, our findings corroborated prior studies proposing humor as an important contributor to the effective adjustment of newcomers.\n6. Conclusion: In the present paper, we attempted to illustrate the role of supervisor humor in ameliorating newcomers' adjustment. Toward this end, we encompassed relational identification with the supervisor as an explanatory mechanism for the effects of supervisor positive humor on newcomers' adjustment. The results supported our hypotheses demonstrating that the use of positive humor by the supervisor positively influences newcomers' adjustment through the mediation of relational identification with the supervisor.\n",
"role": "user"
},
{
"content": "Purpose: Substantial research has examined the pivotal role of supervisor positive humor in generating employee outcomes. To date, though, little is known about the relationship between supervisor humor and newcomers' adjustment. The purpose of this paper is to contribute to this gap by examining the effect of supervisor positive humor on newcomers' adjustment. In doing so, the authors highlighted relational identification with the supervisor as a mediating mechanism that explains the aforementioned association.\nDesign/methodology/approach: Data were drawn from 117 newcomers. In order to collect the data the authors used the snowball method. Also, hierarchical regression analysis was conducted.\nFindings: The results demonstrated that supervisor positive humor affects employees' relational identification with the supervisor which, in turn, positively relates to newcomers' adjustment.\nResearch limitations/implications: Data were collected using a cross-sectional design and, therefore, the authors cannot directly assess causality. Moreover, the authors used self-report measures which may strengthen the causal relationships.\nOriginality/value: To the best of the knowledge, this is the first study that illustrates the role of supervisor humor in enhancing both newcomers' relational identification and adjustment.\n",
"role": "assistant"
}
] |
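The mediation logic in the humor entry (supervisor humor -> relational identification -> newcomer adjustment, tested with hierarchical regression) follows the classic three-step pattern. Here is a hedged sketch under invented column names; it illustrates the Baron and Kenny-style steps rather than reproducing the authors' analysis.

```python
# Three-step mediation check via OLS; column names and the CSV are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("newcomer_survey.csv")  # hypothetical file; n = 117 in the paper

# Step 1: total effect of the predictor on the outcome
step1 = smf.ols("adjustment ~ supervisor_humor", data=df).fit()
# Step 2: predictor -> mediator
step2 = smf.ols("relational_id ~ supervisor_humor", data=df).fit()
# Step 3: outcome on predictor and mediator; a reduced humor coefficient
# alongside a significant mediator coefficient is consistent with mediation
step3 = smf.ols("adjustment ~ supervisor_humor + relational_id", data=df).fit()

for name, fitted in [("step1", step1), ("step2", step2), ("step3", step3)]:
    print(name)
    print(fitted.params.round(3))
    print(fitted.pvalues.round(3))
```

A bootstrap test of the indirect effect would be the usual modern complement to these steps.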
Business ethics - Part Two: Making it part of your strategy
|
[
"Leadership",
"Business ethics",
"Governance"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: In Part One of this series of articles (Industrial and Commercial Training, Vol. 40 No. 5) I laid out some recent examples of where there has been a lack of business ethics large enough to hit the headlines. The first part also examined some definitions of terms; what some cynics might say who regard \"business ethics\" as an oxymoron; and finally the case for including ethical business values in the strategic plans of your business. If the first article answered for you the \"why do it\" question, the second part goes on to look at the \"how\" question.\nEthics, strategy and governance: The values of an organisation are a strategic question. There is a clear link between business success and ethical values. Since ethical questions are rarely straightforward, there is an element of understanding and managing the risk when decisions are taken - again something for senior managers to appreciate.Values also touch on governance issues. Enough people in Enron suspected that something was wrong but they did not blow the whistle because the values of the organisation led them to keep their mouths shut. Non executive directors and Trustees have fiduciary responsibilities that go beyond the hard financial results and extend to the softer less tangible aspects of organisational health that are often seen in the behaviours that reflect the values-in-practise, often described as the culture.One solution is to have in place a management system to support the behaviours arising from the preferred values of the organisation. The system has to be working and this section of this article examines these issues. There is a caveat I would make. All of this section depends upon having effective strategic leaders, and this point is so important that I will start with it here.\nStrategic leadership and values: In developing a strategy the SWOT analysis may point to ethical values that can be retained because they are strengths. It will probably also expose weaknesses in the current culture where the status quo will not be appropriate to the longer term vision of the business. Whatever the business objectives that are subsequently developed there will be ethical values and their linking behaviours that will need to be encouraged. See the example about Office Depot which clearly links values to behaviours (Table I).The next example links ethical values to competitive advantage. The Cooperative Bank is an example of a business that has ethical values as one of the main planks of its business strategy - see case study provided by Frank Lees. Notice that the values in this case go well beyond anything that is needed simply to meet legal obligations and are designed to differentiate the bank with a unique appeal to investors and borrowers.A key strategic leadership question arises when introducing a change in strategy and new values into the business. Directors cannot do it all on their own. Without the right people in place, there is little point in launching a strategy involving new values. In the key positions we need people who share our view and values needed for the future. This may mean moving some managers to one side and recruiting replacements.Promotion and other reward decisions are critical for aligning management behaviour with the values of the organisation. These are the points where the belief of the strategic leadership will be clearly seen by all the staff. 
If we are seen to be promoting someone who does not support the values, or rewarding them in some other way, then the credibility of the strategy could be put in jeopardy. Worse, the suppliers, staff and customers who are aware of the promotion are likely to become cynical about any other policy we introduce.The most crucial test is what happens to a director who does not exemplify the values. We would not have heard anything about a junior manager in the World Bank who avoided the procedure and ordered a promotion and excessive salary increase for her partner. But when Paul Wolfowitz did it, he was shown to have double standards about a key value of the organisation, namely resisting corruption. He had to go, or the values would have meant nothing for a very long time and may have stifled the work of the World Bank for years to come.\nThe management system to support the strategy: The acid test of a management system designed to support values in the workplace is the behaviours that become the norms day to day. Simply writing an aspirational statement and putting it on the notice board is not the management system I have in mind. We need to ask ourselves how we get people to understand the values and then apply them to often complex situations so that others, such as customers, notice them.There is a fundamental decision that needs to be taken, and it is highlighted by looking at the Wikipedia web site of a company called Nordstrom in the States, which for many years had as its rule book:Nordstrom Rules: Rule #1: Use your good judgment in all situations. There will be no additional rules.At the other extreme we could have a highly structured approach including, for example, codes of practice, an ethics committee of the board, a confidential hot line, and a whole auditing procedure, to name just four. It is easy to say \"Rules are for children and values are for moral adults\". There needs to be a balance, and one of the determinants will be the business strategy.We may want to give employees more freedom in support of a strategy that encourages creativity in the business. What we don't want is those creative staff wasting time using their imagination to find ways around rules which are too restrictive. On the other hand, an organisation that depends on teams working together may need rules about when all the team need to be present.Size too could be important. A small organisation may have few rules about expenses claims; a larger one will need to be much more formal. The important point is to decide how values and rules will be blended to achieve the organisation's strategic objectives.\nProviding a governance trail: Let's look at some elements of a management system that will be in place, however structured your approach, and that would provide a trail for non executive directors, Trustees and auditors to follow.A values statement that is discussed and agreed by the Directors and by the CEO is essential. How this document is being supported and publicised throughout the organisation can be followed. An example from a small organisation that illustrates how a values statement is being supported is provided in the HDL case study (see Box 1).If new ethical values are being introduced, or if new staff are being introduced to the existing values, then some form of training will be happening. Competencies defined for jobs will include ethical values and behaviours.
There would be a trail available in the records and in the comments by employees.As real situations occur, to develop knowledge and behaviours there will be a continuing dialogue around the application of the values - the principles being put into practice. The dialogue would probably be cross-functional, since the issues will probably affect more than one department. It could also include representatives from functions not directly affected, to ensure a wider point of view in the discussions. If values are a process, which they should be, then there will be a trail.Some mistakes are inevitable, and they will be treated in the same way as any performance question. Were they wilfully ignoring responsibilities or was it a result of not understanding what to do? There may be evidence that could be followed in the warnings records and in the retraining plans. And if one of the values of the business is the fair and just treatment of staff, then there will be direct evidence here of how managers apply these values to their management style.Are there reports of misconduct in others made by employees and how are they handled? Without such reports, those concerned with the governance of the organisation have reason to be concerned. How they are handled will give evidence of the human values applied by managers.\nManaging ethical corporate behaviour in an international context: The increase in global product sourcing in particular has forced organisations to consider their social responsibilities to suppliers who operate outside of their own culture (see the Co-operative Bank example in Box 2). The Fair Trade movement is a response to the concerns of customers that businesses in the West were exploiting the economic weakness of some agricultural producers in the developing world. Customers have also asked whether it is ethical to buy from retailers in this country who obtain their supplies from countries that use child labour.The biggest challenge in this respect is to organisations that operate across the boundaries of countries with conflicting cultural standards. Some of the common practices of less developed countries are in the areas of bribery, sexual harassment, employee safety, working hours, racial discrimination and environmental protection, which would be illegal and unethical in the West. To meet these challenges, for example, the United States has laws which forbid paying bribes at home and overseas.The management of these issues goes beyond simply complying with the law. It requires a strategic decision about the policy to be adopted and the latitude for discretion by executives working in countries with different ethical standards and traditions. Less time is then spent on whether to take a course of action that might be unethical, and more time can be devoted to deciding how to successfully apply one's values to business decisions.\nSummary: Along with the \"why do it\" question from the first part, this article has addressed the \"how to do it\" question. It would be easy to say that ethical values are more important to some sectors such as pharmaceuticals, banking, health and charities. But I do not support this view. It is not an overstatement to assert that ethical values and social responsibility are critical to the success of both commercial and not-for-profit organisations.
As the case studies illustrate, whatever its size, if any organisation would like to improve its performance then incorporating ethical values in its strategic leadership would pay excellent dividends.To do it means taking a longer-term view. It can be done if you have the time and resources to step back from the relentless quest for quarter-to-quarter results. But that's not always possible. Recognising the pressures of business, and to help you start the process, I list below eight questions that summarise the main messages of these articles.\nQuestions to get started: Go to jail. Go directly to jail. Do not pass Go. Do not collect £100. You may remember this phrase from the Monopoly game. Likewise, as you read the following questions, if you can answer them with confidence, then give yourself a reward - even if it is just a metaphorical pat on the back and not £100.If you cannot answer them positively, then go back directly to the drawing board and get some help from those who share your views. The questions and their answers could be an entry for strategic leaders to discuss how to use ethical values to develop a competitive position, or a thought starter for managers discussing the topic at an away day, for example:1. Do senior managers act as exemplary leaders of our ethical standards? Are the non executive directors interested in the softer aspects of their fiduciary responsibilities?2. Are our ethical values supported by an ethical values statement created by the Directors, consulted with our stakeholders and which is part of our strategy?3. Are the ethical values translated into behaviours for particular roles, explained and trained to staff, and part of the performance review procedure? Can customers and suppliers outside of the organisation say what our values are?4. Are managers and staff rewarded and promoted for living the behaviours associated with the ethical values and, where necessary, sanctioned if they do not?5. Do we regularly discuss complex ethical decisions? Have we dismissed courses of action that might be unethical and devoted our energies to working out how to successfully apply our values to the business decision in question?6. Have we considered the consequences of our decisions on all the stakeholders? Would our stakeholders consider our decision to be fair?7. Do our decisions uphold the values we hold most dearly? Do we act with integrity? If we were questioned about a decision by our family members, would we be proud that it would reflect our ethical values?8. When we are working across cultural boundaries, have we considered how our values and social responsibilities will work?\n",
"role": "user"
},
{
"content": "Purpose: - This second article (Part One in Industrial and Commercial Training, Vol. 40 No. 5) aims to provide an overview for senior managers of how to develop an ethical culture in business. It describes a management system to support a strategy, and a governance trail of activities that may be used by Non Executive Directors and Trustees as they discharge their fiduciary responsibilities.\nDesign/methodology/approach: - The object is achieved through arguing for a systematic approach to developing ethical behaviour based on core values that support the business strategy. The article then goes on to present examples from organisations from different sectors and sizes where ethical values are part of a positive strategy to determine the culture of the business.\nFindings: - There is clear guidance about how to manage ethical values in business. Readers could start with the \"ethical health check list\" of questions. The questions could be a thought starter to managers discussing the topic at an away day for example. Managers could conduct an audit of an existing ethical strategy to identify if their values statement is being translated into behaviours throughout the organisation. They could contrast their activities with the examples in the article.\nOriginality/value: - This article is educational and therefore provides value to senior managers and directors in a concise but comprehensive overview of the topic that is of current concern in the business world.\n",
"role": "assistant"
}
] |
Dividing network externality into the number of peers and users: Focusing on sociability and enjoyment in online games
|
[
"Perceptions",
"Empirical study",
"Structural equation modelling",
"Computer games",
"End user"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In October 2016, Google possessed 90.37 percent of the internet search engine market share, showing a distinguishable monopoly (Statista, 2017). Based on its strong market share, Google's advertisement profits reached $79.38 billion in 2016 (Statista, 2017). It is known that Google retains its dominant position not only by the superiority of its search engine and algorithm (Brin and Page, 2012; Barroso et al., 2003), but also by network externality (Argenton and Prufer, 2012), an increase in users that leads to an increase in the value of products or services (Katz and Shapiro, 1985). As the information and communication technology (ICT) industry develops, the role of network externality is becoming more crucial for securing users.\n2. Literature review: 2.1 Network externality\n3. Research model and hypotheses: Using SDT to study the motivation to play games, Ryan et al. (2006) proposed that the \"PENS\" can estimate the effects of autonomy, competence, and relatedness on enjoyment and future game play by assuming that desire is satisfied through playing games. A desire for autonomy is satisfied by an activity that produces interest or a feeling of individual value (Ryan and Deci, 2000). Competence is about the skill or ability to meet a challenge, so a game with controls that are intuitive or easy to master would produce high competence (Ryan and Deci, 2000). Relatedness is a psychological need in SDT that enhances motivation by creating a feeling of connection with others (Ryan and Deci, 2000). The present study states one hypothesis for the effect of all three of those factors on enjoyment (Przybylski et al., 2010):\n4. Survey and estimation results: 4.1 Survey and data\n5. Discussion: This study finds that competence, autonomy, and relatedness, concepts from SDT, have a significant and positive effect on enjoyment, which supports the previous literature. In addition, flow has a positive effect on attitude, similar to the results of previous works (Ha et al., 2007; Hsu and Lu, 2004), and enjoyment has a positive effect on attitude, similar to previous results (Ha et al., 2007; Shin and Shin, 2011). However, the primary aim in this study is to prove a difference between two network externality measures: the number of peers and the total number of users. Previous research shows that network externality has a positive effect to use social service but the implication of the network externalities can be confusing and blurry because there is no strict boundary between these two network externalities. As a first primary result, there are different effects of the number of peers and the total number of users, as shown in Lin and Lu (2011) and Wei and Lu (2014). The effect of the total number of users is larger in the present study, unlike the number of peers had a bigger effect in Lin and Lu (2011) and Wei and Lu (2014) did not try to study difference.\n",
"role": "user"
},
{
"content": "Purpose: Even though network externality plays an important role in users' motivations to use services or products, the implications of this are not clear because previous studies did not distinguish between the number of peers and the number of total users. Thus, the purpose of this paper is to demonstrate that there is a difference between the two network externalities, i.e., the number of peers and the total number of users. To accomplish this, the perception of quality of life is considered to have an impact on the effects of the two different network externalities.\nDesign/methodology/approach: Data were collected from a survey that covered 508 online game players in South Korea, and the two hypotheses of \"player experience of need satisfaction\" from self-determination theory as well as user gratification theory (UGT) were assessed using structural equation modeling.\nFindings: The results indicate that people consider the total number of users and the number of peers differently. In addition, the effects of the total number of users and the number of peers vary according to respondents' perceived life quality in four dimensions: loneliness, happiness, satisfaction with life (SWL), and escapism. In particular, people's offline tendencies are reflected online in terms of loneliness, whereas online life compensates for a lack of enjoyment offline.\nOriginality/value: The authors verify that UGT can affect the network externality by considering perceived quality of life (loneliness, SWL, happiness, and escapism) as a moderating effect.\n",
"role": "assistant"
}
] |
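The record above reports estimating its hypothesized paths (SDT needs to enjoyment; enjoyment and flow to attitude) with structural equation modeling. Below is a minimal sketch of that estimation step using Python's semopy package on synthetic composite scores. Two hedges: semopy fits covariance-based SEM, whereas the paper used PLS-SEM, and every variable name here is an illustrative stand-in, not one of the authors' survey items.

```python
# Sketch of the structural model described above, under the stated
# assumptions (synthetic composites; CB-SEM via semopy, not PLS-SEM).
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(42)
n = 508  # sample size reported in the abstract

competence = rng.normal(size=n)
autonomy = rng.normal(size=n)
relatedness = rng.normal(size=n)
enjoyment = (0.4 * competence + 0.3 * autonomy + 0.3 * relatedness
             + rng.normal(scale=0.5, size=n))
flow = rng.normal(size=n)
attitude = 0.5 * enjoyment + 0.3 * flow + rng.normal(scale=0.5, size=n)

data = pd.DataFrame({
    "competence": competence, "autonomy": autonomy,
    "relatedness": relatedness, "enjoyment": enjoyment,
    "flow": flow, "attitude": attitude,
})

# Structural part only: SDT needs -> enjoyment; enjoyment, flow -> attitude.
model = Model("""
enjoyment ~ competence + autonomy + relatedness
attitude ~ enjoyment + flow
""")
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```

With multi-item latent constructs, the measurement part would be declared with `=~` lines instead of pre-computed composite scores.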
Manager's report: organizational culture & strategy association
|
[
"Culture",
"Managers",
"Strategy",
"Association"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: As no two businesses are operated the same, there is great debate among academics and managers related to the relationship between culture and strategy. Some researchers and practitioners have indicated that culture trumps strategy (Stothard, et al.; 2013; Katzenbach et al., 2012; Merchant, 2011). In contrast, there are those who suggest that strategy helps the formulation of organizational culture (Jones, 2010). Still other scholars suggest that for an organization to be successful, the alignment of culture and strategy is important (Trevor and Varcoe, 2016; Ahmadi et al., 2012; Zheng et al., 2010; Lytle and Timmerman, 2006). If these factors are not aligned properly, the chance of an organization achieving its objective decreases.\n2. Literature review and hypotheses development: Findings from earlier research indicate the existence of different strategy-making process models (Ocasio and Radoynovska, 2016). These strategy-making process models are classified based on either lying along a continuum of a particular model (Okumus, 2001) or on the applicability of such models within varying organizational contexts. For this study, we use the Hart's (1992) strategy-making process model, which incorporates different types of strategies (command, rational, transactive and generative), because of three reasons:\n3. Research hypotheses based on culture and strategy: 3.1 Bureaucratic culture and strategies\n4. Methodology: 4.1 Procedure for data collection and participants' profile\n5. Results: To properly analyze the relationship between each organizational culture and each of the four strategies under consideration, the data were stratified and analyzed separately. First, the associations between bureaucratic culture and the four types of strategies were analyzed. Second, the associations between supportive culture and the strategies were analyzed. Finally, the associations between innovative organizational culture and the four strategies were analyzed. Finally, the strength of each significant association was examined further. To examine the strength of significant associations, Phi tests/correlations were examined. Results of the cross-tabulations can be found in Tables III-V.\n6. Discussion: The objective of this study is to ascertain the strategies that organizations promote based on their corporate culture. To achieve this objective, this study answers the following key questions:\n7. Implications for managers and scholars: Based on the results, this study provides several inevitable implications for managers and scholars. First, past studies (Zheng et al., 2010; Muafi, 2009; Kaplan and Norton, 2004) have theoretically proven firms' improved productivity and profitability due to proper alignment of culture and strategies. This study empirically validates it with the help of data collected from managers showcasing strategies that organizations should endorse based on their culture. Based on this study, we can suggest that command and rational strategies should not be used with a supportive culture. Additionally, while significant, the associations between the supportive-transactive, innovative-transactive and innovative-generative dyads are very low, indicating a weak association.\n8. Conclusions, limitations, and future research: In conclusion, this study showcases that strategy and corporate culture are apparent for the success of a firm. 
What is most apparent, however, is how strategy and corporate culture develop and what causes the variation of results between similar firms in the industry. Using a managerial sample, this paper empirically examines the association between culture and strategy. Overall, this study provides critical insights for practitioners' understanding of such relationship.\n",
"role": "user"
},
{
"content": "Purpose: Strategy and organizational culture are indispensable for success within a business. Both behavioral scientists and practitioners have shown keen interest in understanding the association between culture and strategy; however, no strong consensus has been formed about this relationship. This paper aims to shed light on this relationship by answering the following questions: Is organizational culture separable from its strategies? Is there an association between organizational culture and strategy?\nDesign/methodology/approach: Using a sample of 496 service managers, this study empirically examines the relationship between culture and strategy. Due to the nature of the data, cross-tabulation research method was used for analysis and to check the association between organizational culture and strategy.\nFindings: Results indicate that successful firms with a bureaucratic and innovative culture may demonstrate any of the four examined strategies (command, rational, transactive and generative). The results also suggest that successful firms with a supportive culture will likely use a transactive or generative strategy. Overall, the results found that all four strategies are associated with each of the three corporate cultures, except for the supportive culture-command strategy and supportive culture-rational strategy dyads.\nOriginality/value: There are diverse views about the organizational culture-strategy relationship; however, no strong consensus has been formed about this relationship. Using managerial data collected from service industry, this study examines the relationship between three organizational cultures, namely, bureaucratic, supportive and innovative, and four different types of strategies, namely, specially command, rational, transactive and generative.\n",
"role": "assistant"
}
] |
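The abstract above describes stratified cross-tabulations with Phi tests to gauge the strength of each culture-strategy association. The sketch below reproduces that analysis pattern on synthetic data with illustrative category labels; for a 2x2 table the phi coefficient is sqrt(chi2 / n).

```python
# One 2x2 cross-tabulation per culture-strategy dyad, mirroring the
# stratified analysis described above. Data are synthetic.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 496  # managerial sample size reported in the abstract
df = pd.DataFrame({
    "culture": rng.choice(["bureaucratic", "supportive", "innovative"], size=n),
    "strategy": rng.choice(["command", "rational", "transactive", "generative"], size=n),
})

for culture in ["bureaucratic", "supportive", "innovative"]:
    for strategy in ["command", "rational", "transactive", "generative"]:
        table = pd.crosstab(df["culture"] == culture, df["strategy"] == strategy)
        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        phi = np.sqrt(chi2 / n)  # strength of a 2x2 association
        print(f"{culture}-{strategy}: chi2={chi2:.2f}, p={p:.3f}, phi={phi:.3f}")
```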
Does interaction between TQM practices and knowledge management processes enhance the innovation performance?
|
[
"Innovation performance",
"Knowledge management processes",
"Partial least square",
"TQM practices"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Due to several factors such as the rapid technological development, change in the customers' preferences and needs, ever-increasing competition, it has become a hard task for organizations to maintain competitive advantages. As a result, organizations seek to adopt and adapt several strategies (e.g. total quality management (TQM), knowledge management, and innovation) whose effectiveness in achieving high rates of performance has been proven (Chen et al., 2009; Liebowitz, 1999). Among those strategies, innovation is considered by several studies as the key source of organizations' competitive advantage (Bigliardi and Dormio, 2009; Rosenbusch et al., 2011; Sandvik and Sandvik, 2003). Furthermore, the ability to leverage innovation constitutes a main engine and driver of economic growth (EPU, 2010; Rosenberg, 2006; Torun and Cicekci, 2007). According to Torun and Cicekei (2007), modern economies are built with ideas that are translated into creative outputs. Therefore, enhancing innovation performance has become an unavoidable choice for organizations in such a competitive environment.\nLiterature review: TQM practices and knowledge management processes\nResearch methodology: Data collection\nData analysis: The most popular statistical techniques under structural equation model (SEM) are covariance-based approach (CB-SEM) and variance-based partial least squares technique (PLS-SEM) (Jr et al., 2014). However, PLS-SEM has lately gained wide attention in many disciplines like marketing (Hair, Sarstedt, Ringle, and Mena, 2012), strategic management (Hair, Sarstedt, Pieper, and Ringle, 2012), management information systems (Ringle et al., 2012), and other areas. The ability of PLS-SEM to handle problematic modeling issues that usually happen in the social sciences such as unusual data characteristics (e.g. non-normal data) and highly complex models is the substantial reason behind increasing the usage of this approach.\nDiscussion: Based on a comprehensive review of the relevant literature, the present study proposed a structural model of relationships among a number of the distinct domains in the business world nowadays, i.e., TQM practices, knowledge management processes, and innovation performance. Essentially, this study attempted to investigate how much TQM practices are effective in enhancing knowledge management processes, and to clarify the role of knowledge management processes in improving innovation performance. The perceptions of the managers of the manufacturing companies in Malaysia were the source of the information for testing the hypotheses and the model involved in the study. The results of the hypotheses testing showed that a well-established TQM (as one set of practices) within the organization leads to a better performance of knowledge management processes. This result is compatible with several previous studies that have examined this relationship between TQM practices and knowledge management (e.g. Ooi, 2009; Skyrme and Amidon, 1997; Wong, 2005; Zwain, Lim and Othman, 2011). In addition, it could be argued that the basis of this positive relationship relies on the infrastructure provided by TQM practices, which (i.e. the infrastructure) represents an antecedence of knowledge management processes. Furthermore, TQM concept, as a set of practices, ultimately aims to satisfy customers through continuous improvement processes (Hackman and Wageman, 1995; Kumar et al., 2011). 
To achieve this goal, several processes and practices should be performed. Applying these processes and practices helps to make the organization more open and closer to its environment (Chourides et al., 2003; Skyrme and Amidon, 1997). This, in turn, drives the process of knowledge management.\nImplications of study: From a theoretical perspective, the results of this study provide empirical evidence of the positive role of TQM practices in enhancing knowledge management processes. Additionally, knowledge management processes were found to be influential in boosting the innovation performance of the manufacturing companies. On the other hand, from a managerial perspective, the outcomes of this study provide the decision makers in manufacturing companies with a better understanding of how TQM practices can go beyond enhancing the quality of the manufacturing processes or quality improvement by facilitating knowledge management processes too. Obviously, the findings of this study indicate that in order to effectively enhance the processes of knowledge management, TQM practices must be taken as one set of practices that complement each other, starting from top management commitment to the detailed processes for reporting the necessary data among the divisions. Moreover, the decision makers should promote the application of TQM practices at the strategic level of the manufacturing company, which means giving more attention to soft practices of TQM rather than hard practices. To do so, the managers should emphasize the role of HRM and all behavioral aspects to improve the employees' skills, management leadership, teamwork, organizational culture, supplier relationship and management, and create value for the customers. Such practices by management will ensure the flow and updating of cutting-edge knowledge throughout the organization. Moreover, it is highly recommended that managers allocate extra effort and financial support to enhance PEM and QDR practices to reinforce and establish effective knowledge management processes. Therefore, further studies are needed to clarify the best ways and strategies in this regard.\nConclusion: Innovation performance seems to be one of the critical indicators of the success of organizations, through which companies can ensure their sustainability in the market. In this regard, gaining and managing knowledge have been recognized as one of the essential requirements to reinforce innovation performance. Therefore, the main purpose of this study is to empirically investigate the effect of applying TQM practices as a strategic option on knowledge management processes, which, in turn, leads to desired innovation performance. The findings of this study emphasize two main issues. The first issue is the influence of TQM practices on knowledge management processes. More specifically, the outcomes of this paper suggest that soft practices of TQM are more essential to knowledge management processes. However, this result does not mean that hard practices of TQM are not necessary. TQM practices should be applied as one set of practices (i.e. soft and hard practices) with an emphasis on soft practices. Second, the direct impact of knowledge management processes on innovation performance has been proved. Furthermore, the ability of the organization to gain knowledge and use it has been identified as the main determinant of innovation performance. 
Accordingly, our findings suggest that TQM, as one set of practices, plays an indirect role in enhancing innovation performance by providing the necessary antecedents (i.e. knowledge). This conclusion has significant consequences for decision makers in manufacturing companies.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate the effect of applying total quality management (TQM) on enhancing knowledge management processes. It also examine the relationship between knowledge management and innovation performance in the Malaysian manufacturing sector.\nDesign/methodology/approach: This study used a survey method to test the formulated hypotheses. Therefore, the adopted questionnaire was used as an instrument to collect the needed data. The population of the study consisted of 800 big and medium manufacturing companies listed in Federation of Malaysian Manufacturers (2012).\nFindings: The results of this paper support a positive and significant impact of TQM practice on knowledge management processes. Furthermore, the relationship between knowledge management and innovation performance has been a proved. However, further analysis on dimension level indicates that knowledge acquisition failed to show significant relationship with innovation performance.\nOriginality/value: This study addresses one of the recent issues within the Malaysian context of becoming a developed nation, which is innovation performance, specifically for manufacturing companies. To do so, the overlapping relationships among TQM practices, knowledge management, and innovation performance have been tested.\n",
"role": "assistant"
}
] |
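The data-analysis passage above credits PLS-SEM's popularity to its tolerance of non-normal data. In practice, PLS-SEM judges the significance of path coefficients by bootstrapping rather than by distributional assumptions; the sketch below illustrates that resampling idea on deliberately non-normal synthetic composites. It is an illustration of the mechanism, not the authors' estimation code.

```python
# Nonparametric bootstrap of a standardized path coefficient, the
# significance device PLS-SEM relies on. Constructs are reduced to
# synthetic composite scores purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 300
tqm = rng.lognormal(size=n)             # deliberately non-normal "TQM" score
km = 0.6 * tqm + rng.lognormal(size=n)  # "knowledge management" score
data = np.column_stack([tqm, km])

def path(sample):
    # with a single predictor, the standardized path equals the correlation
    return np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]

boot = np.array([path(data[rng.integers(0, n, size=n)]) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"path = {path(data):.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```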
Health education for deaf Romanian children: parents' opinions
|
[
"Health education",
"Children with hearing disabilities",
"Communication technology",
"Parents’ involvement",
"Romania"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Adoption of an unhealthy lifestyle - smoking, unhealthy nutritional habits and sedentary behaviour - by children and adolescents has short-term and long-term effects on health, increasing the risk for chronic diseases such as cardio-vascular diseases, respiratory problems, diabetes and different types of cancer, especially that many times the behaviours which are established in childhood continue during lifetime (Currie, 2008; WHO, 2002, 2012; US Department of Agriculture and US Department of Health and Human Services, 2010). At the same time, these unhealthy behaviours could contribute to the development of social problems such as poor school achievements, social marginalization, and lower chances for professional and socio-economic development (Currie, 2008; WHO, 2002, 2012). Different organizations underline that health education programs focusing on promotion of a healthy nutrition, physical activity and smoking prevention should be offered to school children of all ages, being important to adapt based on the age and educational level the type of information and methods used for increasing awareness and building self-efficacy and skills which help children and adolescents (US Department of Health and Human Services, 1994; Centers for Disease Control and Prevention (CDC), 2011).\nMethods: Sample and procedure\nResults: Sample characteristics and use of internet\nDiscussions: Several studies from different countries of the world underline the importance of analyzing health and social problems that are novel and understudied and gaining the involvement of under-represented and vulnerable populations, such as people with disabilities, as well as improving their self-reliance and empowering them to be able to adopt a healthy lifestyle and to improve their social development and quality of life (Munoz-Baell et al., 2008a; Smith et al., 2012; Sadler et al., 2001; Smith et al., 2015; Pareek et al., 2015).\nConclusions: The results have several implications for health education and social work among children with hearing deficiencies from Romania and their parents.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to focus on parents of children with hearing disabilities from Romania and has three objectives. First, it assesses their characteristics regarding use of communication technology. Second, it investigates their opinions regarding the importance of school-based education for healthy lifestyle promotion among their children. Third, it evaluates the availability for their involvement in educational activities using face-to-face approach and communication technology dedicated to helping parents to promote healthy lifestyle among their children, as well as factors which influence this availability.\nDesign/methodology/approach: The study was performed in October-November 2015 in two schools deserving children with hearing disabilities from North-West part of Romania. Anonymous questionnaire were filled in by 182 parents.\nFindings: The majority of parents recognize the importance of school-based health education and more than 77 per cent totally agree that it should include issues regarding healthy nutrition, promotion of physical activity and smoking prevention. In total, 80.2 per cent of the parents declared that they are interested to participate in educational activities organized periodically at school and 66.5 per cent declared their interest in educational activities developed through communication technology in order to help them to stimulate the adoption of healthy lifestyle among their children. The availability was influenced by residence, educational level, understanding the importance of parents' involvement, characteristics regarding the use of communication technology.\nOriginality/value: This represents the first study from Romania investigating the opinions and availability for their involvement with regard to school health education among parents of children with hearing deficiencies. The results have several implications for health education among children with hearing deficiencies and their parents.\n",
"role": "assistant"
}
] |
Employee development is a great business opportunity: Investment in people is the key to company growth
|
[
"Organizational change",
"Leadership development",
"Engagement",
"Growth",
"Leadership",
"Employees development"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The economic climate is challenging, so why do so many organizations continue to invest in employee development and engagement activities?\nMolson Coors seeks to lead in people development and engagement: Molson Coors Brewing Company has a clear ambition to become a top-four global brewer. With people at the heart of this ambition, its executives recognize that being a leader in people development and engagement is critical to achieving the ambition.In 2009 Molson Coors partnered Emenex to develop a program that focused on empowering people to take charge of their development and career. The initial focus was the organization's group of 150 \"promotables\" and it was important to Molson Coors that the program could be facilitated by its own in-house development team.Emenex developed a tailored web-based 360-degree feedback tool and a complimentary structured-workshop program that could be rolled out through the internal team.The first part of the program emphasized enabling an individual to identify his or her own talents and passions and build a development plan that gave ownership and responsibility for his or her career and contribution. The second part of the program focused on practising holding an effective developmental dialog between and individual and his or her manager. The importance here is that the dialog is led by the individual and the manager plays a facilitative coaching role.The initial feedback was excellent, with 100 percent of participants stating they would recommend the course to their colleagues. Subsequent developments happened quickly:* the program was extended beyond the initial 150 participants and opened up to all managers in the UK;* the program was extended to similar groups in Molson Coors Canada and USA;* more train-the-trainer programs were held in order to train more facilitators to meet the demand;* employee-engagement levels at Molson Coors (UK) increased to 89 percent from 86 percent in 2009, with much of the feedback including reference to the value and success of the career-development workshop; and* the program continued to attract participants in 2011, with a waiting list that now extends through 2012.Molson Coors is on its way to achieving its vision of being a top-four player in the global brewing market. Since the program started, the company has seen significant improvements. The career-development program is seen as a significant factor supporting and inspiring employees to manage their personal development and organizational contribution.\nAspire-i lost a major local-government contract: The second case study shows how employee development and engagement can work to a common goal for the future growth and stability of an organization.Aspire-i is a social enterprise, delivering a range of publicly funded services including employment services. It also has a commercial division that focuses on information-technology solutions and European careers-guidance information.In 2010 Aspire-i lost a major local-government contract. Trevor Mason, the then-chief executive at Aspire-I, stated: \"The fragile state of the organization made us consider how long we could survive like this, rather than how are we going to recover from this.\"That contract made us complacent. Whether we performed well or not, money still flowed into the organization, which helped to create a culture where there were no consequences for poor performance. 
The attitude pervaded all other contracts, leading to under-resourcing, lack of management control and a 20 percent shortfall in contract delivery. So, the loss of our main contract left us in a precarious position.\"It was at this point that discussions between Aspire-i and Emenex took place. Aspire-i's senior-management team, with the help of Emenex, made clear that Aspire-i had to become more commercially focused, re-establish the company's core competencies and develop a mindset of focusing on and valuing contribution.To help Aspire-i to understand the link between development and performance, Emenex introduced it to the Four Stages of Contribution model (see Figure 2). Developed over a 30-year period, the model illustrates the behaviors associated with high performance for people at different points in their career and across all organizational roles.\"We knew we had to get better but what does better look and feel like, how do we measure it? The workshop enabled us to define it,\" commented Trevor Mason.The Four Stages of Contribution model (Dalton and Thompson, 1993) gave Trevor Mason and his senior-management team the framework that enabled them to clarify and communicate expectations and behaviors that would bring success to the company. The Four Stages model was used as the basis for a workshop program that, along with some powerful vision-building tools drawn from the concepts of Spencer Johnson's Who Moved My Cheese?, enabled the whole company to pull together (Johnson, 1998).Since the development program, Aspire-i has seen its best-ever out-turn. The company has increased turnover by 15.8 percent and now, unlike before, embraces challenge with enthusiasm and appetite to explore and capitalize on new opportunities and extract value from each piece of work. Divisional manager Nick Whiteside reflected: \"At the time we saw the program as just another piece of training. Now, we realize it had a much deeper impact.\"More than ever, companies in the UK understand that their most valuable tool, in terms of business survival and growth, is employee development and engagement. In business, people deal with people, and businesses can only function if employees are both engaged with the company's ideas and empowered to support and implement its plans.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to explore why investment in people development and engagement is the key to company growth.\nDesign/methodology/approach: - The paper explains the importance of people development and engagement in general terms; then highlights the contribution they made to performance at Molson Coors Brewing Company and social enterprise Aspire-i.\nFindings: - The paper reveals that the career development program at Molson Coors Brewing Company is seen as a significant factor supporting and inspiring employees to manage their personal development and organizational contribution, while the development program at Aspire-i has helped it to increase turnover by 15.8 percent and ensure that employees embrace challenge with enthusiasm and appetite to explore and capitalize on new opportunities and extract value from each piece of work.\nPractical implications: - The paper demonstrates that combining and developing an individual's talents and motives in the pursuit of an organization's goals provides both personal satisfaction and increased performance.\nSocial implications: - The paper highlights ways in which companies can improve their performance, to the ultimate benefit of society as a whole.\nOriginality/value: - The paper claims that businesses can only function if employees are both engaged with the company's ideas and empowered to support and implement its plans.\n",
"role": "assistant"
}
] |
External mandates and instructional leadership: school leaders as mediating agents
|
[
"United States of America",
"Schools",
"Leaders",
"Leadership",
"Educational policy",
"Implementation",
"Improvement",
"Legislation",
"Organizational behaviour",
"Principals"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Educators around the world are living in a period of almost unprecedented policy activism. State pressures for schools to be more publicly accountable for their results were observed in the USA by the end of the 1980s (Wills and Peterson, 1992; Wohlstetter, 1991), and similar initiatives were felt in other countries (Gordon, 1995; Louis and van Velzen, in press). Whether the pressures for improvement generated by elected officials are due to the increased availability of comparative data (such as PISA or TIMSS internationally, and NAEP within the USA) or the more general circulation of theories about how to improve the management of public services (new public management), one of the consequences has been a steady stream of research on the nature and impact of accountability policies.Through the mid-1990s, policy analysts began to examine the logic underlying state accountability and testing systems, most of which operated under the assumption that public test results would motivate school-based educators to work harder (because their performance would be made public) and smarter (because they would have clear objectives for improving student learning, McDonnell, 1994; O'Day, 2002). Initially, the responsibility for creating accountability systems rested with the states, and most responded to the expectation, albeit in different ways (Louis et al., 2005, 2010b).In the USA, the passage of the federal No Child Left Behind (NCLB) Act, which took effect in 2003, added another layer to the school accountability hierarchy. This legislation tied federal funding to state testing, public reporting and a set of increasingly harsh consequences for schools that failed to improve. Like most federal education mandates, NCLB provided an umbrella framework which states adapt to suit their existing political, and accountability contexts (Louis et al., 2010b). It also provided additional guidance and support, and, in particular, created the expectation that states would assist their districts in becoming better at supporting schools. The additional expectations for states that were embedded in NCLB have garnered less research attention than the law's effects on schools. This is, perhaps, surprising because the law was clear in its expectations that districts were seen as key institutional actors in creating results (Rorrer et al., 2008). Over the last decade, increasing research attention to districts suggests that, while some may be up to the task of setting strong improvement agendas and providing support for schools (Honig, 2006; Kerr et al., 2006; Togneri and Anderson, 2003), others are not (Coburn et al., 2009; Spillane, 1998). The NCLB requirements are, thus, contextualized within state and school district policies related not only to accountability but to any related reform initiatives.Given this layered accountability context, school leaders responses to federal accountability mandates are likely to reflect a complex interaction between their perception of state policies and support, the specific district contexts in which those policies are situated, including ongoing district reform initiatives and their own leadership beliefs and practices. 
In this paper, we investigate the relationship between these three forces, as we address a number of questions:* Do school leaders' perceptions of state or district school improvement policies and procedures influence how they lead their schools?* Are those school leaders who perceive their accountability context (state or district), as supportive more likely to behave as instructional leaders?* How do school leaders integrate their own leadership beliefs and agendas with the external mandates to which they are subject? To what extent do they perceive conflict between the two?* To what extent do school leaders' relationships with the district office enable them to craft coherence between the external accountability policies and their own agendas?The focus of NCLB policy is on student outcomes, not on leadership. Although in most states, NCLB has led to specification of the achievement targets to be met by each school, the federal legislation is silent about the role of leadership in achieving them. Yet the implications of NCLB for school leaders are profound. If targets are to be achieved, at least in the population of schools relevant to this paper, then instructional leadership that is skilled in monitoring student achievement data, in using that data to identify student needs and in building collective teacher capability to address those needs, are just a few of the leadership skills and responsibilities that are required. In schools without high levels of capability in these areas, skilled leadership of teacher change will also be required. NCLB not only sets highly ambitious student achievement targets, but by implication, also sets a very particular leadership agenda. The reaction of leaders to this agenda is, we predict, partly determined by the extent to which it is consistent with their current leadership beliefs and practice, and partly determined by their perception of district capacity to support them in achieving it.\nRelated literature: While our research questions belong within a long tradition of research on educational policy implementation, we draw, in particular, on two more recent theoretical trends within this literature. The first is that of sensemaking - the central insight of which is that implementing agents interpret policies through relevant prior beliefs and understandings (Spillane et al., 2002b). It is their interpretations of policy, rather than an invariant and objective policy idea, that is implemented. The explanation of policy implementation, therefore, requires investigation of the nature and determinants of actors' policy interpretations.The second theoretical idea is that of crafting coherence (Honig and Hatch, 2004). How NCLB policies are enacted by leaders is not just a function of their independent sensemaking processes. The quality of their instructional leadership is also likely to be influenced by the degree of coherence achieved between their own leadership agendas and the policy agenda set by the district office (Youngs et al., 2011). We use the term \"coherence achieved\" deliberately, because we see coherence as the result of sustained interactions between policy makers and implementing agents - one in which there is mutual influence and adjustment so that there is a principled integration of internal and external policy agendas.The third conceptual resource we draw on is that of instructional leadership. 
We explain the particular aspect of this concept that informed the items we included in our principal and teacher surveys.Sensemaking and policy implementation\nData collection methods: This paper is based on a secondary analysis of an existing data base. The data sources for this paper are drawn from a larger multi-method study that included nine randomly sampled states[1], 45 districts and approximately 175 randomly sampled schools within these states[2].Sampling\nData analysis: survey data: We first developed a merged data base that allowed us to match principal and teacher survey responses in 2008[5]. Due to non-response, either from teachers or any principal/assistant principal, on key items, our useable school sample size was reduced from 157 to 147, which includes 201 principal and assistant principal respondents[6]. We used the 2008 principal survey to develop measures of our two key predictor variables. A positive state policy index was measured by principal's degree of agreement with four positively worded items about state policy. Principals' perceptions of district accountability policies were assessed by their degree of agreement that the district was focussed on and could support the school's improvement efforts, which we called the positive district accountability index.An index of instructional leadership was developed from the 2008 teacher survey. It comprised seven items about school leader involvement in such things as setting standards and providing instructional advice (Louis et al., 2010a). Since the design of the wider study involved analyses of the relationships between selected school variables and student achievement, the instructional leadership measure was based on published evidence about the relative impact of various types of practice on student outcomes (Robinson et al., 2008; Marzano et al., 2005). Most of the items included in the survey were about direct instructional leadership, including frequency of classroom visits; provision of teaching advice and involvement in teacher planning. We judged that while it might be difficult for high school leaders to engage in these more direct forms of instructional leadership, they provided a realistic assessment of the focus of our elementary and middle school leaders.The description of each of these variables, along with associated survey items and scale reliability is reported in Appendix.In addition, we assumed that a number of measures of school characteristics might mediate the effects of accountability policies on principal behavior. In particular, secondary schools are typically under considerably more pressure on accountability measure, since they must not only perform on state tests, but also on graduation rates and increasing expectations about students being \"college ready.\" We therefore included school level (elementary/secondary) in our analysis. Student poverty levels (percent eligible for free or reduced price meals) are known to be associated with lower student test scores, which may put schools in jeopardy of not passing muster on state indicators.For this paper we examined the distribution of leaders' responses on key variables. Multiple regression analysis was used to examine predictors of administrative instructional leadership. The useable sample size for this analysis includes all respondents who identified themselves as an individual with the title of principal. 
This decision was based on the assumption that both principals and assistant were in a position to make assessments of state and district policies, that including both increased the sample size, and that teachers were asked to include all leaders in their rating of instructional leadership (see Appendix)[7].\nCase study sampling and data analysis: As already indicated, site visits were conducted in 18 districts and 36 schools in nine states. For this paper, an outlier sampling strategy was used to further reduce the number of case study schools. All site visit schools were ranked by the instructional leadership score described above. While the relationship between this score and school type (elementary or high school) was 0.33 in the survey sample, the relationship was considerably stronger in this sub-group, with none of the high schools ranked above the median. As a consequence, we decided to eliminate high schools from the case analysis and focus on elementary and middle schools serving students in grades eight or lower. We sampled the four top and four lowest scoring schools for analysis. As we proceeded, we discovered that one of the lowest scoring sites had incomplete data so we limited our analysis to the four highest and three lowest instructional leadership schools.The purpose of the qualitative analyses is to deepen our understanding of the inter-relationships between principals' attitudes to state and district policies and teachers' ratings of their leadership practices. We begin with a brief description of the context of each the seven cases and then provide a thematic analysis organized as a comparison between three principals categorized as low instructional leaders (LIL) and four categorized as high instructional leaders (HIL). This comparison is organized by four themes: principals' espoused leadership theory, instructional leadership emphasis and practices; response to state and district accountability policies, and their relationship with district office. Under each heading, we note the contrasts between the two principal groups while also being careful to note the variation within each group and the overlaps between them.The seven schools, which were located in six different states, varied widely in size, demographic characteristics of the student body and the administrative experience of their principal (Table I). Both LIL and HIL schools came from larger and smaller districts; both groups had relatively new and relatively experienced principals; both included schools with students from varying backgrounds (although the LIL schools had somewhat fewer minority students). The only obvious imbalance across the two groups is that all of the LIL principals were male, while all of the HIL principals were female. However, Walker Elementary, an HIL school, was led by a male principal during the first two and a half years of data collection. Only Maple Island had an assistant principal, and it was therefore decided to confine our analysis to the seven who served in similar positions.Given our focus on sensemaking and coherence we chose a largely inductive approach to interview analysis. Each author independently read interviews from both higher and lower instructional leadership and proposed themes that might be relevant to the research questions. 
After discussion we settled on seven coding categories that were discussed by all principals and that were relevant to the focus of our inquiry:(1) features of school and community context that they believed shaped their leadership;(2) descriptions of their personal leadership theories, including leadership vision and values;(3) descriptions of their leadership priorities, particularly with respect to instructional leadership;(4) connections between federal and state policies and their own leadership priorities and actions;(5) connections between district policies and their own leadership priorities and actions;(6) discussion of resources and their effects on their leadership; and(7) principals' sense of \"ownership\" of external policies from federal, state or district sources and their integration of external and internal initiatives.Responsibility for the initial analysis of each principal's interviews was divided equally between the two authors. Quotes and summaries of interview data were entered in an excel spreadsheet under each of the seven headings and then a draft case was written for each school. Before the cross-case analysis was completed the seven themes were reduced to the four which connected most closely to our research questions and captured the contrasts between HIL and LIL: leadership vision; reported instructional leadership practices; response to external accountability policies and relationship with district office. Once each case was written, a cross-case analysis was conducted in which the leadership of higher and lower scoring schools was systematically compared on each of the four themes. Claims made in the final cross-case analysis were checked against the entries in the excel spreadsheet and, in some cases, against the original interview transcripts. Finally, the first author read the district interviews with superintendents and other key leaders to cross-check principals' claims about both the district and the district's response to state and federal policies.\nSurvey results: instructional leadership and responses to accountability policies: Our analysis of the survey data was focussed on two questions: first, how did the principals describe the policy context? Second, did their views of their policy context help to explain their instructional leadership behaviors? Both of these questions address, albeit indirectly, the context in which they \"make sense\" of the policy environment in which they are located, and whether they view them as coherent and incoherent.School leader ratings of their policy context\nResponses to accountability policies: a comparison of low and high instructional leadership: In the remainder of our paper, we deepen our understanding of the impact of external accountability policies by looking at the relationships between school leaders' attitudes to and experience of state and district policies and their instructional leadership practices. Our findings are organized as a comparison between three school leaders who teachers rated as among the lowest on instructional leadership (LIL) and four who were assessed as among the highest (HIL). Table III presents an overview of this comparison organized under four themes relevant to our research questions. 
In discussing these findings, we will attend to both similarities and differences between the two groups of cases and to the differences within each group.Thematic analysis: espoused leadership theories[10]\nSummary and discussion: The evidence from our analysis of the survey data suggests that most principals in this study had positive perceptions of state policies, a finding that is corroborated from our qualitative data, where five of the seven had largely positive comments, while a sixth had mixed perceptions that included both positive and some negative comments. Given the public controversies about NCLB, school leaders' predominantly positive perceptions of state policy were something of a surprise. Overall, many journalistic reports about NCLB's effects on schools have tended to emphasize the problems that have been caused by particular stipulations of the law, such as tutoring requirements, or inflexibilities related to testing. At least one characterizes NCLB as heading toward \"predictable failure\" (Mintrop and Sunderman, 2009). This study suggests a need to reframe the discussion of accountability, at least in the context of the HIL.Accountability and instructional leadership\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to examine how US school leaders make sense of external mandates, and the way in which their understanding of state and district accountability policies affects their work. It is posited that school leaders' responses to external accountability are likely to reflect a complex interaction between their perception of the accountability policies, the state and district contexts in which those policies are situated and their own leadership beliefs and practices.\nDesign/methodology/approach: - The authors use both principal and teacher survey data to explore the question of how perceptions of external policy are associated with instructional leadership behaviors. Cases of seven principals are employed to flesh out the findings from the survey analysis.\nFindings: - It is concluded that external accountability policy may have a positive impact on instructional leadership - where they see those policies as aligned with their own values and preferences, and where they see their district leaders as supportive of school-driven accountability initiatives. In these cases, school leaders internalize the external accountability policies and shape them to the particular needs that they see as priorities in their own school. Where one or the other of these factors is weak or missing, on the other hand, leaders demonstrate more negative attitudes to external accountability and weaker instructional leadership.\nOriginality/value: - This analysis draws on a unique, large-scale data base and uses a mixed methods approach to answer the question.\n",
"role": "assistant"
}
] |
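The methods passage above describes regressing a teacher-rated instructional leadership index on principals' policy-perception indices plus school covariates (school level and poverty). A minimal statsmodels sketch of such a model follows; the data frame and column names are synthetic assumptions, not the study's variables.

```python
# OLS regression of an instructional leadership index on policy
# perceptions and school covariates, on synthetic stand-in data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 201  # principal/assistant principal respondents in the study
df = pd.DataFrame({
    "positive_state_policy": rng.normal(size=n),
    "positive_district_accountability": rng.normal(size=n),
    "secondary": rng.integers(0, 2, size=n),         # school-level dummy
    "pct_free_reduced_meals": rng.uniform(0, 100, size=n),
})
df["instructional_leadership"] = (
    0.3 * df["positive_district_accountability"] + rng.normal(size=n)
)

X = sm.add_constant(df[["positive_state_policy",
                        "positive_district_accountability",
                        "secondary", "pct_free_reduced_meals"]])
print(sm.OLS(df["instructional_leadership"], X).fit().summary())
```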
No longer a family: Employee perceptions of organizational culture of a Russian English-medium instruction university
|
[
"Russia",
"Organizational culture",
"EMI",
"Employee organizational commitment",
"Employee organizational identification"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Globalization has \"flattened\" the world (Friedman, 2005, p. 8), increasing both competition and cooperation among countries, industries, organizations and individuals. Globalization has reached higher education and prompted the emergence of English-Medium Instruction (EMI) universities that could be defined as institutions of higher education that use \"the English language to teach academic subjects (other than English itself) in countries or jurisdictions where the first language of the majority of the population is not English\" (Macaro et al., 2018, p. 37). Today, EMI universities could be found in every continent or region of the world and their numbers continue to grow. For example, in Europe, the number of EMI master's programs quadrupled between 2013 and 2014 and the number of undergraduate programs has grown to almost 3,000 in the past ten years (Sandstrom and Neghina, 2017). In Russia, several dozen leading universities have been receiving direct financial support from the government to develop and implement EMI programs (Block and Khvatova, 2017). In Morocco, at least 30 universities provide courses and programs in English (Minkara, 2013). Taiwan's Ministry of Education created the Higher Education Evaluation and Accreditation Council to provide quality assessment to over 100 EMI programs in the country (Hou et al., 2013).\nChange and EMI universities: The emergence of EMI universities and programs is driven by a variety of factors, including national and regional policies related to higher education, student employability, faculty mobility, or market for international student (Coleman, 2006; Doiz et al., 2011). For example, Sandstrom and Neghina (2017) report internalization of higher education and a goal to be competitive as top two reasons universities in Europe offer EMI programs. Adoption of English helps universities become more globally competitive (Breeze, 2014; Doiz et al., 2011). This competition is propelled and measured by university rankings: Times Higher Education World University Rankings, Academic Ranking of World Universities and Quacquarelli Symonds World University Rankings (Barron, 2017; Lo, 2014). The adoption of English as the language of instruction enables universities to attract fee-paying international students from other countries and to promote local student mobility. As a result, these universities compete for students globally and these students become their customers: \"Universities are no longer institutions but brands\" (Coleman, 2006, p. 3).\nOrganizational culture: The concept of organizational culture was first introduced to managerial literature by Blake and Mouton (1964) and became popular in managerial research and practice in the 1980s (Deal and Kennedy, 1982; Ouchi, 1981; Peters and Waterman, 1982). Organizational culture includes symbols, language, traditions, rituals and myths that communicate a company's ideology and beliefs (Ouchi, 1981; Pettigrew, 1979). Schein (1984) defined organizational culture as.\nOrganizational identification: How individuals identify themselves within an organization represents an important component of their work life (Ashforth and Mael, 1989) and affects their behaviors, attitudes and feelings (Blader et al., 2007; van Knippenberg et al., 2007). Organizational identification is a form of group identification that \"may provide one answer to the question, Who am I?\" (Ashforth and Mael, 1989, p. 22). 
Organizational identification can also be understood as an extent to which employees think of themselves as members of an organization (van Knippenberg et al., 2007). An organizational identity is \"self-definitional\" (van Knippenberg et al., 2007, p. 461); employees choose how to define their membership with the organization. Organizational identification includes one's identification with organization-specific values and goals, perception of common path and future with the organization, willingness to sacrifice career goals for the welfare of the organization, and a deep feeling of loss when leaving the organization (Ashforth and Mael, 1989).\nOrganizational commitment: Organizational commitment refers to employee loyal behavior toward the organization that is manifested in their acceptance of the organization's values, pursuit of organizational goals and intention to continue their membership in the company (Porter et al., 1974; Robbins, 2003). Organizational commitment could be described as \"the employee's psychological state with respect to his/her relationship with the organization\" (Agarwal and Sajid, 2017, p. 125). Commitment \"binds\" an employee to the organization (Allen and Meyer, 1990, p. 301). As a result, for example, committed employees are willing to accept organizational changes and are willing to stay in the organization even in difficult turbulent times (Meyer and Allen, 1997). Employees with high level of organizational commitment are more satisfied with their careers in the organization (Ashforth and Mael, 1989).\nMethodology: Setting\nResults: As shown in Table I, the newcomers' perceptions of organizational culture differ significantly between the paired samples of clan culture (t (89)=-5.56, p<0.001), market culture (t (89)=6.09, p<0.001), and hierarchy culture (t (89)=4.77, p<0.001). No statistical difference was found only for adhocracy culture (t (89) =-1.53, p=0.13). The same results were obtained when running paired-sample t-test separately for faculty (clan culture, t (40)=-4.49, p<0.001, market culture, t (40)=4.24, p<0.001, hierarchy culture, t (40)=3.23, p<0.01, adhocracy culture, t (40)=-0.62, p=0.54) and staff (clan culture, t (48)=-3.38, p<0.01, market culture, t (48)=4.98, p<0.001, hierarchy culture, t (48)=3.47, p<0.01, adhocracy culture, t (48)=-1.57, p=0.12). These results demonstrate a gap between perceived and preferred organizational culture. The newcomers perceive the current organizational culture as market (M=29.58, SD=13.81), but their preferred type is clan (M=39.76, SD=20.28).\nDiscussion: This study examined the influence of a perceived organizational culture on organizational identification and commitment of employees of a Russian university that is transforming to become EMI. Data were obtained from 90 new full-time employees using three validated instruments. Key findings of the study are summarized in Table III and discussed in this section.\nConclusions: The study showed that new employees at an EMI university in Russia perceive the university culture as market (Cameron and Quinn, 2006), which is not typical for institutions of higher education. Employees prefer a clan culture, which is common for colleges and universities. Not surprisingly, the study also showed that it is the clan, and adhocracy, culture that positively impacts employee organizational commitment and identification. 
These results support our suggestion that universities undergo an organizational change as they transition to EMI, which leads to a change in their values and organizational culture. The study also suggests that after working for a year, new employees do not identify with the market culture and long for a clan culture, which is traditional for institutions of higher education. This discrepancy hinders the connection between employees and the organization and warrants further investigation by researchers and attention from university leaders and HR professionals.\nLimitations: The study was limited to one state Russian university located in a mega-city. It may be difficult to generalize the findings to other EMI universities located in small cities in Russia or in other countries, or to private universities. The participants were newly hired employees, so further research should be conducted to explore the perceptions of organizational culture, organizational commitment and identification of those employed for longer periods of time. Also, such variables as faculty rank, department and tenure status, as well as administrators' position type and level in the university, were not addressed and could be included in future studies.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the impact of a perceived organizational culture on organizational identification and commitment of employees of a Russian university that is transforming to become an English-medium instruction (EMI) university.\nDesign/methodology/approach: Data were collected between February and March 2018, via an online survey that was disseminated among 115 new employees; 90 were completed and used for analyses. The survey included three scales.\nFindings: Employees of the EMI university perceived its culture as market, which is not a common characteristic of universities that usually have a clan culture. The study has also demonstrated a discrepancy between the perceived (market) and the preferred (clan) organizational culture. The study has also showed that a clan, and not a market, culture strengthens employee organizational commitment and identification.\nOriginality/value: Most research has examined EMI universities from the perspectives of teaching and learning. This study contributes to the limited conceptual and theoretical base around these universities by examining their processes from a perspective of management. This paper suggests that the adoption of English as a medium of instruction requires organizational change that leads to change in organizational culture.\n",
"role": "assistant"
}
] |
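The results in the record above rest on paired-sample t-tests: each newcomer's perceived score for a culture type is compared with the same person's preferred score. A minimal sketch of that comparison in Python follows; the scores are randomly generated placeholders rather than the study's data, with only the sample size (n = 90) taken from the record.

```python
# Minimal sketch of the paired-sample t-test reported above (perceived vs
# preferred culture scores). Scores are hypothetical placeholders, NOT the
# study's data; only the sample size (n = 90) mirrors the record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
perceived = rng.normal(30, 14, size=90)   # e.g. points given to clan culture now
preferred = rng.normal(40, 20, size=90)   # points the same respondents prefer

t_stat, p_value = stats.ttest_rel(perceived, preferred)
print(f"t({len(perceived) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```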
Determining the factors influencing the level of awareness and usage of open source digital repository software by academic librarians in India
|
[
"India",
"Digital library",
"Library management",
"Digital repository",
"Open source software",
"Awareness and usage"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: With the emergence of information and communication technology (ICT), the digital library has become a ubiquitous term in the library landscape and has changed the way people access and interact with information. Corollary, the concept of extended libraries that transcends physical boundaries has come up. Digital library has unleashed immense potential to empower the users, as well as the librarians to conceive, assemble, build and disseminate new knowledge and information collections (Bainbridge et al., 2003). In the digital library landscape, DSpace, Flexible Extensible Digital Object Repository Architecture (Fedora), EPrints and Greenstone, DiVA, Bepress and ContentTool are dominant digital repository software (DRS) for building a digital library or institutional repository (IR) (ROAR, 2020; Caplan, 2008). Most of the software are freely available for downloading, but the institute has to bear the cost of implementing, customizing and uploading content into the IR, albeit, policy document, technical paper and documentations are available to implement it in the institute.\n2. Literature review: The OSS movement has produced huge numbers of expensive and scalable software; and also raised awareness among the public on the important issues such as open access, open education and open content. Digital libraries, open access and OSS are the consequence of the \"open models of exchange\" and \"open innovation\" that help societies grow up and thrive (Krishnamurthy, 1995). There exits significant similarities in philosophies and practices of OSS community and librarianship; and both of them advocate freedom of use and free access to information (Morgan, 2004; Eyler, 2003). OSS offers a bouquet of applications for libraries and information centres. Because of its intrinsic and economic benefits, it is advocated to adopt and use OSS in libraries (Kampa and Kaushik, 2019).\n3. Objectives: The main objective of the present study is to explore the level of awareness and usage of open source DRS. The present study attempts to ascertain the following objectives:\n4. Research questions: The present study formulates the following research questions based on the objectives of the study:\n5. Methodology: A quantitative method was used in the present study. To achieve the research objectives, a structured questionnaire was administered to academic librarians in India. Instrument was posted in various library mailing lists, email groups and forums. The instrument had three sections (A, B and C). Section A had five questions on name, gender, name of the institution, designation and experience. Section B consisted of questions relating to awareness and usage of OSS and participation in workshop/seminar/conference. Section C consisted of questions pertaining to awareness and usage of open source DRS, namely, DSpace, Fedora, GSDL and EPrints; and response to questions were recorded in four-point scale as \"Never heard of it = 1\", \"I am aware but never used it = 2\", \"Use it only sometimes = 3\", \"Use it on regular basis = 4\". Of 438 responses received, 374 responses were found to be complete in all aspects for statistical analysis. The collected data were analyzed by descriptive statistics. 
In addition to descriptive statistics, Fisher's exact test was used to analyze the relationship between educational qualifications and awareness and usage, and between participation in workshops/seminars and awareness and usage of open source DRS.\n6. Selection of open source digital repository software: The present study selected the most widely used open source DRS, namely, DSpace, GSDL, EPrints and Fedora. Unlike open source ILS, most digital repository software was developed in academic settings: the Massachusetts Institute of Technology (MIT) and Hewlett-Packard (HP) Labs developed DSpace, the University of Southampton developed EPrints and the University of Waikato developed GSDL. DSpace and Fedora are developed in Java, whereas EPrints and GSDL are developed in Perl. EPrints and GSDL are licensed under the GNU GPL; DSpace and Fedora are licensed under the Berkeley Software Distribution (BSD) and Apache License 2.0, respectively (Table 2).\n7. Results: 7.1 Demographic characteristics of the participants\n8. Discussion: The present study has examined the level of awareness and usage of open source DRS, including investigating the relationships between qualifications and participation in workshops/seminars and awareness and usage of open source DRS. The results show that the levels of awareness and usage of open source DRS, namely, DSpace (Mean = 2.92, SD = 0.906) and GSDL (Mean = 2.18, SD = 0.699), are high among academic librarians in India. In total, 33.4% and 11.5% of the participants are regularly using DSpace and GSDL, respectively (RQ1). The high level of awareness and usage of DSpace and GSDL might be attributed to the availability of documentation, the scalability of the software and its various features, including the numerous workshops/seminars organized on both software packages.\n9. Conclusions: The digital repository, which espouses the promotion of efficient delivery of information, has witnessed spiralling adoption and use in academic institutions over the past decade. The findings of the present study show that the levels of awareness and usage of open source DRS, i.e. DSpace and GSDL, are quite high among academic librarians in India. DSpace is the most used open source DRS, followed by GSDL. Academic qualifications and participation in workshops/seminars on different OSS are the two determining factors that strongly influence the level of awareness and usage of open source DRS by academic librarians in India. This research has various implications at different levels. Higher educational institutions and other professional bodies should promote OSS and organize workshops and training programmes on OSS, thereby increasing the level of awareness and usage of OSS. Course modules on different OSS can be incorporated into the MLIS syllabus, which could enhance the awareness of budding academic librarians towards OSS, including open source DRS. Consequently, this will increase the adoption and usage of OSS.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to analyze the level of awareness and usage of open source digital repository software (DRS). The paper also studies the factors, which influence the level of awareness and usage of different open source DRS by academic librarians in India.\nDesign/methodology/approach: The study administered an online questionnaire to academic librarians in India to know their level of awareness and usage of open source DRS. The questionnaire aimed to gather the awareness and usage of open source DSR. In total, 374 complete responses were collected from academic librarians in India and the collected data were analyzed using descriptive statistics. Moreover, Fishers' exact test was used to identify whether factors i.e. qualification and participation in workshop/seminar influence the level of awareness and usage of open source DRS.\nFindings: The results of the study reveal that the level of awareness and usage of open source DRS, namely, DSpace (Mean = 2.92, SD = 0.906) and Greenstone digital library software (GSDL) (Mean = 2.18, SD = 0.699) are high amongst the academic librarians in India. In total, 33.4%, 11.5% of the participants are using DSpace and GSDL, respectively, on regular basis. Fishers' exact test shows that factor(s) i.e. qualifications and participation in workshop/seminar affect the level of awareness and usage of open source DRS. The results show that there exits strong relation between participation in workshop/seminar and awareness and usage of DSpace (Fishers' exact test = 13.473, p < 0.05).\nOriginality/value: This paper is the new type of study exploring level of awareness and usage of open source DRS by academic librarians in India. It identifies the factors that affect the awareness and usage of open source DRS. It is the first study to analyze the statistical significance between Indian librarians' participation in workshop/seminar and their level of awareness and usage of different open source DRS.\n",
"role": "assistant"
}
] |
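Fisher's exact test, used in the record above to relate workshop/seminar participation to awareness and usage of DSpace, is available in scipy for 2x2 contingency tables. The sketch below uses entirely made-up counts, since the study's raw contingency table is not reproduced here (and its actual table may have been larger than 2x2).

```python
# Minimal sketch of Fisher's exact test on a 2x2 table. Counts are made up
# for illustration; the study's contingency data are not reported here.
from scipy.stats import fisher_exact

#                  uses DSpace   does not use DSpace
table = [[120, 34],    # attended an OSS workshop/seminar
         [86, 134]]    # did not attend

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```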
A complete agro-financial service framework for emerging economies
|
[
"Agriculture",
"Agro-financial",
"Crop insurance",
"Delivery",
"Price risk",
"Risk mitigation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: risks in agriculture and criteria for risk management: Agricultural produce is subject to a variety of risks namely climatic, biological, geological, market and man-made. There are two main approaches that are generally used to handle risks: risk control and risk financing. Risk control can occur in several ways: through the diversification of income sources, migration, use of hazard-resistant technology and reduced consumption. Risk financing includes crop or weather insurance and saving. It can also entail selling assets and obtaining loans from formal or informal sources (Arias and Covarrubias, 2006).\n2 Need for crop insurance: The agricultural risks along with how the farmers manage them impact farm income, productivity, and access to credit. These risks among others could lower the farmers' anticipated income and have negative effects on their standard of living, ability to provide for themselves and their families, ability to build capital and ability to access credit from lenders (Bryla et al., 2001).\n3 Development of crop insurance: Crop insurance is cumbersome to administer and prone to losses. Claims ratio has been around 500 per cent. Insurance companies feel that crop insurance is a liability and not a profitable proposition at all. Estimating crop loss due to an unexpected weather event as well as estimation of potential yield and actual yield is a difficult task (Venkatesh, 2008).\n4 Complete agro-financial service: a conceptual framework: Having a look at the agricultural risks faced by the farmers, we see that the major systemic risks whose effect can be dampened by the purchase of crop insurance are price variations, yield risk and other market risks such as demand.\n5 Delivery method: To identify the effective delivery methods for insurance services, we, first of all, need to know the factors which affect the take up of insurance products. Gine et al. (2008) assert that the take up rate depends primarily on the availability of liquid cash at the time of the product marketing and familiarity with the product. Further, Makki and Somwaru (2001) state that the choice of crop insurance contracts is influenced by the level of risk, expected indemnity benefits from the contract, cost of insurance and premium subsidy. The above studies also bring out other factors affecting the farmers' participation which include weather conditions at the point of purchase, \"advice from others\" (Gine et al., 2008) and trust in the insurance agent.\n6 Conclusion: This paper has been able to determine the various elements that need to be considered while designing an agricultural insurance product and thus, has proposed a holistic framework for the same. The framework proposed would take care of the problems faced both by the farmers and the insurance companies and can act as a basis for new product development in the agricultural risk management domain.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to propose a conceptual framework for a comprehensive agricultural financial service to address the problems faced by the farmers and the insurance companies and to suggest the product delivery method to increase the acquisition rate of the product.\nDesign/methodology/approach: A theoretical framework is proposed based on the pros and cons of various insurance services floated out earlier as well as the various risks faced by the farmers. Subsequently, on the basis of the factors that affect the take up rate of insurance products, the paper proposes an adequate delivery strategy.\nFindings: An efficient integrated framework for risk management in agriculture has been developed by interlinking various elements like price risk, yield risk and demand risk in addition to providing additional advisory and credit services. Other services only cover one or two of these and hence, are not adequate. Also, illiteracy of farmers is the main bottleneck that decreases the take up rate and can be addressed by appointing agro-financial agents.\nPractical implications: The frameworks proposed if brought into practice will lead to a holistic agro-financial service and help in overall advancement of the agricultural sector and economy.\nOriginality/value: This paper would help insurance companies to come out with better insurance products for the agricultural community that will be easier to administer and economically more viable and thus fulfils the criteria for a holistic insurance framework.\n",
"role": "assistant"
}
] |
Interlending and document supply: a review of the recent literature: 85
|
[
"Open access",
"Document supply",
"Electronic books",
"Electronic journals"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Although much material is now free on the web this review is still dependent on the support of the British Library where much of the material is still held in print. Over 150 LIS journals and reports are scanned as well as many web sites and discussion lists.\nIntroduction: This last quarter has been remarkably quiet for publications relevant to ILL librarians. Thus I have taken the opportunity to write an extended review on open access (OA), specifically on how it is playing out in the UK. Clarity is difficult in the current situation but I hope I have captured some of the important decisions that have been made and conveyed something of the complexity of OA in the real and messy world that we live in. OA affects us all but its consequences are particularly complex for ILL librarians.\nOpen access: Librarians, and indeed most people, underestimate the ability of big business to get out of a tight corner. The latest example is the response to OA by the big commercial academic publishers (coat tailed by the society publishers). Until recently the big commercials were resolutely opposed to OA; just check Elsevier and its support for anti-OA legislation in the US and many public statements over the years. Springer saw which way the wind was blowing back in 2008 when it acquired BioMedCentral. However in the last two years the others have tumbled to the fact that OA is here to stay - the only question is which route is chosen. Green OA threatens their bottom line and they have fought hard and successfully in the UK at least to convince academe and more importantly the government that Gold OA is the only way forward to a sustainable future for academic publishing. Their victory is underlined in one of Richard Poynder's interviews on OA with Sami Kassab. Kassab \"is an executive director at the investment company Exane BNP Paribas, where he runs the Media Research team covering professional publishing. Amongst the companies Kassab monitors are Reed Elsevier, Thomson Reuters, Informa, John Wiley, Wolters Kluwer, and Pearson. Currently, Kassab is positive about the sector, arguing that scientific publishing offers 'best in class defensive growth in a very resilient industry'. Kassab believes that open access (OA) is still a marginal activity and in any case poses neither a short-term nor a long-term threat to large scholarly publishers. In fact, he says, it will enable them to monetise more articles than they have been able to monetise historically.\" How can they do this? It is simple, he argues - in Gold OA the author or their funder pays to publish, thus the more articles are published the more revenue is generated. This is a direct relationship unlike subscriptions which cannot be raised in direct relation to articles published without sparking opposition from cash strapped librarians.\nThe rocky road to open access in the UK - a special section: This is a good time to consider the state of play in the UK which has been in the forefront of campaigning for open access to research material. There has been a plethora of reports etc. which may be of interest to readers.\nE-books: E-books are certainly popular - although not so much as the hype suggests which various studies have confirmed. However the consumer as citizen has taken to e-books in a big way with the arrival of acceptable platforms, students use them, and writers and researchers use them. So a salutary piece by Clifford Lynch is welcome. 
In it he emphasises that it is an illusion that we buy e-books, when in fact we usually just rent them; not only that, but we rent them under onerous conditions. For example, lending is usually forbidden. As he points out, the really difficult problem will arise when books are only published electronically. Well worth a read and freely available (Lynch, 2013). Another anecdotal piece confirms the view that students' main concern is not format but price. Generally speaking, they prefer print to \"e\" - another healthy corrective to the current hype. There is also an interesting thread following on from the online version (Howard, 2013b).\nDocument supply: Scanning and delivering material is familiar to ILL librarians, who have to jump through the hoops imposed by publishers on e-only resources. However, scanning and delivery for users of printed and microform material held by a library is not so well known. Yale University Library introduced a new service in 2012 doing just that. The service is free of charge and \"In most locations Scan and Deliver workflows have operated within the interlibrary loan or access services units, incorporating request processing into already existing positions and workflows.\" and \"During the first academic year of operations YUL patrons placed 16,781 scanning requests. Of these requests, 12,242 were fulfilled by Yale libraries for a fill rate of 73 per cent\". It is seen as a great success by patrons. It would have been interesting to hear how Yale integrated this new service into eILL, but this is a minor quibble in what is an interesting and detailed description of this new service (Bruno, 2013).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to provide a review of the literature concerning interlending and document supply.\nDesign/methodology/approach: A review based on the reading of over 150 journals and related material including reports, conferences and blogs.\nFindings: This review has concerned itself primarily with the recent developments in open access and the impact that these will have on document supply. However some articles of importance in document supply as such have been noted.\nOriginality/value: This paper represents a useful resource for librarians and others concerned with interlending and document supply as well as such related matters as resource sharing and open access.\n",
"role": "assistant"
}
] |
Teachers' perceptions on inclusion in basic school
|
[
"Teacher education",
"Inclusive education"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The Salamanca Declaration (UNESCO, 1994), a United Nations resolution concerning principles, policy and practice in special education, has been adopted by several countries. According to this document, in an inclusive school all students learn together, whenever possible, regardless of the difficulties and differences presented. The regular classroom is understood as the most appropriate place to combat prejudice and promote inclusion, favoring the construction of an inclusive society and providing every child with an adequate education.\nTheoretical reference: Disability is seen by Vigotski (1997) as a socially constructed phenomenon, referred to in interpretations that seek homogeneity, labeling and classification of the disabled person. Amaral (1994) points out that disability is preconized in the social approach to the disability, forged in exclusion and non-acceptance of its difference. On the other hand, Omote (1994) states that disabilities present disadvantages and social discredit.\nMaterials and methods: In order to carry out this investigation, a questionnaire was developed by a partnership between researchers from the Pontifical Catholic University of Sao Paulo, Brazil and the University of Minho, Portugal, and applied to mathematics teachers.\nResults: Teachers' profile\nDiscussion: Nowadays, inclusive education has been defined in Brazil according to a wide perspective, which goes beyond the simple concept of specialized attention, as it was considered in recent times. According to the Law of Directives and Bases in Brazil (Brazil, 2013), it is a modality of school education, focused on the education of the individual, with the objective of the exercise of citizenship.\nConclusion: In this work, the authors sought to show the reality of public schools in Sao Paulo, Brazil and Minho, Portugal regarding knowledge, availability and use of support materials in the teaching of mathematics to students with SEN. From the results obtained, there is a clear need not only for promoting initial and further teacher training that takes into consideration the profile of this teacher but also for promoting the development of support materials (games, software, devices and assistive technology) in a collaborative way, involving users, teachers, engineers in a way to ensure a good usability and adequate adaptability. Thus, the inclusion of SEN students in schools must not take place only with their physical integration, but must also consider their integration at social, emotional and educational levels.\n",
"role": "user"
},
{
"content": "Purpose: Despite dealing with special educational needs (SEN) students, many teachers feel unprepared for this task. This situation reveals the urgent need for studies in different areas, directed toward the inclusion of students in regular classrooms. Therefore, a diagnosis about the situation of inclusive education and the resources available in schools offering regular teaching becomes of paramount importance. Therefore, the purpose of this paper is to present the results of an investigation that sought information on pedagogical work in inclusive education and in the use of support materials by teachers of basic education in Brazil and Portugal.\nDesign/methodology/approach: In order to carry out this investigation, a questionnaire was developed by a partnership between researchers from the Pontifical Catholic University of Sao Paulo, Brazil and the University of Minho, Portugal, and applied to mathematics teachers. The study participants consisted of 197 mathematics teachers, working in primary school, secondary school and young people and adult education. Data collection was carried out through a questionnaire, available online and designed in Google Forms, with 48 questions (both open and closed formats).\nFindings: From the results obtained, there is a clear need not only for promoting initial and further teacher training that takes into consideration the profile of this teacher, but also for promoting the development of support materials (games, software, devices and assistive technology) in a collaborative way, involving users, teachers, engineers in a way to ensure a good usability and adequate adaptability. Thus, the inclusion of SEN students in schools must not take place only with their physical integration, but also must consider their integration at social, emotional and educational levels.\nOriginality/value: It is understood that the teacher should receive a solid training in successful inclusion experiences in terms of technological, educational and didactic experiences. Another problem that seems to be recurrent is that support materials have been developed in a way that is somehow disconnected from the reality of the classroom. The context in which the support material is inserted is fundamental to the success of its utilization. What is more, it cannot be isolated from the individuals who will use it. It thus becomes urgent to prepare the school environment for the reality of inclusion. This involves aspects from changes in infrastructure and development of assistive technology to assist the student with SEN in their learning, to the establishment of public policies that involve teacher initial and further training, specialized support and curricular discussions.\n",
"role": "assistant"
}
] |
Antecedents and consequences of extrinsic superstitious beliefs: a cross-cultural investigation
|
[
"Self-esteem",
"Experiential consumption",
"Anthropomorphism",
"Intrinsic religiosity",
"Magical thinking",
"Superstitious beliefs"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: It is bad luck to be superstitious (Andrew W. Mathis).\nLiterature review: Anthropomorphism\nTheoretical background: Experiential consumption\nHypotheses: Known to enhance product evaluation when product characteristics correspond with human schematics, anthropomorphism influences consumers' beliefs and activities (Aggarwal and McGill, 2007; Payne et al., 2013). For example, animal symbol-based anthropomorphism activates cultural schema (Lloyd and Woodside, 2013), which partly explains sports fans' zeal for their favorite team's mascot. People anthropomorphize when trying to control uncertain outcomes through superstition (Waytz et al., 2010). As such, anthropomorphic beliefs should heighten extrinsic superstitious beliefs. Hence:\nStudy No. 1 methodology: Korean respondents: Measurement items\nData collection: Developed in English, the questionnaire was translated into Korean. To ensure cross-cultural equivalence, one bilingual expert translated it and then another expert back-translated it (Berry, 1980; Cheung and Rensvold, 1999). Respondents were college students enrolled at a mid-sized university in Seoul, South Korea. They received minimal course extra credit for participation. After soliciting the cooperation of two instructors, appointments were made to collect data from their students during normally scheduled classes. Students were assured response anonymity.\nResults: Factor structure\nStudy No. 2 methodology: US respondents: To evaluate determinants and outcomes of extrinsic superstitious beliefs cross-culturally, the same hypotheses and 24 items evaluated in Study No. 1 were retested with US respondents. Using minimal course extra credit as inducement, students enrolled in several marketing courses at a southwestern US university completed the questionnaire during a regularly scheduled class session. They were assured response anonymity.\nResults: A measurement model was estimated with LISREL 8.72. Aside from BehSUPER (0.434), the AVE for each scale exceeds 0.50 (i.e. ANTHRO=0.636, HAPPY=0.847, ESTEEM=0.605, ExtSUPER=0.683, HORO=0.793, and RELIG=0.710). Other than ExtSUPER-BehSUPER (ph2=0.756), AVE for each scale is greater than the squared correlations between each scale and the other scales. Measurement model estimation produced the following goodness-of-fit statistics: kh2(329df)=690.12 (p=0.00), CFI=0.95, NNFI=0.95, GFI=0.81, RMSEA=0.071, and SRMR=0.061. Coefficient a for each scale ranges from 0.747 to 0.931.\nDiscussion: To augment the experience, excitement, and community-building concomitant with daily living, consumers readily espouse superstitions likes fetishes (Fernandez and Lastovicka, 2011), supernatural explanations (Simmons and Schindler, 2003), and lucky charms (Kramer and Block, 2008). Despite the relational, social, and fiscal importance of personal superstitions, methodological inquiry examining the effect of superstitious beliefs on consumers' behavior remains incomplete. 
To enhance this understanding, experiential consumption theory (Holbrook and Hirschman, 1982) and magical thinking (Vyse, 2000; Zusne and Jones, 1989) were used to develop an explanatory model of superstitious belief determinants and consequences - with anthropomorphism, self-esteem, and intrinsic religiosity as precursors, and horoscope importance and behavioral superstitious beliefs as consequences of extrinsic superstitious beliefs.\nManagerial implications: Across both samples, anthropomorphism and horoscope importance correlate positively with extrinsic superstitious beliefs. Thus, marketers could enhance favorable product/brand attitudes: first, by encouraging consumers to treat commercial objects as animate and capable of affecting outcomes (e.g. golfers nicknaming their \"lucky\" putter; car owners sweet-talking their underperforming car); and second, by depicting horoscope followers favorably in promotional efforts.\nLimitations and future research directions: This research is not without limitations. Although cross-cultural data were used to explain antecedents and consequences of extrinsic superstitious beliefs, greater generalizability of the findings requires data from non-students and other cultures (e.g. Europe, South America) (Winer, 1999). Furthermore, the measurement scales used may be less valid within some superstition-related contexts, which could affect measurement properties and interrelationships. In addition, findings should be interpreted cautiously, as between-sample factor loadings are not invariant, suggesting that South Korean and US respondents differed in their conceptualization of the measurement items.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to advance the understanding of antecedents and consequences of superstitious beliefs.\nDesign/methodology/approach: From survey data drawn from 206 South Korean and 218 US respondents, structural equation modeling is used to test the posited hypotheses.\nFindings: To extrinsic superstitious beliefs, both the South Korean and US models support the subjective happiness through self-esteem path and the anthropomorphism path; from these beliefs, both models support the horoscope importance path and the behavioral superstitious beliefs path. Only the US model supports the path from self-esteem to extrinsic superstitious beliefs, and only the South Korean model supports the path from intrinsic religiosity to extrinsic superstitious beliefs.\nResearch limitations/implications: South Korean and US student data may limit generalizability. As effect sizes in this context are established, researchers have a benchmark for future quantitative superstition research.\nPractical implications: By further understanding antecedents and consequences of superstitious beliefs, marketers are in a better position to appeal to targeted customers. Anthropomorphism and intrinsic religiosity, not fully studied by marketing scholars, show promise as segmentation variables related to consumers' attitudes and behaviors.\nSocial implications: To avoid unethical practice, marketers must limit themselves to innocuous superstition cues.\nOriginality/value: Leaning on experiential consumption theory and the \"magical thinking\" literature, this study augments the superstition literature by exploring carefully selected yet under-researched determinants and consequences of superstitious beliefs across eastern and western consumer groups.\n",
"role": "assistant"
}
] |
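The measurement-model checks in the record above rely on average variance extracted (AVE): convergent validity requires AVE > 0.50, and discriminant validity requires each construct's AVE to exceed its squared correlation with every other construct (the Fornell-Larcker criterion). A minimal sketch with hypothetical loadings follows; none of the numbers below come from the study.

```python
# Minimal sketch of the AVE and Fornell-Larcker checks referenced above.
# The item loadings and inter-construct correlation are hypothetical.
import numpy as np

loadings = np.array([0.82, 0.79, 0.85, 0.78])  # standardized item loadings
ave = np.mean(loadings ** 2)                   # average variance extracted

phi = 0.60                                     # correlation with another construct
print(f"AVE = {ave:.3f}")                      # convergent validity if AVE > 0.50
print(f"discriminant validity: {ave > phi ** 2}")
```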
Developing supply chain security design of logistics service providers: An analytical network process-quality function deployment approach
|
[
"Supply chain security",
"Analytical network process (ANP)",
"Logistics service provider",
"Quality function deployment (QFD)",
"Security design",
"Security risk",
"Third party logistics (3PL)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Supply chain security has been recognized as an important part of managing business risks and also has received increased attention in the academic literature (e.g. Autry and Bobbitt, 2008; Blackhurst et al., 2008; Manuj and Mentzer, 2008a, b; Martens et al., 2011; Reade, 2009; Wagner and Bode, 2008; Whipple et al., 2009; Williams et al., 2008, 2009a, b; Sheu et al., 2006; Voss et al., 2009a, b; Voss, 2013). Security risks can occur at any point of the supply network and the failure of any elements among the supply network would cause the whole network to fail (Rice and Caniato, 2003). As an important actor of supply chains, logistic service providers (LSPs) contribute to supply chain security. LSPs play an important role in improving supply chain performance through collaboration and integration of logistics management activities (Panayides and So, 2005). LSP serves in the supply chain process that brings products and services to end customers. To remain competitive and deliver value to customers, companies increasingly focus on core competencies and outsource functions that can be performed more efficiently by third-parties, for example, logistics functions (Christopher, 1998, 2005). An average of 42 percent of total logistics expenditures were spent on outsourcing in 2011 (Langley and Capgemini, 2011). Logistics is such an integral part of the supply chain that any security impact generated cannot be underestimated. Therefore, LSPs are key players in supply chains and directly affect supply chain security performance. However, there is limited literature specifically about LSP's security issues.\nThe ANP-QFD approach: QFD and ANP for LSPs\nApplication of the ANP-QFD approach in supply chain security design: The LSP company\nResults and discussion: First, the list of CRs and DRs in Table II is used in this case study. In step 2, the relative importance of the CRs is determined by asking the case company the following question: \"Which CR should be emphasized more in developing a LSP's supply chain security performance, and how much more?\" Bearing the assumption that the CRs are not inter-dependent, the relative importance weight W\nConclusions: With the increased attention on supply chain security as part of managing business risks, many LSPs are now subject to scrutiny by customers for their security performance. Hence, security considerations should be infused in LSPs' business operations and decisions. This study has addressed the applicability of ANP-QFD approach in the logistics and supply chain security context. Based on a three-stage process of desk research, content validation test, and in-depth case study, this paper formulated systematic metrics for LSPs to develop their security DRs to meet customer demands. As demonstrated in the case study, the ANP-QFD approach allows a systematic and structured framework for our case company to develop their DRs to meet customer demands. It evaluates the inner dependency of CRs and DRs, as well as comparisons between them, thereby allowing LSPs to know the effectiveness of DRs addressing CRs. The ANP-QFD approach is not limited to only LSPs, as it is widely applicable to other areas of supply chain functions.\n",
"role": "user"
},
{
"content": "Purpose: Supply chain security has been recognized as an important part of managing business risks. The purpose of this paper is to propose a methodology with systematic metrics for logistics service providers (LSPs) to develop their security design to meet customer demands.\nDesign/methodology/approach: The study is based on a three-stage process of desk research, content validation test, and in-depth case study. The study demonstrates an integrated analytical approach which combines analytical network process (ANP) with quality function deployment (QFD), and then applies the approach to an international LSP.\nFindings: A case study of an international LSP shows how the ANP-QFD approach can be deployed to understand customers' expectation for security and develop tactics and measures with the aim to achieve the desirable outcome for LSPs' security design.\nOriginality/value: The paper contributes to supply chain security literature and practice. This study is among the research taking the first step to use ANP-QFD to translate the customer requirements (CRs) for supply chain security into systematic metrics for LSPs to develop their security design. Results from such research into the development of security performance can benefit LSPs in terms of increasing the effectiveness and improving the customer orientation of security efforts. By extension, other firms can enhance their security design by referring to the case study and the integrated analytical method. The flexibility of this ANP-QFD approach offers leeway for firms to change the CRs and design requirements based on their unique circumstances.\n",
"role": "assistant"
}
] |
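In the ANP step described in the record above, the relative-importance weights of the customer requirements (CRs) are derived from pairwise comparison judgments. Below is a minimal sketch of the standard principal-eigenvector calculation on a hypothetical three-CR Saaty-scale matrix; the case company's actual judgments are not given in the record, so the matrix is an illustration only.

```python
# Minimal sketch: priority weights from a pairwise comparison matrix via the
# principal eigenvector (standard in ANP/AHP). The matrix is hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # CR1 compared with CR1, CR2, CR3 (Saaty 1-9 scale)
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()   # normalized relative-importance weights
print(np.round(weights, 3))             # approx. [0.648 0.23  0.122]
```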
Developing trust through stewardship: Implications for intellectual capital, integrated reporting, and the EU Directive 2014/95/EU
|
[
"Integrated reporting",
"Intellectual capital",
"Agency theory",
"Corporate disclosure",
"Stewardship theory",
"EU Directive 2014/94/EU"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: We live in a world of distrust. It is a sad indictment on our modern society that, in most countries, society distrusts the very institutions that are supposed to be its pillars (Ries et al., 2018). Regrettably, there is a loss of trust in business because people do not know which companies and brands are honest or acting in their best interests. The recent Volkswagen \"defeat device\" scandal is a perfect example of how the trust in a company and brand was destroyed overnight (Hotten, 2015). In the defeat device scandal, Volkswagen sold investors and consumers a \"lemon\" (Akerlof, 1970). Governments responded with instruments demanding mandatory disclosure and greater transparency, to which companies and other self-interested stakeholders countered with voluntary disclosure instruments of their own making.\n2. IC, integrated reporting and the EU Directive: changing corporate reporting and managerial practices: 2.1 IC and the EU Directive\n3. Lessons learned from IC reporting to inform future practice: As Dumay et al. (2010) outline in their discussion of the GRI, it is important to reflect on past practices and theories to understand future practices. For example, our model includes the core elements of IC, even though IC reporting has ceased to exist in any meaningful way. Most companies that experimented with its practice only did so for a short time but could not see the value of disclosing what they saw as essentially sensitive information (Schaper et al., 2017). However, this does not mean that managing the six capitals and taking the outcomes desired by the EU Directive into consideration is a not worthwhile undertaking. Mouritsen et al. (2001) argue that the main benefits of measuring, visualising, and managing IC are internal. Thus, by understanding what works and doesn't, and by recognising past mistakes from IC reporting, we can learn how to better comply with the EU Directive.\n4. Stewardship: the way forward for IC and corporate disclosure: The failure of IC reporting to make up for the deficiencies of accounting must be accepted as a fact. Similarly, we need to learn from the failure to understand why voluntarily disclosing information is also failing to change behaviour. Additionally, we can see from Dumay's (2016) revised IC definition that IC has fundamentally changed from a new way of understanding how to create economic value, to understanding how it contributes to society and the environment.\n5. Conclusion: In this paper, we reflect on the current and compelling issues permeating the international landscape of corporate reporting and disclosure. The EU Directive is a regulatory action that reflects the societal and political demands for more disclosure by business. While the European regulation mandates that large undertakings report on non-financial information, little is still known about how companies can adopt and pursue the final purpose of the EU Directive. The EU Directive adopts a \"smart regulatory\" approach, which combines mandatory disclosure requirements with voluntary aspects that translates into practice (Johansen, 2016). This paper contributes to developing practical and theoretical recommendations to put the EU Directive into action and achieve its goals of rebuilding lost customer and investor trust in business and institutions.\n",
"role": "user"
},
{
"content": "Purpose: This paper examines the gap between reporting and managers' behaviour to challenge the current theoretical underpinnings of intellectual capital (IC) disclosure practice and research. The authors explore how the key features from IC and integrated reporting can be combined to develop an extended model for companies to comply with EU Directive 2014/95/EU and increase trust in corporate disclosures and reports.\nDesign/methodology/approach: This essay relies on academic literature and examples from practice to critique the theories that explain corporate disclosure and reporting but do not change management behaviour. Based on this critique, the authors argue for a change in the fundamental theories of stewardship to frame a new concept for corporate disclosure incorporating using a multi-capitals framework.\nFindings: We argue that, while the inconsistency between organisations' reporting and behaviour persists, increasing, renewing or extending the information disclosed is not enough to instil trust in corporations. Stewardship over a company's resources is necessary for increasing trust. The unanticipated consequences of dishonest behaviour by managers and shareholders compels a new application of stewardship theory that works as an overarching guide for managerial behaviour and disclosure. Emanating from this new model is a realisation that managers must abandon agency theory in practice, and specifically the bonus contract.\nResearch limitations/implications: We call for future empirical research to explore the role of stewardship theory within the dynamics of corporate disclosure using the approach. The research implications of those studies should incorporate the potential impacts on management behaviours within a stewardship framework and how those actions, and their outcomes, are disclosed for rebuilding public trust in business.\nPractical implications: The implications for integrated reporting and reports complying with the new EU Directive are profound. Both instruments rely on agency theory to coax managers into reducing information asymmetry by disclosing more. However, agency theory only re-affirms the power managers have over corporate information. It does not change their behaviour, nor to act in the interest of all stakeholders as the stewards of an organisation's resources.\nSocial implications: We advocate that, in business education, greater emphasis is needed on how stewardship has a more positive impact on management behaviour than agency, legitimacy and stakeholder theories.\nOriginality/value: We reflect on the current and compelling issues permeating the international landscape of corporate reporting and disclosure and explain why current theories which explain corporate disclosures do not change behaviour or engender trust in business and offer an alternative disclosure model based on stewardship theory.\n",
"role": "assistant"
}
] |
The Brazilian population consumes larger serving sizes than those informed on labels
|
[
"Food consumption",
"Food labelling",
"Nutritional information",
"Trans fatty acids",
"Ultra-processed food"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: A nutritional transition, characterized by changing in life styles and eating patterns, has been observed in the last decades, in several countries as well as Brazil. Among eating patterns changes, an increase in the consumption of animal, processed, and ultra-processed food products has been replacing consumption of grains and cereals (World Health Organization, 2004; Popkin, 2006; Popkin et al., 2012; Malik et al., 2013).\nMethods: Data collection\nResults: Information was collected from the labels of 1,071 processed and ultra-processed food products. It was observed that 15 out of 17 groups presented recommended serving size in grams or millilitres (median of 73 foods per group; amplitude from 7 to 152 foods) and two groups presented recommended serving size in calories (up to 500 kcal per serving).\nDiscussion: In the present study, the average serving size intake by the Brazilian population was larger than serving sizes declared on labels among products with trans fat within their ingredient lists that were false negatives, particularly among foods with fractioned household measurements.\nConclusions: This study has been performed with the use of a national database on food consumption, as well as the information in 1,071 processed and ultra-processed food labels marketed in Brazil. Results obtained in this study show that the Brazilian population is consuming serving sizes larger than recommended and larger than what is declared on labels of processed and ultra-processed foods. In addition, such consumption has been found to be larger in foods that may contain trans fat. Furthermore, the presence of trans fat may be not clearly and visibly shown on food labels. Among factors which may be contributing to underestimate trans fat, increasing its intake by consumers, the following can be highlighted: the variability in declared serving sizes and the presentation of serving sizes with non-applicable fractioning. Considering the risks that consuming this type of lipids can cause on human health, this study indicates that declared serving sizes should be standardized, as well as clearer and with more feasible household measurements. In this sense, nutritional information calculated per 100 g of product can be a proper strategy to inform the real content of trans fat on labels. The need to review Brazilian nutrition labelling laws is evident as regards to serving size variability, household measure fractioning, and declaring trans fat within nutritional information. It is expected that the aforementioned suggestions make consumer understanding of nutrition labelling easier, as well as the use of such information to make food choices and determine intake. Considering that the intake of serving sizes has been larger than what is stated on labels, it is important for nutritionists, dieticians, and healthcare professionals to educate patients with regards to controlling serving sizes consumed. Public health campaigns involving the use of mass media should also be helpful to increase knowledge on ways to control trans fat consumption using nutrition labelling. This study is also proven to be important and novel, contributing with information as to the manner in which nutrition labelling has been presented to Brazilian consumers, discussing its possible consequences for food choices, intake, and the guarantee of consumer rights.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to relate average serving size intake by the Brazilian population and declared serving size, the presence of trans fat and household measure fractioning declared on labels of processed, and ultra-processed food products.\nDesign/methodology/approach: Cross-sectional study that analyzed the food labelling of all processed and ultra-processed food products sold in a supermarket in southern Brazil.\nFindings: A total of 1,071 processed and ultra-processed food products were analyzed. In 88 per cent of food groups, the average serving size consumed was larger than what was declared on labels. Consumed serving size was up to 9.2 times larger than the declared ones in food products with trans fat among their ingredients list and in false negatives and up to 9.9 times larger in foods with fractioned household measure (p<0.001). The Brazilian population consumes, on average, larger serving sizes than those declared on labels, which may represent a significant intake of trans fats without the consumers' noticing.\nOriginality/value: This study has been performed with the use of a national database on food consumption, as well as the information from a large number of processed and ultra-processed food labels marketed in Brazil. This study is also proven to be important and novel, contributing with information as to the manner in which nutrition labelling has been presented to Brazilian consumers, discussing its possible consequences for food choices, intake, and the guarantee of consumer rights.\n",
"role": "assistant"
}
] |
Bottom-up approach for productivity measurement in large public organizations
|
[
"Public sector organizations",
"Performance measures",
"Bottom up decision making",
"Organizational performance",
"Finland"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: There is an increasing need to make the most of the scarce resources in the public sector. Hence, productivity of public services has to be constantly sustained and improved. Productivity (the ratio between output and input) is often regarded as an essential part of organizational performance (Hannula, 1999; Tangen, 2005) and as a prerequisite for the wealth and competitiveness of nations (Singh et al., 2000; Sink, 1983). It is therefore necessary to gather productivity information at the levels of nations, organizations, departments and units. From managerial perspective, productivity measures are valuable in monitoring the progress of productivity improvement and identifying targets for productivity improvement.Performance measurement of public services is a topic that has been studied rather a lot during the last decade. There is both theoretical and empirical research on design (Kaplan, 2001; Wisniewski and Olafsson, 2004), implementation (Cavalluzzo and Ittner, 2004; Collier, 2006) and use of measurement systems (Chan, 2004; Greiling, 2005) as well as various frameworks of organizational performance (Boyne, 2003; Talbot, 1999). These studies comprehensively examine various dimensions of public service performance without specific emphasis on productivity. Productivity measures may act as a part of performance measurement systems. However, it has noted that performance measurement systems do not often specifically address issues related to productivity (Hodginson, 1999). Another option is to design separate measurement systems specifically focusing on productivity. The advantage in a separate system can be that productivity is paid specific attention and more effort is invested in gathering proper information on such an important measurement object.Productivity measurement of large public organizations is a challenging task due to several reasons. First, there are many departments and units providing various services. The same methods may not be applicable in all operations (Nasi et al., 2001; Rosen, 1993). Second, there are different information needs at various organizational levels (Wisniewski and Stewart, 2004). Third, most public organizations provide services - a traditionally challenging context for productivity measurement (Gronroos and Ojasalo, 2004; Sherwood, 1994).Despite the importance of productivity, issues related to the measurement of productivity have still not received adequate attention (Singh et al., 2000). Singh et al. (2000) reviewed 23 studies on productivity measurement, the majority of which were carried out in the manufacturing context with the approach of statistical analysis focusing on macro-level (e.g. industry) examination. Many of the existing studies on the topic of public service productivity measurement are theoretical (Hodginson, 1999; Larsson, 1995) or examine the issue at the macro-level (Lorenzo and Sanchez, 2007; Phelps, 2009; Simpson, 2009). They support political decision-making and top-level analysis but are not very useful for managerial purposes at the operative level. The need for more detailed and accurate basic information has also been voiced in the discussion on macro-level productivity (Ministry of Finance, 2007).This study tries to identify means to overcome some of the current problems in public service productivity measurement. The objective of the study is to develop a new method for measuring productivity in large public organizations. 
The basic idea of the method is to aggregate measurement information from component measures used at the operative level (e.g. units providing single services). With this kind of method it is possible to take better account of the specific characteristics of different services. At the same time, information for evaluating the productivity improvement of larger organizational entities is provided. This paper proceeds as follows: First, the current knowledge of the issue is examined by reviewing the literature related to productivity measurement challenges and the potential measurement approaches to solving the challenges. Second, a case study is carried out in the City of Helsinki, Finland. The aim is to design, test and evaluate a productivity measurement method for large public organizations. Finally, some concluding remarks and suggestions for further research are provided.\n2. Service productivity measurement: 2.1 Challenges\n3. Designing the new measurement method: 3.1 Data and methodology\n4. Testing and evaluating the new method: 4.1 Testing\n5. Conclusions: There is an abundance of research on productivity measurement. However, productivity measurement of public services has proven to be a big challenge. Many of the current measurement methods do not as such satisfy the needs of public managers. This study presented a novel measurement method which aims to solve at least some of the problems. The contribution of the method is two-fold: first, it enables the gathering of more detailed measurement information from the operative levels of large organizations; second, it makes it possible to use different measurement methods in different service-providing departments and units. This study demonstrated that the new method provides similar results compared to a more conventional measurement method with the same kind of basic data. The new method was evaluated with four measurement criteria. It may be concluded that while the method satisfies three criteria at least as well as a conventional method, it seems to be more relevant. The method is a step forward in developing more sophisticated measurement systems in the public sector. It emphasizes the role of productivity measures for managerial purposes, not solely as monitoring or statistical tools. However, the most laborious task is to design appropriate productivity measures for individual services. Hence, a great effort is needed in order to implement the method properly. The quality of the measurement results provided by the method is ultimately dependent on the component measures used. If the validity of the component measures is poor, the method will provide equally poor information. The experiences of this study speak for the use of quality-related weights in the outputs of component measures. Some of the advantages of the method may materialize when the productivity matrix is applied as a component measure of productivity. In this way it is possible to capture productivity-related aspects broadly. The possibility of using the method more generally in performance measurement should also be considered. It could be used for purposes of aggregating any measurement information gathered at the bottom levels of organizations, in order to integrate it with decision-making at upper organizational levels. The design work of this study was closely connected to the case organization, which may be regarded as rather advanced in measurement and accounting practices. 
The application of the method makes sense in the context of large organizations with functional structures. In addition, detailed cost information is needed in order to apply the method properly. However, since the method is quite flexible, and because there are many similar large organizations in the public sector, the method may be applied quite widely. More research on the topic is needed. This study tested the method only to a limited extent. It is necessary to gain more understanding by applying the method in different organizations and over longer time periods. It could also be compared to some other measurement systems in use. The soundness of the new method will be revealed only after years of experience of using it. Since the method provides detailed information on productivity, its true benefits may only be realized if the measurement results are properly analyzed. There still remain many productivity measurement challenges in public organizations and, likewise, a need to design better productivity measures for individual services. For example, knowledge-intensive expert services and administration are very challenging contexts for productivity measurement.\n",
"role": "user"
},
{
"content": "Purpose: - This study aims to identify means to overcome some of the current problems in public service productivity measurement. The objective is to develop a new method for measuring productivity in large public organizations.\nDesign/methodology/approach: - The research is carried out as a constructive case study in the City of Helsinki, Finland. Interviews are used in order to identify practical requirements for measurement. The measurement data from around 200 units is used in testing the method.\nFindings: - The new method applies an approach proposed in the literature. It aggregates measurement information from component measures used at the operative level. Based on test calculations and evaluation it is argued that the new method is managerially more relevant than a conventional method.\nResearch limitations/implications: - This study tested the method only to a limited extent. It is necessary to gain more understanding by applying the method in different organizations, and by using longer time periods.\nPractical implications: - Current productivity measures of public services have been criticized for providing information that lacks managerial relevance. The method described in this study aims to improve relevance in two ways. It enables the gathering of more detailed measurement information from the operative levels of large organizations. In addition, it facilitates the use of different measurement methods in different service providing departments and units.\nOriginality/value: - Many of the existing studies examine productivity measurement at the macro-level. This study presents a measurement method that is a step forward in developing more sophisticated measurement systems in public organizations. The study also describes and highlights the role of component productivity measurement at the operative level.\n",
"role": "assistant"
}
] |
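The record above describes its aggregation method only in prose, so a small sketch may help make the mechanics concrete. The following is a minimal illustration, not the paper's actual algorithm: it assumes each operative unit reports a productivity index change from its own component measure together with its cost base, and that higher-level figures are formed as cost-share-weighted averages. All names (`UnitMeasure`, `aggregate_productivity`) and the example figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UnitMeasure:
    """Component measure reported by one operative unit.

    Hypothetical structure: each unit may use its own local measurement
    method, as long as it reports a comparable productivity index change.
    """
    name: str
    productivity_change: float  # e.g. +0.02 means a 2% productivity gain
    cost: float                 # unit's cost base, used as the weight

def aggregate_productivity(units: list[UnitMeasure]) -> float:
    """Aggregate unit-level productivity changes into one higher-level
    figure, weighting each component measure by its share of total cost."""
    total_cost = sum(u.cost for u in units)
    if total_cost <= 0:
        raise ValueError("total cost must be positive")
    return sum(u.productivity_change * (u.cost / total_cost) for u in units)

# Example: three units of one department, each measured with its own method.
units = [
    UnitMeasure("day care",       0.020, 4_000_000),
    UnitMeasure("home services", -0.010, 2_500_000),
    UnitMeasure("health centers", 0.015, 3_500_000),
]
print(f"Department-level productivity change: {aggregate_productivity(units):+.3%}")
```

Because the output of `aggregate_productivity` has the same shape as its inputs, the same weighting step can be applied again at the next organizational level, which is what would make this kind of approach usable in large, functionally structured organizations.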
Gender effects on buyer perceptions of male and female sales representatives in China
|
[
"China",
"Customer loyalty",
"Gender",
"Queen bee phenomenon",
"Buyer perceptions",
"Sales representatives"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Developing strong business associations through the application of \"relationship marketing\" has become a central objective for many companies (Day, 2000). Relationship marketing can be defined as using marketing-focused activities to develop and maintain long-term loyal connections with other businesses (Morgan and Hunt, 1994). To develop these strong and loyal relationships between buyers and sellers, it is essential for both parties to trust and respect each other (Rauyruen and Miller, 2007). Consequently, it is important to understand the variables that may contribute to or detract from, creating long-lasting partnerships. One of those variables is gender.\nBackground: Past research has explored the differences in how men and women process information and make decisions. In their study, McGuiness and Pribram (1979) found that women are more sensitive than men to most stimuli and suggest this may affect how information is processed. Similarly, other research has concluded that women use a high-context communication style and respond to non-verbal stimuli interpreting it with associative imagery and providing more elaborate descriptions than men (Darley and Smith, 1995; Meyers-Levy, 1988). One study found that men tend to use different hemispheres of their brain for specific tasks (left-side verbal and right-side visual) while women often use both sides of their brain for information processing (Meyers-Levy and Maheswaran, 1991). In addition, women have been found to exhibit greater sensitivity to information relevant to others when making judgments (Meyers-Levy and Sternthal, 1991). In another study, Watts et al. (1982) suggested that women tend to consider others well-being to a greater degree than their male counterparts. Further, women are reported to have higher activations of mirror neurons than men. This enhanced ability to process mirror neurons improves an individual's ability to detect subtle facial expressions and help them to understand another's intention and state of mind. Such understanding leads to more empathy (Enticott et al., 2008) and having greater empathy positively impacts trust (Aggarwal et al., 2005). This more empathetic viewpoint of women has been corroborated in other research (Meyers-Levy, 1988; Areni and Kiecker, 1993). In summary, there seems to be some indication that men and women process information in different ways (Lewis, 2013), specifically, women tend to be more context-dependent in their decision-making than do men (Miller and Ubeda, 2012).\nHypotheses: A number of studies have investigated the importance of the perceived credibility of salespeople (Landeros et al., 1995; Shepherd, 1999) and the firms that they represent (Goldsmith et al., 2000; Kantsperger and Kunz, 2010; Ou et al., 2012; Shim et al., 2007; Solomonson, 2012; Sun and Lin, 2010). Most researchers have conceptualized credibility as having two major components. 
One is an affect-based dimension represented by perceptions of trust; the second, a cognitive dimension, is characterized by the perceived expertise of the source (Dholakia and Sternthal, 1977; Hovland et al., 1953; Newell and Goldsmith, 2001; Rarick, 1963; Sharma and Sheth, 1997).\nMethodology: Questionnaire development\nDiscussion and implications: This study, conducted in China to investigate whether gender plays a role in buyers' evaluations of the attributes of sales representatives, provides a number of intriguing results that should be of interest to both researchers and practitioners. The most significant finding is the surprising bias shown by women buyers against female sales representatives. In addition, the research also indicates that though there are still significant cultural differences between China and the USA, factors such as expertise and trust similarly seem to drive loyalty in business-to-business relationships. Below, we discuss these outcomes and their implications in more detail.\nLimitations: There are a number of limitations within the current study. First, this research uses a quantitative methodology in both the collection and analysis of the data. Thus, future studies may want to use a qualitative data set to gain a more in-depth understanding of the business-to-business relationships between men and women in the workplace. Also, as the study concentrated on a relatively small number of business professionals from only one area in China, researchers should consider broadening the geographic domain from which respondents are sampled, to help improve the generalizability of the results. Specifically, it may be that business professionals residing in more developed east coast provinces with large cities have different perspectives from those living in more rural and less modern communities. Additionally, future research should be conducted with purchasing agents in a variety of other countries to empirically test the proposed model (using the same measures). This will help researchers to better understand the differences and similarities of the roles that expertise, trust and loyalty play in business-to-business relationships in cultures around the world. In addition, though the instruments show strong reliability and validity, this is the first study for which they have been adapted for use in China; thus, future research should be undertaken using these scales to firmly establish them as acceptable measures for similar investigations of business relationships in this and other geographic areas. Finally, additional studies should include an investigation of whether the queen bee phenomenon for Chinese women applies to evaluations of women in situations other than those within the work environment.\nConclusion: It has been argued that gender differences in perceptions of buyer-seller relationships are growing narrower (Pullins et al., 2004). While this seems to be true in the USA, it has not (until now) been empirically tested in China. Somewhat surprisingly, the gender differences we did uncover are not from male buyers in their assessments, but from female buyers in their evaluation of women sales representatives. This result may be explained by a condition known as the queen bee phenomenon, in which women in positions of power are more critical of other women in their sphere of influence. 
Understanding how these and other gender perceptions impact buyer-seller relationships is an interesting and potentially fruitful area of future research and may have significant implications for business practice. By gaining answers to these and other questions that relate to facilitating relationships across borders, organizations doing business in China will be more likely to successfully navigate cultural hurdles on the road to long-term successful business partnerships.\n",
"role": "user"
},
{
"content": "Purpose: Women in many countries are breaking through the gender barrier and are working in positions where they have a major impact on the buying and selling activities in business-to-business relationships. A number of studies have investigated the role gender plays in driving perceptions of sales representatives in the USA, however, little research has been undertaken on this important topic in China, one of the largest and most influential countries. Consequently, the purpose of this study is to examine whether the gender of both the buyer and seller, affects perceptions of expertise, trust and loyalty in business relationships.\nDesign/methodology/approach: A survey instrument was developed, tested and used on individuals with purchasing responsibility in China. Confirmatory factor analysis was used in the pre-test and the final study data to develop and refine measurement instruments, assessing construct validity, identifying method effects and evaluating factor variance across groups. Independent t-tests were used to compare male and female buyers on their evaluation of sales reps.\nFindings: The results indicate that the gender of the sales person does not seem to be a consideration for male buyers in evaluating reps in any of the variables tested. However, female buyers consistently give less favorable evaluations to female sales people than male sales representatives. The possible implications of these findings are discussed.\nResearch limitations/implications: First, this research uses a quantitative methodology in both the collection and analysis of the data. Thus, future studies may want to use a qualitative data set to gain a more in-depth understanding of the business-to-business relationships between men and women in the workplace. Also, as the study was concentrated on a relatively small number of business professionals from only one area in China, subsequently researchers should consider increasing the geographic domain where respondents are sampled, to help improve the generalizability of the results.\nPractical implications: The gender bias by female buyers found in this study has several important implications for businesses in China. Specifically, the suggests that female buyers may carry with them a \"collective negative bias\" against other women, in this case, female sales reps. The solution to this problem is not to avoid assigning female sales reps to female buyers, rather, it is to change the negative collective social-esteem identity preconceptions. Organizations can do this through training by making women aware of their own predisposition to unfairly evaluate women that they interact with in the workplace.\nOriginality/value: It has been argued that gender plays a small role in perceptions of sales representatives in buyer-seller relationships. While this seems to be true in the USA, it has not (until now) been empirically tested in China. Somewhat surprisingly, the gender differences we did uncover are not from male buyers in their assessments, but from female buyers in their evaluation of women sales representatives. This result provides some interesting insight into Chinese business relationships and how some women in positions of power are more critical of other women within their sphere of influence.\n",
"role": "assistant"
}
] |
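The design/methodology section of the abstract above mentions independent t-tests comparing male and female buyers. As a rough illustration of that comparison step only, the sketch below runs Welch's independent-samples t-test on placeholder data; the group means, sample sizes and the choice of `equal_var=False` are assumptions for the example, not values or choices taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder ratings on a 7-point scale; the study's real data is not shown here.
# Each array holds one buyer group's evaluations of female sales reps on "trust".
male_buyers = rng.normal(loc=5.1, scale=1.0, size=120).clip(1, 7)
female_buyers = rng.normal(loc=4.6, scale=1.0, size=110).clip(1, 7)

# Independent-samples t-test; Welch's version (equal_var=False) does not
# assume equal group variances, a common default when group sizes differ.
t_stat, p_value = stats.ttest_ind(male_buyers, female_buyers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

In a study like this, one such test would presumably be run per evaluation variable (e.g. expertise, trust, loyalty), comparing the two buyer groups' ratings of the same type of sales representative.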
The CEO branding mix
|
[
"CEO appearance",
"CEO branding",
"CEO image",
"CEO reputation",
"Company image",
"Corporate reputation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: What do Mark Zuckerberg, Oprah Winfrey, Richard Branson and Jeff Bezos have in common? They are not only CEOs and living icons of the brands they created but also brands themselves. There are many reasons why CEOs today get more attention than in the past. When the media are looking for a business story, they often turn to CEOs or founders. Some companies are larger than countries in terms of number of citizens or employees or in revenue. For example, Walmart has about 2.2 million employees, comparable to the populations of Qatar, Namibia or Slovenia, and its revenues are larger than the gross domestic product (GDP) of Singapore, Denmark or South Africa. Like movie stars who signal and generate expectations about the success of a forthcoming movie, CEOs herald company performance expectations to the market and specifically to shareholders. Their personas and personality impact their image and reputation, which are tied to their organizations' reputation.\nThe four P's of the CEO branding mix: CEO branding is complex because it considers multiple attributes of the CEO; it affects many aspects of the company and is important for various stakeholders. This is probably one reason why CEO branding is discussed in disciplines such as finance, economics, accounting, management, psychology and strategy. When I studied the many studies across all these disciplines, I identified four major components of the CEO branding mix: performance, personality, prestige (reputation) and persona.\nCEO prestige: The impact of CEO prestige or CEO reputation on companies is mixed. There are two schools of thought on this. One, the efficient contracting hypothesis, states that reputable CEOs are more likely to take actions that are in the best interest of their companies. On the other hand, the rent extraction hypothesis shows how reputable CEOs over-emphasize their personal career enhancement and take actions that may be in their own best interests rather than that of their companies.\nCEO personality: The normative stakeholder theory and leadership trait theory discuss the importance and impact of CEO personality on companies. Studies show mixed results.\nCEO persona: People form judgments about others based not only on objective physical characteristics such as gender and age but also on more subjective characteristics such as education, social status, appearance and facial expressions. All these attributes of the CEO can impact the CEO image and, subsequently, company reputation and performance.\nCEO performance: CEO performance and company performance can coincide if you look only at financial-performance indicators such as profits, net operating cash flow, ROI or market-performance indicators such as market share or stock price. However, CEO performance also includes other factors, as Kaufman's (2008)\nConclusion: Analysts and investors see the CEO as the embodiment of the company, especially if the CEO is also the founder, while consumers can see the CEO as an authentic brand ambassador. Whether the CEO or the company wants this to happen or not, the CEO brand has become part of the company brand. Like all aspects of the company's reputation, it needs to be built, measured and managed.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to outline and discuss the main elements of the chief executive officer (CEO) that affect financial and non-financial aspects of companies. CEO reputation and corporate reputation and performance are intertwined concepts. It presents a conceptual framework, the \"4Ps of the CEO branding mix\", and shows how they individually and collectively influence company reputation and performance.\nDesign/methodology/approach: An extensive inter-disciplinary research was conducted. Four primary elements of the CEO (performance, personality, prestige and persona) were identified that positively or negatively impact companies.\nFindings: CEO reputation (prestige) can not only positively but also negatively impact companies and celerity CEO's are no different. There are certain personality traits, such as honesty and humility, which are mostly associated with positive company outcomes, while other personality traits, such as machiavellianism and narcissism, seems to negatively impact companies. Certain aspects related to the CEO as a persona, such as CEO tenure (experience) and education, and also CEO's physical appearance and facial expressions impact CEO's image and, subsequently, company reputation and performance. Finally, this paper shows that CEO performance is more than company financial and market performance and includes aspects such as strategy execution or CEO succession planning.\nPractical implications: The four 'P's of the CEO branding mix provide a useful framework for CEO's and the corporate communications department to developing a consistent and comprehensive CEO and corporate communication and branding strategy.\nOriginality/value: This paper contributes as it outlines the various aspects and elements which impact CEOs image and reputation. The four P's serve as a useful framework for CEO's as well as corporate communication departments to systematically measure and manage the CEO's image and reputation and, consequently, company reputation and performance. It forms the basis for developing a consistent and comprehensive CEO and company communication and branding strategy.\n",
"role": "assistant"
}
] |